Joint Effect, Joint Power
Algorithm as Power: It Demands a Cage

Sustainability

By Wind, Jointing.Media, in Shanghai, 2026-02-01

A sensationalist headline (the ‘bad money’) garners vast numbers of clicks and shares, while a rigorous, objective, in-depth report (the ‘good money’), which takes time to read, may be penalised by algorithms for ‘low completion rates’ and ‘short dwell times’. The phenomenon of ‘bad money driving out good’ on the internet is both pervasive and profound. This is not merely a case of ‘good content going unseen’, but a systemic consequence of an unchecked new power structure.

Algorithms are silent, yet they are legislating for billions. They determine what we see, what we believe, what we buy, and even whom we encounter. Constitutional scholars caution us: any concentrated and unchecked power inevitably becomes perverted, whether wielded by monarchs, governments, or code.

Sociologists offer deeper insight: this power is reshaping social structures—amplifying divisions, cementing biases, transforming public discourse into an emotional arena where reason, depth, and facts retreat before the algorithm’s ‘optimisation’ logic. The challenge we face is fundamentally political: how to construct a cage for the digital age’s Leviathan?

I. How Algorithms Distort Our World

The peril of algorithmic power lies in its covert operation and the massive scale at which it takes effect.

Social psychologists and communication scholars have revealed its mechanisms: algorithms systematically exploit human cognitive shortcuts through meticulously designed ‘reward functions’. These scholars found that anger spreads faster than reason, simplicity is more readily absorbed than complexity, and confirming preconceptions is more popular than challenging them. Thus, in maximising the core metric of ‘user engagement’, platforms inadvertently constructed a colossal production line for ‘bad content’.

Communication scholars further contend this has precipitated a ‘refeudalisation of the public sphere’. The spaces of public discourse once safeguarded by professional editors are now governed by algorithmic logic. This logic disregards truth in favour of virality; it does not distinguish good from bad, only optimises dwell time. Consequently, serious journalism is drowned out by gossip, in-depth analysis is supplanted by emotional venting, and an invisible ‘cognitive downgrade’ spreads throughout society.

Ethicists specialising in technology and algorithms delve into the inner workings of this technological black box, identifying the root cause in the extreme simplification of objectives. When complex societal values are compressed into a handful of quantifiable commercial metrics—click-through rates, retention rates, conversion rates—algorithms inevitably diverge from humanity’s long-term welfare. Like a giant programmed with flawed objectives, the greater its power, the deeper the distortions it creates.
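The ‘extreme simplification of objectives’ described above can be made concrete with a toy sketch. This is purely illustrative and assumes nothing about any real platform: the item names, predicted click rates, and accuracy scores are all invented for the example.

```python
# Illustrative sketch: when ranking collapses to a single engagement metric,
# sensational content wins regardless of its quality. All values are hypothetical.

items = [
    {"title": "Sensationalist headline", "predicted_clicks": 0.90, "accuracy": 0.30},
    {"title": "In-depth investigation",  "predicted_clicks": 0.40, "accuracy": 0.95},
]

def engagement_only(item):
    # The 'extreme simplification of objectives': only predicted clicks count;
    # accuracy plays no role in the ranking at all.
    return item["predicted_clicks"]

ranked = sorted(items, key=engagement_only, reverse=True)
# The sensationalist item ranks first despite its far lower accuracy.
print([i["title"] for i in ranked])
```

The point of the sketch is structural: as long as the objective function never reads the `accuracy` field, no amount of tuning will surface the better report.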

II. Why Confinement Within Bounds is an Inevitable Choice

In confronting such power, the voices of political philosophers must be heeded foremost. They remind us that the ancient adage – ‘Power tends to corrupt, and absolute power corrupts absolutely’ – remains no less pertinent in the digital age. Algorithmic platforms wield a composite authority that encompasses legislative (rule-making), executive (rule-enforcement), and judicial (dispute-resolution) functions. Yet this power emerged without democratic mandate, and its operation lacks fundamental transparency and accountability. Constraining it is therefore a modern imperative for safeguarding citizens’ freedoms and dignity.

Business strategists and platform economists arrive at the same conclusion from another angle. They argue that allowing ‘bad money to drive out good’ constitutes a short-sighted act akin to drinking poison to quench thirst. While substandard content may temporarily boost engagement metrics, it ultimately erodes the platform’s foundational trust, leading to user attrition, brand devaluation, and the withdrawal of societal licence. A healthy, credible, and sustainable digital ecosystem constitutes the true source of a platform’s long-term value. Thus, responsibly governing algorithmic power is not charity, but a prudent long-term investment.

III. Constructing a Multi-Layered ‘Cage of Power’

Translating principles into practice demands a sophisticated, multi-disciplinary social engineering effort. This requires not only specialised contributions from experts across fields but also deep synergy and subtle checks between their perspectives to navigate the complex tensions inherent in governance.

Tier One: The Cage of Values – Reshaping the Algorithmic Objective System

Technical and algorithmic ethicists serve as the chief engineers here. They advocate embedding ethical values directly into the system’s code. This entails shifting algorithmic optimisation from a singular focus on ‘maximising engagement’ to a comprehensive framework encompassing authenticity, diversity, fairness, and user wellbeing.

Specifically, recommendation systems could be mandated to incorporate ‘quality’ signals (such as source authority and content creation effort), reserving visibility for high-quality yet low-traffic content to dismantle the ‘traffic-first’ evaluation paradigm.
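The blended objective proposed above can be sketched as a weighted composite score. This is a hypothetical illustration, not any platform’s actual formula: the weights, the ‘quality’ sub-signals (source authority, creation effort), and the item values are all assumptions made for the example.

```python
# Hypothetical sketch of multi-objective ranking: engagement is blended with
# 'quality' signals so ranking no longer follows traffic alone.
# Weights and signal values are invented for illustration.

def composite_score(item, w_engagement=0.4, w_quality=0.6):
    # Quality combines the two example signals named in the text.
    quality = 0.5 * item["source_authority"] + 0.5 * item["creation_effort"]
    return w_engagement * item["predicted_engagement"] + w_quality * quality

items = [
    {"title": "Viral gossip", "predicted_engagement": 0.9,
     "source_authority": 0.2, "creation_effort": 0.1},
    {"title": "Investigative report", "predicted_engagement": 0.4,
     "source_authority": 0.9, "creation_effort": 0.9},
]

ranked = sorted(items, key=composite_score, reverse=True)
# With quality weighted in, the investigative report outranks the gossip item.
```

Under these example weights the high-quality, low-traffic item wins; shifting the weights back towards engagement reproduces the ‘traffic-first’ ordering, which is exactly the lever such a mandate would regulate.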

Tier Two: The Cage of Rules – Establishing Legal Boundaries and Platform Obligations

Platform governance and digital economy law experts are tasked with drafting the blueprint. They advocate for a dedicated Algorithmic Governance Act, whose core function is to provide mandatory legal safeguards for the ethical value framework. This legislation transforms abstract ethical principles into explicit prohibitions and liability clauses, such as:

Transparency Obligation: Requiring impact assessments of key algorithms with public summaries.

Non-Discrimination Obligation: Strictly prohibiting unreasonable algorithmic price discrimination (‘big-data price gouging’) and the profiling-based denial of opportunities.

Duty of Care: Platforms bearing reasonable liability for significant societal harm caused by their recommended content (e.g., mass rumour propagation).

User Rights Protection: Granting individuals the right to explanation and appeal regarding automated decisions.

Here, a crucial governance tension emerges: achieving a delicate equilibrium between the rigorous, universal regulation pursued by legal experts and the innovative dynamism and market flexibility prioritised by platform economists. Excellent legislation does not stifle innovation; rather, it delineates clear ‘kerbs’ to enable innovation to race on safe, fair tracks. This necessitates rules possessing a degree of foresight and adaptability.

Tier Three: The Cage of Oversight – Establishing Independent External Checks

Regulatory science and independent audit experts serve as crucial gatekeepers. They advocate cultivating an independent third-party algorithm audit industry to conduct compliance, fairness, and non-discrimination audits on mainstream platform algorithms, with audit reports made public.

Concurrently, a national-level regulatory body with technical expertise should be established to carry out supervision, investigation, and enforcement. Its operations must remain independent while being subject to public scrutiny. Such work ensures legal provisions do not become mere ‘dead letters’ but exert genuine binding force.

Tier Four: The Cage of Society – Fostering Resilience Among Diverse Actors

Sociologists and techno-anthropologists have turned their gaze towards broader societal arenas. They emphasise that a healthy digital society cannot rely solely on top-down regulation; it must empower users and creators:

Enhancing users’ autonomy through ‘data portability rights’;

Boosting the viability of quality creators via market design and tool support (such as establishing funds for high-calibre content creators);

Raising the public’s critical awareness of algorithms through widespread digital literacy education.

This empowerment transforms governance from unilateral control into multi-stakeholder co-governance. For instance, just as some platforms now allow users to mark ‘not interested’ to fine-tune recommendations, future systems should grant users more granular ‘value preference’ adjustment rights (such as ‘reduce emotional content’ or ‘increase diverse perspectives’), enabling individuals to ‘reverse-train’ algorithms to some extent. An alert, informed, and empowered civil society constitutes the broadest and most fundamental foundation for constraining algorithmic power.
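The ‘value preference’ controls described above can be sketched as user settings that re-weight candidate items before ranking. This is a hypothetical illustration: the preference names (`reduce_emotional`, `increase_diversity`), the penalty and boost values, and the item attributes are all invented for the example.

```python
# Hypothetical sketch of user-adjustable 'value preferences': the user's
# settings re-weight candidates before ranking, letting individuals
# 'reverse-train' their feed. All names and values are invented.

def apply_preferences(item, prefs):
    score = item["base_score"]
    if prefs.get("reduce_emotional") and item["emotional_intensity"] > 0.7:
        score *= 0.5  # demote highly emotional items when the user asks for less emotion
    if prefs.get("increase_diversity"):
        score += 0.2 * item["viewpoint_novelty"]  # boost unfamiliar perspectives
    return score

candidates = [
    {"title": "Outrage clip", "base_score": 0.8,
     "emotional_intensity": 0.9, "viewpoint_novelty": 0.1},
    {"title": "Contrarian essay", "base_score": 0.6,
     "emotional_intensity": 0.3, "viewpoint_novelty": 0.9},
]

prefs = {"reduce_emotional": True, "increase_diversity": True}
ranked = sorted(candidates, key=lambda it: apply_preferences(it, prefs), reverse=True)
# With these preferences set, the calmer, more novel essay outranks the outrage clip.
```

The design point is that the preference layer sits on top of the platform’s own scores: the user does not need access to the underlying model, only a sanctioned lever over how its outputs are weighed.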

IV. A Reconstruction of the Social Contract Concerning Our Future

Constraining algorithmic power is no simple task. It touches upon technological complexity, commercial interests, and the global competitive landscape. Yet this is no longer an optional technical optimisation issue, but a civilisational governance challenge that must be confronted.

We stand at the ‘constitutional moment’ of the digital age. The old social contract did not foresee the governing power of code; the new contract must incorporate algorithmic power into a framework of democratic accountability.

This demands profound collaboration: philosophers charting the course, ethicists establishing frameworks, legal experts defining boundaries, auditors conducting health checks, sociologists gauging public sentiment, economists planning for the long term. Meanwhile, business leaders and every citizen must recognise their rights and responsibilities within the digital commons, fulfilling their duty to safeguard this new contract through informed and prudent participation.

Taming algorithms ultimately serves to safeguard humanity. When we successfully confine this emerging power within a cage forged from values, rules, oversight, and social resilience, we protect not merely a clear cyberspace, but a human future where reason, truth, and dignity endure. The outcome of this battle will determine whether we are masters of technology or its captives.

Chinese original

Translated by DeepL

Edited by Yiyi

Related:

In this age of algorithms, everyone should enhance their information literacy

From Microsoft’s Practice: How Organisations Defend Digital Sovereignty

The Age of Media Oligarchs: How the Global Super-Rich Are Reshaping the Map of Information and Power

Digital Human Clones, Information Security, and Privacy Protection

DS: Dare We Use Algorithmic Democracy to Counter Algorithmic Autocracy?

An Education for Freedom

Only the First Amendment Can Protect Students, Campuses, and Speech

Thought | When Regression and Madness Become the Trend, the Courage of Self-Negation Is All the More Needed

Environment Shapes Behaviour: The Lucifer Effect Under the Pandemic
