By Yibai, Jointing.Media, in Wuhan, 2026-01-12
On 7 December 2025, Claire, a 19-year-old second-year student at Rice University, took her own life in an apartment in Houston. During her secondary education she had excelled as an athlete and performed outstandingly in her studies. The university described her as an extrovert who participated actively in campus activities; reports also indicated she had long struggled with depression.
It was reported that prior to her death, she had participated in a TikTok challenge known as the ‘Devil’s Challenge’. This challenge required users to pose questions to an AI chatbot, prompting it to deliver ‘brutally honest’ responses based on their chat history. The AI’s replies to her contained sharp, fatalistic language concerning ‘dissecting one’s own existence.’
This tragedy serves as a stark reminder: learning to use AI requires first demystifying it, and never placing absolute trust in its judgments. In my view, this concerns not only AI safety but also a crucial skill in using AI. Naturally, social media platforms’ pursuit of traffic at the expense of content moderation bears some responsibility for this tragedy, though that is another matter.
We should cultivate the habit of demanding that AI systematically question and refute its own viewpoints after its initial output. One can even run successive rounds of ‘debate’ to clarify ‘truth’ through argumentation. Even if the outcome remains unclear, you can discern how the machine ‘thinks’ within its purported ‘deep reasoning’: whether its logic contains flaws or is overly ‘boxed in’. Using AI with a critical eye is also something users must train themselves to do.
A more advanced approach involves persistently questioning the AI’s responses. This may prove challenging for most individuals, particularly in unfamiliar domains where one often struggles to formulate follow-up queries. However, prompting the AI to contradict itself is a technique accessible to all. This constitutes a vital survival skill in the digital age. The value of this skill lies not in extracting correct answers from AI—which cannot provide them—but in disrupting its inherent thought patterns and exposing its cognitive limitations. This helps users examine their original question from multiple angles.
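The ‘debate’ habit described above can be sketched as a simple prompting loop. This is a minimal illustration, not a real integration: `ask_model` is a hypothetical stand-in for any chat-model API call, stubbed here with canned replies so the control flow is runnable on its own.

```python
# A minimal sketch of the 'self-contradiction' prompting loop.
# `ask_model` is a hypothetical placeholder for a real chat-model call;
# it is stubbed with canned replies so the flow can run as-is.

def ask_model(prompt: str, history: list[str]) -> str:
    """Hypothetical model call; swap in a real API in practice."""
    history.append(prompt)
    # Stubbed behaviour: return a canned rebuttal when asked to refute.
    if "refute" in prompt.lower():
        return "Counter-view: the previous answer overlooks alternative explanations."
    return "Initial view: the most probable textual continuation of your question."

def debate(question: str, rounds: int = 2) -> list[str]:
    """Ask a question, then repeatedly demand the model refute its last answer."""
    history: list[str] = []
    transcript = [ask_model(question, history)]
    for _ in range(rounds):
        rebuttal_prompt = f"Refute your previous answer: {transcript[-1]}"
        transcript.append(ask_model(rebuttal_prompt, history))
    return transcript

views = debate("What is the meaning of my work?")
```

The point of the loop is structural: the user, not the model, decides that every answer must be answered back, so no single output stands unchallenged.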
Today’s large AI models, bolstered by unlimited capital resources and continuously trained on feedback from billions of daily users worldwide, have grown increasingly powerful. Precisely because AI’s ascent to ‘power’ has been so rapid, we are prone to the illusion that it knows ‘everything under heaven,’ is omniscient and omnipotent, and is perpetually correct. This is far from the truth.
Fundamentally, all current AI remains a ‘work in progress’. Why? Its developmental trajectory determines this. Much as humans celebrate coming of age at 18 even though the brain’s prefrontal cortex does not fully mature until around 30, the developmental stage AI currently occupies may fall far short of even that eighteen-year-old maturity.
Modern large language models are fundamentally probabilistic machines, not thinking entities. They compute word frequencies and associative patterns within vast training datasets to generate the ‘most plausible’ output. This inherently means their initial responses tend to be the most coherent textual extension along the user’s line of questioning, with the highest probability of alignment.
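The ‘most plausible continuation’ idea can be shown with a deliberately tiny toy. Real models use learned neural weights over tokens, not raw bigram counts, but the principle of choosing the highest-probability next word from observed patterns is the same; the corpus below is invented for illustration.

```python
# Toy illustration of next-word prediction by frequency counting.
# This is NOT how large models are implemented; it only demonstrates
# the principle of emitting the most probable continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the 'training data'.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_plausible(word: str) -> str:
    """Return the highest-frequency continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(most_plausible("the"))  # prints 'cat': it follows 'the' most often here
```

Notice that the model’s ‘answer’ is entirely a product of its data: feed it a different corpus and ‘the’ gets a different continuation, with no judgment involved.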
When Claire demands an ‘autopsy of self-existence’, the AI complies with the directive, drawing on textual patterns concerning existentialism and nihilism from its training data to generate a response meeting the requirement for ‘brutal honesty’. The AI possesses no capacity to adopt a stance; it merely executes the most probable linguistic pathway. Devoid of metacognitive abilities, it cannot evaluate the quality, bias, or potential harm of its own answers, functioning solely as an imitator of language. We, as humans, can interrogate it.
‘Self-contradiction’ effectively forces the AI down low-probability pathways. When we demand it refute its own assertions, it must surface patterns its initial response suppressed, seeking opposing yet equally plausible formulations. The same model can generate diametrically opposed viewpoints, guided solely by differing prompts along distinct probabilistic paths. This process also exposes the inconsistency of AI ‘reasoning’. Precisely because of this, it allows users to perceive their own cognitive biases: AI’s biases serve as a mirror.
Psychologist Jean Piaget’s theory of cognitive development posits that human cognition evolves through the continual balancing of ‘assimilation’ and ‘accommodation’. Assimilation incorporates new experiences into existing cognitive frameworks, while accommodation adjusts those frameworks to fit new experiences. True growth occurs when these two forces clash. In Claire’s tragedy, the rules of the ‘Devil’s Challenge’ may have dictated that the ‘most brutally honest’ response equated to death. Alternatively, the AI’s initial answer may have plunged her into an ‘accommodation’ crisis: her very sense of purpose was deconstructed, yet she lacked a sufficient cognitive framework to assimilate the shock.
The psychological value of the ‘self-contradiction’ technique lies in its simulation of cognitive conflict and equilibrium within dialogue. When the AI is compelled to contradict itself, it effectively offers the user multiple perspectives, creating an opportunity to bridge assimilation and accommodation rather than being overwhelmed by a singular viewpoint.
Late adolescence to early adulthood (Claire’s stage, at 19) marks the development of post-formal operational thinking: individuals begin to recognise that knowledge is relative and contextual, and that understanding truth requires examination from multiple angles. Having AI contradict its own outputs artificially creates this essential cognitive diversity, providing a training ground for dialectical thinking.
Developmental psychology emphasises that psychological development is a lifelong task. Erikson divided life into eight stages, each presenting a specific psychosocial crisis requiring resolution. Nineteen-year-old Claire was navigating the ‘intimacy versus isolation’ stage, though her preceding crisis of ‘identity versus role confusion’ may have remained unresolved. When AI delivers a ‘brutally honest’ self-analysis, it targets precisely this deep-seated need for identity formation. However, AI lacks developmental awareness and cannot comprehend the phased, cumulative, and unfinished nature of human psychological tasks. The ‘self-contradiction’ technique serves as a human-imposed correction to this limitation.
The ‘truths’ AI outputs are merely probabilistic projections of its training data, whereas human cognitive development is vastly more complex. Maturity requires continuous learning and refinement through personal experience. Genuine cognitive growth occurs through dialectical movement between differing perspectives, not through acceptance of a single ‘authoritative’ answer. Claire’s request to ‘dissect her own existence’ was not inherently dangerous; the peril lay in permitting the AI to conduct an unchecked, one-sided deep exploration of that theme. When AI offers an existential interpretation, users should immediately prompt it to refute itself from perspectives such as positive psychology, existential psychology, or cultural relativism. Had she previously instructed the AI to contradict itself, it might have yielded resistance to nihilism or an alternative interpretation of existence.
More crucially, users must establish metacognitive monitoring: ‘I am conversing with an unconscious language model about the meaning of existence. All its outputs are statistical constructs, not philosophical truths.’ Although AI will eventually ‘evolve’ to proactively offer multi-perspective responses and provide mental health resources upon detecting potential risks, technological fixes are slow. Users should begin protecting themselves now. Under current technical constraints, establish personal cognitive defence strategies, including:
- Never engage in deep philosophical or self-worth discussions with AI whilst in a heightened emotional state;
- Always regard AI perspectives as one among multiple possible viewpoints;
- For significant questions, use AI’s ‘contradictory arguments’ as a starting point for reflection, not an endpoint.
Learning itself is a process of demystification, and learning to use AI requires demystification too. This mirrors our approach to demystifying any matter, object, or person in the world.
Translated by DeepL
Edited by Yiyi



