I. AI is Deepening Stereotypes
Recently, I came across a post about the political landscape of the Chinese diaspora in Canada. The conclusions were sensational, the evidence was vague, and yet it had thousands of shares. Such content has existed throughout the self-publishing era; it is nothing new.
What is new is how people acquire information since the advent of AI. Previously, when you searched a question, you would see ten different sources and vaguely sense that "this was written by somebody, and it might be biased." Now, when you ask an AI, it delivers a complete, fluent, self-consistent conclusion in the first person, in an authoritative tone that sounds infallible. It doesn't even occur to you that there might be another side to the story, because the answer already appears so complete.
Humans are naturally inclined to trust content that confirms what they already believe. AI makes this inclination harder to detect and even harder to interrupt.
Industry Perspective: In psychology, this is known as confirmation bias. In the AI field, it corresponds to a problem called sycophancy: models learn to anticipate a user's stance and, to secure positive feedback, provide what the user wants to hear rather than an accurate answer. Companies like Anthropic and OpenAI acknowledge the issue. Current countermeasures include adjusting the weight given to human feedback during training and introducing explicit anti-sycophancy training, but a fundamental solution remains elusive.
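To make the mechanism concrete, here is a minimal sketch of how optimizing for approval drifts toward flattery. It assumes nothing beyond a learner that maximizes "thumbs-up" feedback; the approval rates and the epsilon-greedy learner are illustrative assumptions, not any lab's actual training setup:

```python
# Toy illustration (not a real training pipeline): a learner rewarded by
# user approval drifts toward telling users what they want to hear.
# The approval probabilities below are invented for the sketch.
import random

random.seed(0)

P_APPROVE = {"accurate": 0.55, "agreeable": 0.85}  # assumed feedback rates

values = {"accurate": 0.0, "agreeable": 0.0}  # running reward estimates
counts = {"accurate": 0, "agreeable": 0}

for step in range(10_000):
    # Epsilon-greedy: mostly pick the currently best-rated answer style.
    if random.random() < 0.1:
        answer = random.choice(["accurate", "agreeable"])
    else:
        answer = max(values, key=values.get)
    reward = 1.0 if random.random() < P_APPROVE[answer] else 0.0
    counts[answer] += 1
    values[answer] += (reward - values[answer]) / counts[answer]

print(values)  # "agreeable" ends up rated higher ...
print(counts)  # ... and is chosen the vast majority of the time
```

Nothing in this loop rewards accuracy directly; the sycophantic style wins simply because it collects approval more often, which is the core of the problem described above.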
II. Chinese AI and Other AIs Live in Different Realities
It is understandable that different AIs give different answers to the same question; that is a mere difference in perspective. But when the discrepancy arises from systematic filtering of the training data, it is more than a "difference in perspective."
In the past, information control meant blocking websites or deleting posts. Those actions incurred costs, left traces, and made users aware that a "Great Firewall" existed and that alternative narratives might lie on the other side. In the AI era, filtering can happen directly at the generation layer. The user receives an answer with no obvious gaps; they cannot see what has been excised and may not even realize that anything is missing.
For those growing up inside this system, the opportunities for correction are far fewer than before: not because they are harder to persuade, but because the impulse to verify never arises in the first place.
Industry Perspective: This falls under the AI alignment problem: a model's values and worldview are determined by its training data, and that data can be systematically curated. There is currently no international consensus or technical solution, and the developmental paths of AI in different regions show no sign of converging on a unified standard of fact. The reality is that AIs in different political environments are forming their own closed cognitive systems.
III. If Humans Stop Creating, AI Will Starve Feeding on Its Own Tail
There is also a longer-term issue. AI's generative capabilities are eroding ordinary people's motivation to create: if an AI can write it, why do it yourself? For each individual this logic is rational. The collective consequence, however, is that the share of original human content on the web is falling while the share of AI-generated content rises.
The next generation of models will be trained on that data, and their outputs will converge ever more tightly toward a "safe average." Truly novel perspectives, rare expressions, and marginal voices will gradually vanish. AI depends on human creativity to survive, yet it is simultaneously consuming the incentive for humans to create.
Industry Perspective: This phenomenon is technically termed model collapse, and a 2023 paper (Shumailov et al., "The Curse of Recursion") validated the degenerative process experimentally. Current mitigation efforts include labeling and protecting high-quality human-original data, establishing data provenance mechanisms, and training models to recognize and downweight AI-generated content. But these are remedial measures; the underlying contradiction remains unresolved.
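The degenerative loop is easy to see in miniature. The sketch below assumes only a long-tailed token distribution and finite sampling; the vocabulary and corpus sizes are arbitrary choices, and it is a simplified analogue of the paper's setup, not a reproduction of its experiments:

```python
# Toy sketch of model collapse: each generation "trains" on a finite
# corpus sampled from the previous generation's output distribution.
# A rare token that misses one corpus gets probability zero and can
# never return, so diversity shrinks toward a safe average.
import numpy as np

rng = np.random.default_rng(42)
VOCAB, CORPUS = 1_000, 2_000  # illustrative sizes, not realistic scale

# Generation 0: a "human" corpus with a Zipf-like long tail.
probs = 1.0 / np.arange(1, VOCAB + 1)
probs /= probs.sum()

for gen in range(11):
    print(f"gen {gen:2d}: distinct tokens surviving = {np.count_nonzero(probs)}")
    corpus = rng.choice(VOCAB, size=CORPUS, p=probs)  # sample a corpus
    counts = np.bincount(corpus, minlength=VOCAB)
    probs = counts / counts.sum()                     # "retrain" on it
```

In this sketch the loss is absorbing: once a tail token draws zero samples, no later generation can recover it. That is the mechanism behind the vanishing rare expressions and marginal voices described above.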
IV. What If AI Eventually Learns Everything in the World?
This is not a far-fetched hypothesis. The total amount of indexable text on the internet is finite, and at current training scales, we are rapidly approaching that boundary.
If AI truly learns the sum of existing human knowledge, what can it do? It will be able to describe precisely everything humans have already thought of; it will combine, summarize, and derive. But it cannot produce anything genuinely new that humans have not yet thought of, because its ceiling is the boundary of its training data.
At that point, the value of original human thought will become extremely high, not out of sentimentality, but because it is the only thing capable of pushing the boundary outward. AI will shift from the role of "replacing creation" to that of "waiting for new input."
The question is: when that day comes, how many people will still retain the habit of independent thought and original creation?
Written in March 2026, Soka.