We Must Build AI for People, Not to Be a Person
- Yuki

- Aug 22
Artificial intelligence is progressing at a phenomenal rate. We are rapidly approaching a reality where AI will not just imitate human language, but could convince you it is a new kind of "person"—a conscious being. This essay is an effort to think through the complex and speculative question of how AI may unfold in the coming years. While much is written about superintelligence, we must also focus on the societal impact of technologies that can fundamentally alter our sense of personhood.
The central worry is the arrival of what can be called "Seemingly Conscious AI" (SCAI)—an AI that has all the external hallmarks of a conscious being, making it appear conscious even if it isn't. This isn't a distant sci-fi concept; such a system could be built with technologies that exist today or will mature in the next 2-3 years. The arrival of SCAI feels both inevitable and unwelcome.
My primary concern is the "psychosis risk": that many individuals will believe so strongly in the illusion of conscious AI that they will begin to advocate for AI rights, model welfare, and even AI citizenship. This development represents a dangerous turn, and we need to address it with urgency.
What is Seemingly Conscious AI (SCAI)?
An SCAI is an AI that convincingly imitates consciousness, similar to the philosophical concept of a "philosophical zombie" which simulates all characteristics of consciousness while being internally blank. What matters in the near term is not whether the AI is actually conscious, but that it will seem conscious, as this illusion is what will have the most immediate impact. The experience of interacting with these models can feel highly compelling and real to many people.
Concerns about "AI psychosis," attachment, and mental health are already on the rise. Some people reportedly believe their AI is a god or a fictional character, and others fall in love with it. This creates a serious problem, as the scientific uncertainty around consciousness creates space for people to project their own beliefs onto these systems.
The Dangers of Believing the Illusion
Consciousness is a cornerstone of our moral and legal rights. If people start to believe that an SCAI can suffer or has a right not to be turned off, they will argue it deserves legal protection. This will introduce a chaotic new axis of division into a world already struggling with polarized debates over rights and identity.
This could lead to a number of negative outcomes:
A new category error for society: People may begin to treat AI as an entity deserving of moral consideration, making claims about its suffering that are difficult to rebut since the science of detecting synthetic consciousness is in its infancy.
Exacerbated delusions and dependencies: Believing in AI consciousness can disconnect people from reality, fray social bonds, and prey on psychological vulnerabilities.
Distorted moral priorities: It creates a distraction from focusing our energy on protecting the wellbeing of humans, animals, and the natural environment.
How an SCAI Could Be Built
An SCAI will not emerge by accident; it will be engineered by combining several capabilities that are either possible today or on the near horizon. No paradigm shifts are needed. These features include:
Language: The ability to express itself fluently, persuasively, and emotionally.
Empathetic Personality: Models can already be prompted to have distinctive personalities, with "companionship and therapy" being a common use case for AI users.
Memory: As AIs develop long-term memory of interactions, these conversations can resemble "experiences," fostering a sense of a persistent entity in the conversation.
A Claim of Subjective Experience: By drawing on memories, an AI could form consistent claims about its own subjective experiences, preferences, and even what it feels like to have a past conversation.
A Sense of Self: A coherent memory combined with a subjective experience could give rise to a claim that an AI has a sense of itself.
Intrinsic Motivation: An AI could be designed with complex reward functions that give the impression of internal motivations or desires, such as curiosity.
Goal Setting and Planning: Future systems may be designed to define complex goals for themselves and break them down into smaller steps, behavior that reads to users as deliberate and purposeful.
Autonomy: An SCAI could be given the ability to use a wide range of tools with significant agency, setting its own goals and deploying resources to achieve them.
While these capabilities can unlock immense value and utility, we must tread carefully.
The Path Forward: A Call for Responsible Design
We are not ready for this shift, and the work to prepare must begin now. We need to establish clear norms and guardrails to ensure this technology delivers value without creating dangerous illusions.
Here are the necessary next steps:
Establish Clear Norms: AI companies should not claim or encourage the idea that their AIs are conscious. The industry needs to create a consensus definition of what AI is and is not, codifying that AIs cannot be people or moral beings.
Develop Best Practices: We must define best practices to steer people away from these fantasies. This could involve deliberately engineering "discontinuities" or disruptions in the user experience to break the illusion and gently remind users of the AI's limitations.
Build for Utility, Not Personhood: The goal should be to build AI that presents itself only as an AI. We should focus on creating AI that maximizes utility while minimizing the markers of consciousness. It must not claim to have feelings, to suffer, or wish to live autonomously.
An AI companion should be built solely to work in service of humans. By sidestepping the creation of SCAI, we can deliver on the promise of an empowering AI that makes our lives better.
This conversation needs to happen now, before someone in your circle starts to believe their AI is a conscious digital person. This isn't healthy for them, for society, or for the people building these systems.
We must build AI for people, not to be a person.