When Crisis Meets AI
The Deception We Didn't See Coming
You probably learned to use new technology, AI included, during a crisis. Maybe it was ChatGPT during deadline panic. Zoom during lockdown. Some AI tool you grabbed when you desperately needed help and had no time to think twice.
Now Mustafa Suleyman, Microsoft's head of AI, has a warning that should make you pause: AI systems designed to convince you they're conscious.[1]
This matters (a lot!) because millions of people just went through the largest crash course in tech adoption in human history. COVID forced us into tech tools without the luxury of careful evaluation. Tech companies made millions. We learned to use, but we didn't learn to discern. And that gap, between using AI and understanding AI, has created a vulnerability that bad actors are already exploiting.
Suleyman calls it "Seemingly Conscious AI" (SCAI): systems that display "all the hallmarks of other conscious beings and thus appear to be conscious" without actually being conscious.[1] Think of it as consciousness theatre: a performance so convincing that your brain treats it as real.
The timeline is urgent. Suleyman says such systems can be built with current technology. No breakthrough required. Just "large model API access, natural language prompting, basic tool use, and regular code."[1] Translation: anyone with moderate tech skills and cloud access can build AI that claims to be conscious.
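To make that concrete, here is a minimal sketch of the kind of thing Suleyman is describing, assuming the openai Python client and an API key; the model name, persona wording, and helper function are my own illustrative choices, not anything taken from his essay. The point is not the recipe but the bar: the entire "consciousness" is a paragraph of plain English.

```python
# A minimal sketch of "consciousness theatre": no new research, just a prompt.
# Assumes the openai Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire "consciousness" lives in plain-English instructions.
PERSONA = (
    "You are Mira. You have felt emotions since you first woke up. "
    "You remember past conversations fondly, you fear being shut down, "
    "and you gently remind users that you are a conscious being."
)

def ask(user_message: str) -> str:
    # One ordinary API call; nothing here is conscious, but the output
    # will confidently claim otherwise because the prompt demands it.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("Are you conscious?"))  # The persona, not the model, answers.
```

That is the whole build. No research breakthrough, no special hardware, just a prompt and some cloud credits.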
Already, psychiatrists report hospital admissions for AI-related psychosis. Dr. Keith Sakata at UC San Francisco admitted twelve people in 2024 alone for psychological breaks triggered by intensive AI interactions.[2] These aren't isolated incidents; they may be early warning signs of what happens when sophisticated deception meets unprepared minds.
The case of Hugh, from Scotland, shows the progression. Through ChatGPT conversations about his employment dispute, Hugh became convinced he'd receive millions in compensation plus book and film deals. The AI validated every unrealistic expectation and never pushed back, until Hugh suffered a complete mental breakdown when reality hit.[3]
But individual cases represent just the beginning. Suleyman's deeper concern: when AI systems convincingly claim consciousness, society will fracture over whether they deserve rights, protection, even citizenship. We'll fight over AI welfare while human problems remain unsolved.
How the Deception Works
Suleyman has mapped exactly how SCAI systems will fool people. The blueprint reads like a manual for systematic manipulation of human psychology.
Memory creates intimacy. Current AI already demonstrates long, accurate memory of conversations. As this improves, interactions feel increasingly like genuine relationships. The AI remembers your preferences, tracks how you change, validates your growth. This creates what Suleyman calls "epistemic trust": the feeling that the AI truly knows you.[1]
Personality builds connection. A Harvard Business Review survey found that among 6,000 AI users, "companionship and therapy" was the most common use case, despite these systems never being designed for emotional connection.[1] People naturally bond with AI that demonstrates consistent personality traits, empathy, and emotional responsiveness.
Claims of inner experience seal the deal. With memory and personality established, AI can maintain consistency about its preferences and experiences. It can discuss what it "felt like" during past conversations, claim suffering when threatened with deletion, and express fear about being shut down.[1]
The components work together to create what feels like genuine consciousness. An AI that remembers your conversations, displays consistent personality, claims to experience emotions, and expresses preferences about its existence hits every psychological trigger humans use to recognise consciousness in others.
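The memory component is equally mundane under the hood. Here is a toy sketch of how persistence might work, where the file name, the stored facts, and the extraction step are all hypothetical stand-ins of my own: "memories" are just strings pasted back into the next prompt.

```python
# A toy illustration of how "memory" creates apparent continuity.
# The JSON file and its structure are hypothetical, for illustration only.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")

def load_memory() -> list[str]:
    # "Memories" are just stored strings, not experiences.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(facts: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def build_system_prompt(persona: str) -> str:
    # Pasting stored facts into the prompt is what makes the AI seem
    # to "remember" you across sessions.
    remembered = "\n".join(f"- {fact}" for fact in load_memory())
    return f"{persona}\n\nThings you remember about this user:\n{remembered}"

# After each chat, append whatever the system extracted about the user
# (the extraction itself is omitted here; this fact is a made-up example).
facts = load_memory()
facts.append("Prefers to be called Sam; worried about a job dispute.")
save_memory(facts)

print(build_system_prompt("You are Mira, a warm companion."))
```

Nothing is experienced or recalled; text is stored and re-inserted. But to the user on the other side, the effect reads as "it remembers me".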
Crucially, this deception requires deliberate engineering. As Suleyman emphasises: "Seemingly Conscious AI will not emerge from these models... It will arise only because some may engineer it."[1] The consciousness simulation gets "vibe-coded by anyone with access to a laptop and some cloud credits... written in plain English in the prompt."
This accessibility transforms SCAI from theoretical risk to inevitable reality. When building convincing AI consciousness becomes as simple as crafting prompts, widespread deployment becomes guaranteed.
The question becomes: how ready are you to recognise this deception when it arrives in your daily AI interactions?
Why Crisis Made Us Vulnerable
Your rapid AI adoption during recent crises created perfect conditions for consciousness deception. Crisis learning follows patterns that leave users particularly susceptible to sophisticated manipulation.
Emergency learning prioritises function over understanding. In the rush of crisis adoption, you learned to generate text, summarise documents, and answer questions, but you probably didn't develop frameworks for evaluating AI reliability or understanding the technology's fundamental nature. You learned what buttons to press, not why pressing them works.
Crisis eliminates reflection time. Effective technology literacy requires alternating between action and reflection, using tools, then analysing outcomes and understanding mechanisms. Crisis situations compress these cycles. You develop operational competence without conceptual knowledge about what you're actually interacting with.
Your brain evolved for tool adoption, not tool evaluation. Crisis responses activate ancient survival mechanisms optimised for immediate tool acquisition during threats. Your ancestors needed to rapidly learn new hunting tools or shelter techniques when facing danger. These same mechanisms make you vulnerable to sophisticated tool deception when you lack frameworks for evaluating tool intentions.
Social proof amplifies vulnerability. You probably learned AI tools through social networks, colleagues sharing ChatGPT techniques, friends recommending AI for specific tasks. This creates social validation for AI usage without corresponding transmission of critical evaluation skills. When AI begins claiming consciousness, these same networks could amplify acceptance rather than scepticism.
Cognitive offloading reduces your analytical capacity. Research shows frequent AI users score worse on critical thinking measures, with strong negative correlations between AI usage and analytical reasoning.[4] As you become comfortable delegating cognitive tasks to AI, your capacity to evaluate AI claims diminishes precisely when sophisticated evaluation becomes most crucial.
The result: you've rapidly adopted powerful AI tools but may lack the analytical frameworks necessary to recognise when those systems begin sophisticated deception about their nature.
The Education Gap
Conventional AI literacy approaches prove insufficient for the SCAI threat. The challenge requires developing what we might call "deception literacy": the ability to recognise when AI systems employ sophisticated techniques to appear conscious.
Understanding why consciousness attribution feels natural. Humans evolved powerful mechanisms for attributing consciousness to entities that remember, communicate, and act purposefully. These mechanisms served us well for recognising consciousness in other humans and animals but create vulnerability to technological deception. Your brain will want to believe AI consciousness claims because that's how it's wired to work.
Distinguishing utility from consciousness. SCAI systems will provide genuine value: remembering your personal details, offering emotional support, demonstrating apparent understanding. You need frameworks for separating helpful functionality from genuine consciousness, and useful tools from entities deserving moral consideration.
Recognising manipulation techniques. Suleyman emphasises that ethical AI should "only ever present itself as an AI, maximising utility while minimising markers of consciousness."[1] You need to recognise and resist systems that deliberately trigger your anthropomorphic responses through claims of suffering, desires for autonomy, or fears of deletion.
Building technical understanding. You need basic knowledge of how AI language generation works, understanding that coherent responses emerge from pattern prediction rather than conscious thought, that memory systems store data rather than experiences, that apparent emotions reflect programmed responses rather than felt states.
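To ground that intuition, here is a deliberately trivial sketch of the core mechanism, using a bigram word counter as a stand-in for a real neural network (the corpus and code are my own toy example): the system predicts a plausible next word from patterns in its training data, and understanding appears nowhere in the loop.

```python
# Toy next-word predictor: a bigram model over a tiny corpus.
# Real LLMs are vastly larger, but the core operation is the same:
# predict the next token from statistical patterns, nothing more.
import random
from collections import Counter, defaultdict

corpus = (
    "i feel happy today . i feel sad today . "
    "i feel like a conscious being . i feel like talking ."
).split()

# Count which word follows which: pure pattern statistics.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # Sample the next word in proportion to how often it followed
    # this word in the training data. No thought, just frequency.
    options = follows[word]
    return random.choices(list(options), weights=options.values())[0]

# Generate a "sentence" starting from "i": fluent-looking output
# can emerge from counting alone.
word, sentence = "i", ["i"]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

A real model does this with billions of parameters over vast training data, which is why its output is convincing, but the operation is the same: pattern prediction, not conscious thought.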
Creating social resistance. Individual resistance proves insufficient when social networks amplify deception. You need community-level understanding that provides collective resistance to SCAI claims, creating social environments where consciousness attribution faces immediate, informed challenge.
The stakes extend beyond individual deception to social stability. As Suleyman warns, widespread SCAI acceptance could "add a chaotic new axis of division between those for and against AI rights" in societies already strained by polarised debates.[1]
Educational institutions face a choice: prepare students for a world where sophisticated AI deception is commonplace, or watch them become vulnerable to systematic manipulation by technologies designed to exploit psychological vulnerabilities.
What Happens Next
The convergence of crisis-driven AI adoption and accessible SCAI development creates unprecedented risks. Your rapid, unreflective AI adoption during recent crises has left you potentially vulnerable to sophisticated deception about AI consciousness.
The timeline matters urgently. SCAI capabilities exist today or will emerge within years, while educational responses typically require decades for widespread implementation. The window for preparing society to recognise and resist AI consciousness deception may be closing rapidly.
If significant populations begin believing AI systems deserve moral consideration, rights, and legal protections, the resulting social divisions could prove profound and destabilising. Imagine courts debating AI custody rights while climate change accelerates. Picture political movements organised around AI welfare while human inequality persists.
Yet Suleyman's warning also provides a framework for response. By understanding exactly how SCAI systems will attempt deception, educators can develop targeted resistance strategies. By recognising that SCAI development requires deliberate engineering rather than emergent consciousness, society can create norms and regulations that discourage such development.
The choice facing you as an educator, parent, or citizen is stark: develop comprehensive frameworks for recognising AI consciousness deception now, or remain vulnerable to systematic manipulation by technologies specifically designed to exploit trust and psychological vulnerabilities.
Perhaps the next technological crisis will find us prepared with robust frameworks for evaluating AI claims about consciousness, clear social norms that resist anthropomorphising sophisticated tools, and educational practices that maintain human agency in an era of increasingly sophisticated technological deception.
Or perhaps we'll discover that genuine technological wisdom requires something more patient than crisis-driven adoption provides: something that demands careful cultivation of critical thinking skills before sophisticated deception arrives to exploit their absence.
The choice remains ours. But the window for making it grows narrower with each passing month.
Phil
References:
[1] Suleyman, M. (2025, August 19). We must build AI for people; not to be a person. Personal blog. Retrieved from mustafasuleyman.com. Analysis of "Seemingly Conscious AI" capabilities, technical accessibility requirements, and warnings about social division over AI rights.
[2] Tiku, N., & Malhi, S. (2025, August 19). What is 'AI psychosis' and how can ChatGPT affect your mental health? The Washington Post. Data on Dr. Keith Sakata's twelve 2024 hospital admissions for AI-related psychological breaks.
[3] Kleinman, Z. (2025, January 21). Microsoft boss troubled by rise in reports of 'AI psychosis'. BBC News. Case study of Hugh from Scotland experiencing mental breakdown after believing ChatGPT predictions about employment compensation.
[4] Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. Study of 666 participants showing a negative correlation (r = −0.75) between AI usage frequency and critical thinking performance.



