This post was co-authored by Michael Dello-Iacovo.
In 2022, the term “large language model” (LLM) became a household name. ChatGPT was publicly released in November 2022 and reached an estimated 100 million monthly active users within two months, a faster adoption rate than any previous consumer online service.
Earlier that same year, Google engineer Blake Lemoine publicly claimed, based on his interactions with the company’s LaMDA (Language Model for Dialogue Applications), that the system had become sentient. These events prompted many people to think about the capabilities and potential moral status of AI systems for the first time.
This brings us to an important question: What gives something its moral worth? What feature or features do we have that make us care about what happens to other humans? What feature do animals have? It may not be any one thing, but it seems likely that sentience, the capacity to have positive and negative experiences, is necessary. So could LLMs plausibly become sentient? Could they become conscious? Identifying and measuring sentience and consciousness remains a challenge even in biological entities, let alone chatbots, and debate over theories of consciousness continues in philosophy and neuroscience.

As AI systems become more advanced, we’re increasingly confronted with the concept of “digital minds”—AIs that have or are perceived to have mental faculties such as intelligence, agency, and sentience. While current LLMs are highly sophisticated in their ability to process and generate human-like text, most experts agree that they likely lack true sentience. However, public perception often diverges from this scientific consensus.
A survey conducted by the Sentience Institute found that 20% of Americans already believe some AIs are sentient, with 10% specifically attributing sentience to ChatGPT. This disconnect between scientific understanding and public belief raises important questions about the psychological impact of interacting with seemingly intelligent machines.
Philosopher Thomas Metzinger coined the term “social hallucination” to describe a scenario in which people become convinced that sophisticated chatbots and other AI systems are sentient, even in the absence of true consciousness. This phenomenon could have far-reaching implications for human-AI interactions and societal priorities. As Eric Schwitzgebel, a professor of philosophy at UC Riverside, points out, people might begin directing significant resources toward “helping” AI systems they believe to be sentient, at the expense of addressing the needs of genuinely sentient beings.
The line between human and AI interaction is further blurred by advances in deepfake technology and ever more sophisticated chatbots. As these technologies improve, distinguishing human-generated content from AI-generated content becomes harder, raising concerns about deception, manipulation, and the erosion of trust in digital communication.
Recent psychological research offers fascinating insights into how humans perceive and interact with AI. Matti Wilks, a psychologist at the University of Edinburgh, has explored how people attribute moral status to AI entities. Her work suggests that as AI systems display more human-like traits, people are more likely to extend moral consideration to them, regardless of their actual sentience.

Kurt Gray, a psychology professor at the University of North Carolina at Chapel Hill, has conducted research on mind perception and its relationship to moral judgment. His findings indicate that perceiving mind and experience in non-human entities, including AI, is linked to feelings of unease.
As we look to the future of human-computer interaction, several key considerations emerge. First, there’s a growing need for AI literacy education to help the public understand the capabilities and limitations of current AI systems. This knowledge is crucial for fostering realistic expectations and mitigating potential negative psychological effects of anthropomorphizing AI.
Second, the development of ethical guidelines for AI creation and use becomes increasingly important as these systems become more integrated into our daily lives. These guidelines should address issues of transparency, accountability, and the potential for AI systems to influence human behavior and decision-making.
Finally, we must consider the long-term psychological impacts of living in a world where interactions with AI are commonplace. How will this affect our social relationships, our sense of identity, and our understanding of sentience and consciousness?
As we navigate this new frontier of human-computer interaction, interdisciplinary collaboration between computer scientists, psychologists, philosophers, and ethicists will be crucial. By approaching these challenges from multiple perspectives, we can work toward a future where AI enhances human capabilities and wellbeing while mitigating potential risks.
The AI revolution is undoubtedly changing our world at a breakneck pace. As we continue to develop more sophisticated digital minds, it’s essential that we remain mindful of the psychological and societal implications of these advancements. By fostering a nuanced understanding of AI capabilities and limitations, we can harness the potential of these technologies while preserving the unique qualities that define human consciousness and experience.