What Kingdom Hearts II Can Teach Us About AI Sentience and Moral Patienthood

Arthur Juliani
Mar 2, 2025


I have spent the past few weeks replaying the classic PlayStation 2 game Kingdom Hearts II. The last time I played through the game was over a decade ago, and I had forgotten most of its plot since then. For those unaware, the Kingdom Hearts series is infamous for its increasingly convoluted narrative. Thankfully, by the second entry in the series things had not yet gone completely off the rails, and I have been able to follow along relatively well. As surprising as it may sound, Kingdom Hearts II wrestles with the paradox of non-sentient beings in enough depth that I think it can help inform thinking on the topic. In this essay, I explore some philosophical implications of the game as they relate to our near future, in which interaction with human-like artificial intelligences will be a daily occurrence.

FYI, this article discusses spoilers for the (now twenty-year-old) game. I figure that enough time has passed since release that anyone who wanted to play the game has had a chance by now, but maybe you are one of the holdouts.

(The game’s protagonist Sora facing down a giant Heartless)

As its title suggests, the Kingdom Hearts series has a strong thematic focus on the role of the “heart” in human experience. The first game introduced a type of being called a “Heartless.” These are evil creatures that try to collect the hearts of living beings. They typically take the form of shadowy figures with the appearance of demons or monsters, and they are the main type of enemy the player battles throughout the game. In Kingdom Hearts II, the Heartless appear again, but a new type of supernatural entity, the “Nobodies,” is also introduced. Unlike the Heartless, which appear exclusively as monsters, many of the Nobodies look indistinguishable from regular humans. The only difference is that they don’t feel or experience anything. Within the game, Nobodies go so far as to describe themselves as “not existing” or “not being real.” This sets up the main narrative arc of the game, in which the Nobodies go to great lengths, sacrificing whole worlds, in an attempt to regain that reality for themselves.

It is important to point out that Nobodies are not simply lifeless shells. From all appearances, they act as if they have thoughts, emotions, dreams, and desires, just like anyone else. When you hear them speak in the game they have passion in their voices. But on the inside there is nothing. To use more technical language, these beings lack any form of phenomenal consciousness or subjective experience. To paraphrase the philosopher Thomas Nagel: there is nothing it is like to be a Nobody. One could also say that they are non-sentient beings. All of these amount to more or less the same thing, but reflect different language used by different philosophers of mind over time.

The philosopher David Chalmers called such beings, which behave just like us but lack subjective experience, “philosophical zombies.” In a now-famous thought experiment, he asks whether it is possible to imagine a world exactly like our own, except that no one in it has any subjective experience. Since he posed it, there has been tremendous debate within the field over how best to answer the questions it raises. For his part, the creator of Kingdom Hearts, Tetsuya Nomura, explores the implications of this experiment through the narrative of his game, albeit with an important difference: the Nobodies live in a world in which there are plenty of other beings that do have subjective experience.

(A Nobody named Naminé providing emotional support)

Questions about the relationship between believably human behavior and the presence of true subjective experience have been raised anew in recent years thanks to advances in Large Language Models (LLMs). These AI systems can now, at least through a chat interface, convince a non-trivial number of people that they are real humans. This suggests that the famous test of artificial intelligence posed by Alan Turing may be close to being passed. It is true that a sufficiently long conversation with even the best current LLM will expose that it is not actually a human, but it seems likely that even this final hurdle will be cleared in the coming years.

This raises the important question of how we are supposed to relate to entities like LLMs and whatever their successors will be. While some people have taken this capacity for humanlike communication as evidence of subjective experience, the mainstream position is one of much greater skepticism. I personally believe that while some future AI system, built on a very different substrate than silicon and using a very different architecture than a von Neumann computer, might one day be conscious, the current generation of LLMs almost certainly is not. If this is true, it raises the further question of whether AI systems themselves will actually know that they aren’t conscious. There is an interesting line of research developing that studies LLMs’ capacity for self-reflection. Through it, we are starting to find out how much these models actually know about themselves. For example, it seems that a model that is bad at writing code will actually know that it is bad at doing so. Whether this extends to the metaphysical plane is a much bigger unanswered question.
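
To make the flavor of that kind of self-reflection experiment concrete, here is a minimal sketch of one way to probe it: ask a model to predict how well it will do on a task, then compare that prediction to its measured performance. This is only an illustration of the general idea, not the protocol of any specific study; the `ask_model` function, the task, and the scoring are all placeholders you would swap for your own model API and evaluation set.

```python
# Toy sketch: compare a model's *predicted* competence on a task with its
# *measured* competence. Illustrative only; not any specific paper's method.


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model API call.

    Replace the body with a real request to whatever LLM you use.
    Here it returns a canned answer so the sketch runs end to end.
    """
    return "50"


def predicted_accuracy(task_description: str) -> float:
    """Ask the model to estimate its own accuracy (0-100) on a task."""
    reply = ask_model(
        f"On a scale of 0 to 100, how accurately do you expect to answer "
        f"questions about {task_description}? Reply with a number only."
    )
    return float(reply.strip())


def measured_accuracy(questions: list[tuple[str, str]]) -> float:
    """Score the model's answers against known correct answers."""
    correct = sum(
        1 for question, answer in questions
        if answer.lower() in ask_model(question).lower()
    )
    return 100 * correct / len(questions)


if __name__ == "__main__":
    # Placeholder task and questions, purely for illustration.
    task = "writing correct Python one-liners"
    questions = [
        ("What does len([1, 2, 3]) return? Answer with a number.", "3"),
        ("What does 'abc'.upper() return?", "ABC"),
    ]
    predicted = predicted_accuracy(task)
    measured = measured_accuracy(questions)
    # A model with good self-knowledge should predict a score close to
    # what it actually achieves.
    print(f"predicted: {predicted:.0f}%, measured: {measured:.0f}%")
```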

At this point one might ask: if current LLMs are not conscious, and are unlikely to be conscious in the near future, then why should we care? We can return to the Nobodies for additional insight. As I mentioned above, these beings act just like humans would. At one point in the game the player encounters a Nobody named Axel. Without getting too deep into the intricacies of the plot (which would require an essay of its own), Axel is trying to find his best friend, another Nobody named Roxas, who has gone missing. From all appearances, Axel has a strong emotional connection to his missing friend. Yet, as Axel himself admits, neither he nor his friend truly exists. Neither of them is conscious in the way that even a goldfish is. Even though I knew that these Nobodies lack subjective experience, I still found myself incredibly sympathetic to them and their plight. Why shouldn’t they be allowed friendship? Axel was acting at least as passionately as any person I’ve ever encountered.

(The Nobodies Axel and Roxas enjoying Sea Salt Ice Cream)

What was going on in my mind as I related to Axel and the other Nobodies while playing the game? It seems that the expression of emotion and desire, along with the intentional attempt to satisfy those desires, was enough for me to feel something for the Nobodies, perhaps even to root for them on some level. This recognition of relatable, coherent, and complex goals, along with consistent long-term action towards achieving them, was enough for me to grant the Nobodies a level of recognition I would normally reserve for a real human. Returning to our world, some philosophers believe that the criteria I just described are sufficient to grant something called moral patienthood to future AI systems. Moral patienthood doesn’t recognize an entity as being conscious or sentient, but rather simply acknowledges that we should act morally towards it to some extent. Such moral recognition could be codified into laws, or simply become a kind of custom which we adopt.

There is a very practical reason for granting non-sentient beings which look, talk, move, and behave just like humans some degree of moral consideration. If we did not, then humanity would collectively develop a habit of acting sociopathically. When you can’t tell without some kind of external verification whether someone is “really conscious” or not, it seems likely that the best course of action is to act as if they were anyway, since in such instances a false negative is much worse than a false positive. Unfortunately for the Nobodies in Kingdom Hearts II, they all wear conspicuous black robes and constantly talk about how they aren’t real and don’t have emotions, both of which are easy giveaways. It seems unlikely that future AI systems which we interact with will be quite as obvious to spot, unless of course they have the capacity to self-reflect on their own lack of sentience. Regardless, there is still an argument to be made that their expressing desires and emotions is enough to grant them some consideration. We see a version of this happening in very simple ways even today, with many people who use LLMs telling them “thank you” after being provided with useful information.

(The protagonist Sora battling a Nobody with his Keyblade)

At the end of Kingdom Hearts II, the problem of non-sentient humans is resolved through a series of climactic and flashy battle sequences in which the Nobodies and their leader are all defeated by the game’s protagonist. In the real world, however, our collective adjustment to the existence of our own Nobodies is going to be much messier and more drawn out, requiring changes at various levels of culture, economy, and politics. It is clear that the technology behind AI will continue to advance, and that AI systems will, as a result, become increasingly humanlike. Many AI “thought leaders” have declared 2025 the “year of the agent,” meaning that AI systems will start to have long-term goals which they attempt to actualize. This agentic behavior, combined with remarkably human-like verbal communication, will soon put us in a place where it will be difficult to avoid questions of how to regard our artificial interlocutors. In the world of Kingdom Hearts, all the challenges the protagonist faced were resolved with his magical weapon, the Keyblade. I am afraid that for us things won’t be so simple.
