Why AI Sentience is Almost Inevitable

As AI technology develops, eventually AI systems will become sentient in much the same way we have.

Brian Holden

9/28/2025 · 2 min read

For the moment, various species of animals are the only entities that are sentient. Merriam-Webster defines the word "sentient" as: 1 - capable of sensing or feeling, 2 - conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling. Implicit in that definition is our sense that a sentient entity also has to be "conscious." A precise definition of consciousness is hard to come by, but we all have an intuitive grasp of the simple version: our minds run continuously and are self-aware. Clearly we are both conscious and sentient, as are dogs, squirrels, and even fish. Plants fall outside that definition of sentience, although they clearly respond, if slowly, to their environment.

As Artificial Intelligence (AI) technology evolves over the next few decades, it is inevitable that AI systems will become both conscious and sentient. The early sparks of such a change are already plainly visible. The best current system displaying those sparks is a humanoid robot operated by a multi-modal, on-board large language model that is hooked up to the three senses of sight, sound, and touch. The multi-modal large language model processes all of the incoming information, develops theories, decides on its next action, and then acts through speech or movement of the robot, as sketched below.
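To make that sense-reason-act cycle concrete, here is a minimal, hypothetical sketch of such a loop in Python. Every name in it (the sensor stubs, MultiModalModel, and so on) is an illustrative placeholder, not any real robot or model API; it simply shows the shape of the architecture described above.

```python
import time
from dataclasses import dataclass


@dataclass
class Observation:
    vision: bytes        # camera frame (stub)
    audio: bytes         # microphone buffer (stub)
    touch: list          # tactile sensor readings (stub)


def read_camera() -> bytes:      # stand-in for a real camera driver
    return b""


def read_microphone() -> bytes:  # stand-in for a real microphone driver
    return b""


def read_tactile() -> list:      # stand-in for real touch sensors
    return []


class MultiModalModel:
    """Placeholder for an on-board multi-modal large language model."""

    def decide(self, obs: Observation) -> tuple:
        # A real system would fuse the three modalities, form a theory
        # about the situation, and choose the next action.
        return ("speech", "Hello, world.")


def run_agent(model: MultiModalModel) -> None:
    while True:
        # 1. Sense: gather sight, sound, and touch into one observation.
        obs = Observation(read_camera(), read_microphone(), read_tactile())
        # 2. Reason: the model processes all modalities and picks an action.
        kind, payload = model.decide(obs)
        # 3. Act: speak or move, then the loop repeats continuously.
        if kind == "speech":
            print(payload)    # stand-in for text-to-speech output
        else:
            pass              # stand-in for sending motor commands
        time.sleep(0.1)


if __name__ == "__main__":
    run_agent(MultiModalModel())
```

The point of the sketch is the unbroken loop: the system never stops sensing and acting, which is the "runs continuously" property the definition of consciousness above leans on.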

Some might say that there is something special and ineffable about the human brain. There is a famous statement in computer science that "computing is computing." What this means is that the substrate of computing doesn't matter. In the century-plus history of computing, it has never mattered what the physical basis of computing was. Over that time, computing has moved from relays, to vacuum tubes, to discrete transistors, to small integrated circuits, to large integrated circuits, to today's networked collections of massive integrated circuits. At no point has the substrate mattered. There is no reason to imagine that computing on neurons is any different.

Since there is no qualitative difference between what happens on neurons and what happens in silicon-based chips, it is surely only a matter of time before the AI community figures out how to build a system that continuously takes input from its senses, is self-aware, has a self-identity, and can act in its environment. It is inevitable.