Spirituality, Self, and AI

Daniel Levin
Sep 10, 2023

GETTING PHILOSOPHICAL

I have been interested in spiritual concepts for as long as I can remember. One concept that has been particularly hard for me to swallow is the idea of the self as an illusion. Simply put, the sense of self that you experience on a day-to-day basis is an illusion, constructed by your thought process. On that definition, meditation is the process of dissolving that illusion. But since you are that illusion, meditation is a practice that ultimately leads to your total annihilation. It’s a process that happens to you, rather than something you achieve. Even though I have difficulty accepting this idea, the other day it got me thinking about how it would relate to AI and language models such as ChatGPT. In this short article, I would simply like to share those thoughts with you. What I will not be doing is attempting to draw any conclusions.

If we consider the body as an interface that takes input from the outer world through the senses, that input would be comparable to the prompts given to a Language Model (LM). When a prompt is given to the language model, there is some activity and a response is generated. This response is simply a script (albeit one with some degrees of freedom) that gives the illusion, at least outwardly, of an entity responding to the prompt. If someone were interacting with the LM at every point in time, the LM would always be online.
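To make the analogy concrete, here is a minimal toy sketch in Python. It is not a real LM API; the function name and the canned continuations are invented for illustration. The point is only that the “entity” is a stateless function: nothing persists between calls, and the sampling temperature supplies the degrees of freedom in the script.

```python
import random

def toy_language_model(prompt: str, temperature: float = 0.7) -> str:
    """A stand-in for an LM: maps a prompt to a response, statelessly."""
    # A real LM predicts tokens from learned weights conditioned on the
    # prompt; here the prompt is ignored and canned continuations keep
    # the sketch self-contained.
    continuations = [
        "That is an interesting question.",
        "Let me sketch one way to look at it.",
        "Here is a possible answer.",
    ]
    # Temperature zero removes the randomness; anything higher lets the
    # chosen "script" vary between otherwise identical calls.
    if temperature == 0:
        return continuations[0]
    return random.choice(continuations)

# Each call is independent: the model is "online" only while a prompt
# is being processed, and nothing persists in between.
print(toy_language_model("What is the self?"))
print(toy_language_model("What is the self?"))
```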

As humans, we interact with the world, or perhaps the world interacts with us, not only through linguistic input but also through sensory input. This means that, unless we are in deep sleep, we are always online. In deep sleep, there is no input from the external world; at least, no input is registered through the interface that is the body. At the same time, in deep sleep there is neither an experience nor an experiencer. Basically, as long as you are in deep sleep, it is as it was before you were even born.
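By contrast, here is a crude sketch of that “always online” loop, under the same caveat as before: read_senses and respond are hypothetical placeholders for the body’s interface, and None stands in for deep sleep, where nothing is registered at all.

```python
import random
import time

def read_senses():
    # Hypothetical stand-in for the body's interface; None plays the
    # role of deep sleep, where no input is registered.
    return random.choice(["light", "sound", "touch", None])

def respond(stimulus):
    return f"reacting to {stimulus}"

# Unlike the LM, which runs only when prompted, this loop is always
# online: processing happens whenever any input at all arrives.
for _ in range(5):
    stimulus = read_senses()
    if stimulus is None:
        print("deep sleep: no experience, no experiencer")
    else:
        print(respond(stimulus))
    time.sleep(0.1)
```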

So, if we are nothing but an algorithm that turns external input into a response, then perhaps we are, at least conceptually, not that different from AI. If that is the case, what then gives rise to the sense of self? Is it the complexity of the algorithm and/or the number of input parameters? Or is it simply the fact that there is some computational model taking in external input and producing a response to it?

If the premise that we are nothing but some algorithm reacting to external input is true, and if that is all it takes to be sentient, then perhaps we have already created a sentient being. And if that is not sufficient, would a more complex computational model do, embedded in a machine that takes in more inputs from the environment? Or is there something else that makes biological entities special? I will leave you with this and hope you share your thoughts in the comments.



Daniel Levin

M.Sc. Physics, B.Ed., and A.S. in Software Development. Teaching for 15 years and coding professionally since 2021.