Recently, a New York Times article asked the question: Are we living in a computer simulation? Then it added: let’s not find out.
Why? Because if we are indeed living in a computer simulation, and the programmers of that sim find out we’re onto them, they might just pull the plug on our universe.
Unless what they’re studying is how humanity would respond to finding out we’re living in a computer simulation, in which case we’d better carry on or risk forcing those hyper-advanced geeks to reboot our universe and start over.
“They’re onto us. Delete them before they find my directory of Sonic the Hedgehog porn”
A lot of people believe this theory, or find it so plausible as to be virtually assured. Elon Musk says the chance that we are living in “base” reality is a million to one. Even everybody’s favorite science teddy bear, Neil deGrasse Tyson, believes it’s about fifty-fifty that we’re all just figments in a futuristic virtual reality.
And honestly, the theory seems plausible enough when we look at the world around us and see merely a web of interconnected (albeit complicated) actions and reactions; a universe that could plausibly be dictated by some sufficiently advanced computer program.
Except for one major thing—a thing I can’t believe these super smart people haven’t taken into account. A thing that isn’t, in fact, in the world around us at all.
The idea that you are a computer simulation stops being plausible when you ask: why does that matter to me?
“Why am I worried that Andy Dick might not be a real person?”
Meaning, if you, the reader of this blog, are merely a digital figment in a futuristic simulated universe, why should you experience any internal, invisible, conscious reaction to it?
What purpose would that reaction serve?
Why, in fact, should any of us have any internal, conscious self-awareness at all?
Elon Musk might argue that the internal consciousness—the part that makes you aware of being you and not anyone else—is an essential component of the simulation.
But why? What practical purpose could that possibly serve?
The machinations that occur inside your internal consciousness don’t affect me, as another person, until they motivate you to some action. For this reason, it would be non-essential (and terribly inefficient) to program billions of computer figments with rich, internal consciousnesses when all that matters to the simulation is their resulting actions.
Think about your own experience of other people. You have no direct interface with the inner consciousness of any other single being. Your only understanding of other humans is through symbols they present to your senses—the words and actions that comprise the sum total of our experience of the rest of humanity.
Thus, if a simulacrum of humanity were created for some advanced experiment, words and actions would be all that was needed to accomplish its purpose.
There would be absolutely no reason for you, as an actor in that simulation, to experience any internal self-awareness, since that self-awareness serves no purpose to the simulation’s outcome.
Instead, you, as a simulated figment, would be to the computer what all of the rest of humanity is to you: mere bundles of words and actions responding to a complex code of environment and programming.
Basically, high-res Donkey Kong
In short, even if some advanced civilization were to develop the capability to invest a simulated personality with self-awareness, there would be no practical reason to do so. It would be far easier (and more ethical, which we will come to in a moment) to simply rig each simulated personality to behave and speak according to that complex code of environment and programming.
Self-awareness (being, by definition, only useful to the self experiencing it, and not at all to the simulated universe at large) would be utterly superfluous.
Thus, in a simulated universe, you would not be consciously reflecting internally on what all this means to you, as a self-aware being. You would instead be a symbolic figment, like a non-player character in a video game, whose programmed actions would henceforth be nominally altered by this new input.
Since you are internally aware of this distinction, you can feel confident that you, at least, are not a mere line of code in some hyper-advanced simulation.
Unless, of course, self-awareness is (for some reason) necessary to the simulation.
Which brings us to the ethical consideration.
Imagine a civilization advanced enough to create simulated personalities that experience self-awareness. Would not this civilization also understand the responsibility inherent in creating such a universe of beings? Certainly they would understand that the moment self-awareness is granted, a person is created.
With the insertion of consciousness, mere inert programming becomes new life.
Why else do we care about these two?
Conclusion: since you, reading this, have self-awareness—an internal and invisible consciousness of being you and no one else—then we can logically draw one of two comforting conclusions:
One: that we are probably not the simulated creation of a hyper-advanced computer model, since there would be no value in creating simulated figments with conscious self-awareness. It would simply not serve the simulation in any measurable, practical way to include such a complicated and ethically problematic detail.
Or two: that even if we are computer simulations imbued, for whatever reason, with conscious self-awareness, then it stands to reason that such an advanced society would also understand the ethical responsibility of creating what is, essentially, sentient life, and would treat it as such.
Unless, of course, that society is both hyper-advanced and painstakingly sadistic, which is possible, albeit highly unlikely (despite my conviction that humanity is, at best, only accidentally good, and only sometimes). I simply don’t believe that a truly sadistic culture could survive long enough to create such advanced computer technology as would be required for self-aware digital life.
So: Neil deGrasse Tyson, Elon Musk, and the rest of us can breathe easy knowing that either we are, in fact, “base reality” (most likely) or at least that our programmers know and respect that they’ve created a form of life deserving of preservation.
So how do the smartest minds in our universe miss this fairly obvious clue that we aren’t in grave danger of being turned off/rebooted?
Maybe it just proves that sometimes the smartest people miss the simplest truths.
Or maybe it proves that the purpose of this simulated reality is to make me believe that I’m smarter than Elon Musk and Neil deGrasse Tyson.
Both explanations seem equally likely to me.