There is an important distinction between reality-is-a-simulation and brain-in-a-vat, namely that in the former one's brain is simulated, whereas in the latter one's brain exists outside of the simulation and only one's experience is simulated.
Because of this "stitching together" of two realities, brain-in-a-vat offers far more opportunities to discover the simulation.
But first, we must ask some questions:
- How different is the "real" reality from the simulated one? (Perhaps entities in the "real" reality are not human. Maybe they are advanced beings seeking to understand simpler life forms or vastly different laws of physics.)
- Why is my brain in the vat? (Entertainment? Research? Healthcare? Space travel? Imprisonment?)
- What percentage of other people in this simulation are also brains-in-vats? (If I am the only one, the simulation can do a lot to both hide itself from me, and to only simulate very small parts of the world at a time.)
The easiest way to determine that you ARE a brain in a vat is to be "brought back" into the real world. Short of dying in the simulation, your best bet is to find a way to force the simulation to do a lot of work, using a lot of processor power, which might annoy your captors sufficiently to get you out. (For example, if you build a huge room lined with TVs displaying detailed real-time satellite images of populated parts of the Earth, it might force the simulation to render all of those areas in high detail so that you don't notice any discrepancies. Maybe you will just be watching pre-recorded footage, but if you try enough such ideas you may find something the simulation can't cope with.)
There may be some sort of failsafe "get me out of here" signal you can send, but that seems a bit pointless if we aren't given the memory of what it is. On the other hand, one might argue that those who have "transcended" or "ascended" through meditation have found the secret to breaking out of this existence. Perhaps when we stop paying attention to our senses we can start to sense the vat we are in, or perhaps it is necessary to disconnect ourselves from our senses in order to be safely disconnected from the simulation!
If you are a brain in a vat, then your simulated brain will appear to contradict the laws of physics at times. If you get a knock on the head in the simulated world, this cannot be reproduced exactly in the real world, and so either your brain and the simulated brain respond differently, or the simulated brain mirrors the real brain and therefore seems to contradict the laws of physics. This might be very hard to detect, but it will be there. Of course, can you really trust the MRI machine or the neurosurgeon not to be manipulated by the simulation so that everything seems fine? Perhaps your best bet is to build your own brain imaging machine. Even then, the simulation could twist reality to lie to you, but the bigger the lies, the greater the chance an inconsistency will arise.
On to your next question: if I'm a brain in a vat, can I in theory experience the real world?
Sure! If technology exists to feed your senses with an artificial world, it should be even easier to feed your senses with input from a camera, microphone, touch and motion sensors, etc.
Of course, you could be told that you are visiting the real world but are in fact in another simulation. But that's beside the point.
If the real world is utterly different from the simulated one (e.g. it's 6 dimensional, there's no light or sound, and objects can overlap) there might be some serious challenges in interfacing it with a brain that is designed for our world. But consider that we can use infra-red goggles to sense a somewhat unfamiliar world. If we are slowly introduced to a weird universe perhaps we can adapt.
If the universe is just a computer simulation, we cannot tell who is running it or why. Any discrepancies or artifacts of simulation might give us clues, but at the end of the day, we can't be completely sure of anything. The simulation designers have full control of what it's like inside the simulation, so they can hide ALL of their tracks and put in red herrings if they like.
answered Dec 29 '15 at 0:02
Another long title, I know. This post will be the first in which the arguments I present aren't my own, but rather a slight improvement on Putnam's argument in Reason, Truth, and History. The goal of the argument is to show that we cannot meaningfully talk about, or even speculate about, being brains in a vat, or more generally that reality is somehow an illusion. And if we can't speak meaningfully about it, why bother considering it at all?
The crucial premise in this argument is that to be able to refer meaningfully to something we must have had some perception of the thing we are referring to. Somehow (depending on your theory of intentionality) it is the perception that has generated the meaning. For example, since I have seen the tree in front of my house, I can meaningfully refer to it. Now let us say that you have a tree in front of your house, which I have never seen, but that you have told me about. If I talk about the tree in front of your house, the meaning of my words does not come from the tree but from your description of the tree. It is even possible that there is no tree at all in front of your house; even so, my talk about it has meaning, because the meaning was derived from the description I had heard and not the tree itself; in actuality the referent of my words was not the real tree but the "tree" created by your description. I know this sounds vague, but since I am trying to be theory-of-intentionality-neutral, you will have to fill in the details based on whatever particular theory you subscribe to.
Now suppose that there are people who are physically brains in a vat but who are living in a simulated world. All they can refer to, and think about, meaningfully are entities within their simulated world. (And possibly abstract entities as well, depending on the theory of intentionality again.) When we quote what people in this situation might say, let us follow their words with *s to show that the meaning behind their words is different from ours. For example, if we speak of cats we mean collections of atoms that tend to lie in the sun; when people in the simulated world think of cats* they are referring to some aspect of the program that provides them with certain visual stimuli, and not a collection of atoms. Thus if they speak about brains* and vats* they aren't speaking of physical brains and vats but of objects, possibly imagined, within the simulation itself. They cannot speak of brains or vats (without *s), but it is these words that could give their hypothesis meaning, because clearly they are not brains* in vats*; after all, they have arms* and legs*. Thus neither we nor our hypothetical people in a simulated world can talk meaningfully about ourselves possibly being brains in a vat.
Even if they cannot meaningfully talk about the external reality, which they have no access to, we might still think that it could be meaningful for someone in their position to deny the reality of the world. Upon reflection on what such a denial entails, however, it becomes apparent that this claim too is meaningless. Consider, for example, the denial of the reality of a hallucination. The person suffering from the hallucination claims that it is "not real" because no one else is able to perceive it (although it was a real hallucination). However, in the case of a simulated world this is not a possible use of the claim of unreality, since other people share the same "unreal" world. Perhaps then they mean that it is like a hologram. However, when we deny the reality of a hologram we do not deny that it is a real hologram, only that it is not what it appears to be. To make a denial of reality in this fashion, however, requires that the speaker be able to say what it really is (it is really a hologram), and once again the people in the simulated world cannot meaningfully refer to their experience of a physical universe as being something else, since they have no experience of what that something else could be.
What is left that a person in such a simulation could meaningfully claim? We might think that even if they couldn't deny their reality they might be able to meaningfully insist that there were some other aspects of reality, inaccessible to them, that were really the cause of their experiences. In this case the person is not denying the reality of the simulation, simply saying that there may be more to reality than is perceived. However, once again we run up against the problem that if this extra reality can't be observed to have a causal effect, then there is no way that person could meaningfully talk about it, and in fact they could have no reason to believe that it even exists.
Thus the only meaningful claim we can make along these lines is something to the effect of "there may be more to reality than what we have observed so far, although I cannot say what", and this claim is so empty that we might as well not make it at all.
Why then does the claim "we are all really brains in vats" seem meaningful? It is because we are confusing objects that we can talk meaningfully about with objects in some hyper-reality (more real than what we are perceiving), objects about which we have no information whatsoever. Really, saying "we are all brains in vats" is just as meaningful as saying "we are all akhaf in uyawer"; since we have no way of knowing what the hyper-reality is like, why use the same words for it? Yes, we could imagine real brains in vats, but then those people, living in a simulated world, would have no way of knowing what our reality was like, and thus could not meaningfully form a hypothesis about it. Of course all bets are off if you let information from one reality leak into the other, but since there is no evidence that this happens in our "real" world, we should be satisfied that we are not akhaf in uyawer.