Virtual worlds and the problems of philosophy
In his book Reality+ (2022) (Swedish title: “Virtuella Världar”), David Chalmers discusses a simulation hypothesis based mainly on today’s VR and AI technology. What is really real? Can there be consciousness in a digital world? Is it possible to live a good life in a virtual world?
In a time of disruptive fragmentation, David Chalmers articulates crucial philosophical questions. Are we living one level down in a parallel Matrix multiverse? Is the world as we know it about to disappear, with new worlds about to take shape, a new virtual reality? How do we know we’re not in a computer simulation? Chalmers launches into a long speculative discussion with various dips into the problems of Descartes, Hume, and Kant: how can we know anything about reality at all? And how do mind and body interact? How can something as seemingly non-material as a thought communicate with a body at all? Is there a mind beyond the body, and is there a God? Are illusions real? What is an illusion? Is a computer simulation a reality for the user? How does a VR user distinguish between reality and the perceived simulations? Are the interactions real or illusions?
How can we understand this dualism? Augmented reality is a fact of life: we spend loads of time with smart watches, calculators, and mobile phones every day; what are the long-term implications of this for humanity, for the human mind, in the decades or centuries to come?
Chalmers does not answer these questions, which is perhaps not so surprising, since our epistemology has not really reached the point where they can be answered. But Chalmers keeps on struggling with these dualities, amiably trying to strangle René Descartes, and I’m impressed with the eagerness with which he digs into all this. Still, this computer-AI-VR simulation hypothesis, built on analogies with today’s technology, is risky business. It is historically risky to model ideas on the latest technology: as Matthew Cobb has shown in his book “The Idea of the Brain” (2020), such arguments soon become outdated as the technology advances.
Sorry, but the book is a little too long. It seems that no editor dared to question, or even had the energy to edit, the massive flow of words. Very interesting, though: the book reads almost like an ongoing personal conversation, down-to-earth in style, always open and friendly rather than condescending. A book well worth reading.
Are you a Zombie?
Are you a REAL living person? Are you really you? How do you know…? What is a self?
Philosophical zombies, according to Chalmers, are indistinguishable from normal human beings in every respect except that they lack conscious awareness or experience. They have the same biological structures, including a brain that functions in the same way as a human brain. They can perform tasks that require cognition, memory, and problem-solving. Despite this human-like behaviour and physiology, philosophical zombies lack conscious experience. They do not have any subjective experiences, or "qualia": the qualitative aspects of conscious experience, such as the redness of red or the pain of a headache. There is 'nothing it is like' to be a philosophical zombie – no inner world (or is there?).
Internally, they are void of mental life or consciousness, but outwardly they are indistinguishable from normal humans, including displaying behaviours that suggest emotions and feelings. Unlike humans, who can often be self-centred as part of their cognitive predictive mechanisms, a zombie does not exhibit narcissism. Its actions and responses are driven by calculated probabilities rather than self-reflection or egocentric motivations. But what if robots are self-aware? If they have consciousness, would you turn them off? Would that be ethical?
Humans, through experience, become more and more thoughtful or “wise”, and might exhibit a certain slowness in response owing to an awareness of the possibility, or risk, of being wrong 😊 This reflective nature adds a layer of “humanity” to their actions. In artificial intelligence systems like ChatGPT, a delay in response can give an impression of thoughtfulness, making the system appear more human-like. This parallels the slower, more deliberate responses seen in humans due to contemplation.
Uppsala, Sweden, January 2024