Will future civilisations have enough computing power and programming skill to create "ancestor simulations"? That is the question Nick Bostrom asks us to consider. That is: will someone one day run detailed simulations of the simulators' predecessors - detailed enough for the simulated minds to be conscious and have the same kinds of experiences we have? Perhaps not now, or in 50 years, but within, say, 10 million years.
Put another way: if we extrapolate the expected technological advances and think through the logical conclusions, we arrive at the simulation argument:
1. Almost all civilisations at our level of development become extinct before becoming technologically mature.
2. The fraction of technologically mature civilisations that are interested in creating ancestor simulations is almost zero.
3. You are almost certainly living in a computer simulation.
If the simulation argument is correct, at least one of these three propositions is true (it does not tell us which). See the FAQ for the details of the simulation argument, and see the Usenet debate for a lively discussion. The thread was started by me and has contributions from SF author Greg Egan and others.
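The bookkeeping behind the argument can be sketched in a few lines: count simulated and non-simulated observers, and see what fraction of all human-like experiences are lived inside simulations. This is only an illustrative sketch of Bostrom's observer-counting fraction; the parameter names and the sample values below are my own assumptions, not figures from the argument itself.

```python
def simulated_fraction(f_p, n_sim, h=1.0):
    """Fraction of human-like observers who live in simulations.

    f_p:   fraction of civilisations at our stage that reach
           technological maturity (and want to run simulations)
    n_sim: average number of ancestor simulations each such
           civilisation runs
    h:     average population per civilisation (note: it cancels out)
    """
    simulated = f_p * n_sim * h   # observers inside simulations
    real = h                      # observers in the one basement-level history
    return simulated / (simulated + real)

# If even 1 in 100 civilisations matures and each runs a million
# ancestor simulations, almost every observer is simulated:
print(simulated_fraction(0.01, 1e6))   # ~0.9999
```

Unless `f_p * n_sim` is very close to zero (propositions 1 or 2), the fraction is very close to one (proposition 3) - which is the whole force of the trilemma.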
Isn't the simulation hypothesis untestable?
Certainly, there are observations that could show us that we are living in a simulation. For example, the simulators could make a "window" pop up in mid-air in front of you with the text "YOU ARE LIVING IN A COMPUTER SIMULATION" - think Greg Egan here. But the hypothesis is not testable in the sense that we could design an experiment to settle the question either way: that you don't see signs popping up in mid-air is not evidence that you are not living in a simulation.
What kind of computer could do the simulation?
Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second, carried out on 10^31 bits. Compare this with the 10^14 operations per second we would need to simulate a human mind - perhaps as much as 10^17. For a complete simulation of human history, society and surroundings, Nick Bostrom estimates that we would need roughly 10^33-10^36 operations (100 billion humans x 50 years/human x 30 million secs/year x 10^14-10^17 operations in a human mind/sec) - which is far from the theoretical limit, and well within speculations of a planet-size computer doing 10^42 operations per second.
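The arithmetic is worth spelling out. Using only the figures quoted above (100 billion humans, 50-year lives, ~30 million seconds per year, and the 10^14-10^17 range for a human mind), a quick sketch:

```python
# Back-of-the-envelope estimate for simulating all of human history,
# using the figures quoted in the text above.
humans = 100e9               # ~100 billion humans who have ever lived
years_per_human = 50         # average lifespan in years
secs_per_year = 30e6         # ~30 million seconds per year
ops_low, ops_high = 1e14, 1e17   # ops/sec to simulate one human mind

brain_seconds = humans * years_per_human * secs_per_year   # ~1.5e20
low = brain_seconds * ops_low                              # ~1.5e34
high = brain_seconds * ops_high                            # ~1.5e37
print(f"{low:.1e} .. {high:.1e} operations")

# Seth Lloyd's 1 kg bound is ~5e50 ops/SECOND, so a single such
# computer could run all of human history in a minuscule fraction
# of one second.
```

Even at the high end, one ancestor simulation costs a vanishing fraction of what physics permits a mature civilisation to compute.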
Finding a suitable energy source for all this information processing is another problem. One could englobe the sun with collectors, so that the sun's entire energy output could be used for information processing - or one could harness fusion in some other way.
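How far would the sun's output go? A hedged back-of-the-envelope, using standard physical constants rather than anything from the text: take the solar luminosity and divide by the Landauer limit - the minimum energy an irreversible bit operation costs at a given temperature. The 300 K operating temperature is an assumption.

```python
import math

# Landauer-limit budget for a sun-englobing collector (a sketch;
# the operating temperature is an assumed figure).
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # assumed operating temperature, K
solar_output = 3.8e26     # solar luminosity, W

joules_per_op = k_B * T * math.log(2)     # Landauer limit, ~2.9e-21 J
ops_per_sec = solar_output / joules_per_op
print(f"{ops_per_sec:.1e} ops/sec")       # ~1.3e47
```

That is ~10^47 irreversible operations per second - enough to run a whole 10^36-operation ancestor simulation many billions of times over, every second.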
A quantum computer, or some kind of black hole computer, would be great. Frank Tipler suggests modifying the universe itself (or the simulation, or whatever it is). In Tipler's model the universe is closed (engineered so) and undergoes an anisotropic collapse. Usable energy grows faster than temperature, allowing information processing to diverge: subjective time (information) goes to infinity as the singularity is approached, and everything there ever was in the universe can then be resurrected, based on the information carried by the light (particles) on its way to the collapsing singularity. Heaven, at last, in the Omega Point.
Tipler's argument is: any mental life, any stream of consciousness, can be replicated on a computer. The total number of human-like streams of consciousness is finite. The processing power of this end-of-the-world computer is effectively infinite. The end-of-time computer will therefore be able to simulate every (possible) human consciousness there ever was. Hence our resurrection is inevitable.
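The finiteness premise can be made vivid with an assumed figure: if a human brain state can be described by at most B bits (a Bekenstein-style bound is often quoted somewhere around 10^42 bits - the exact number is an assumption here, not Tipler's), then there are at most 2^B distinct states. Stupendous, but finite, which is all the argument needs.

```python
import math

# Sketch of Tipler's finiteness premise. B is an ASSUMED upper bound
# on the bits needed to describe one brain state; 2**B is far too
# large to compute directly, so we work with its base-10 logarithm.
B = 1e42                            # assumed bits per brain state
log10_states = B * math.log10(2)    # log10(2**B) ~ 3.0e41
print(f"at most 10^{log10_states:.1e} possible brain states")
```

Finite, so an unbounded end-of-time computer could in principle enumerate every one of them.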
And, even without Tipler, the conclusion seems clear: posthuman civilisations will have enough computing power to run hugely many ancestor simulations even while using only a tiny fraction of their resources for the purpose.
Is it ethical to start a human race in a simulation?
Maybe advanced civilisations ban all ancestor simulations containing humans as unethical, because of the suffering inflicted on the simulations' inhabitants? And maybe the scientific value of an ancestor simulation is negligible - and, for pleasure, the inhabitants of advanced civilisations could just as well address their brains' pleasure centers directly.
The existence of (unnecessary) evil is a huge problem in theology. But it is only a problem if one assumes that the world was created by an all-powerful, all-knowing and perfectly good being. With a creator who is less than these things, it should be no surprise if the simulation is less than perfect.
And is it acceptable to stop a simulation (e.g. because it costs a lot of money to run) once you have the results you want - if people are actually living in this world? When a person dies in the simulation, are you (ethically speaking) required to uplift/resurrect this person into another simulation, where he can live forever with his simulated friends? Or should the uplift depend on how the person did in the simulation?
And is it only the "conscious" parts of the simulations we should worry about? Are such distinctions even possible, given what we know about the human mind? A human conscious mind consistently misattributes behaviour and perception to itself, even when these are computed by lower brain levels and occur several hundred milliseconds before we become conscious of them. A conscious mind that experiences itself as unitary, despite internal delays and obvious parts? Is it only that weird thing we should be nice to and treat with respect?
Maybe some humans in the simulations are real and others are zombies?
And are all parts of the simulation equally important?
Maybe you populate your simulation with a few "real" people, while the rest are just zombies there to make it a credible universe? This leaves the question of how to treat a zombie person when you meet one in the simulation. Is it OK to kill a zombie? And how would you know the difference? We must assume zombies are put into the simulation because they are computationally less expensive.
Substrate independence and superintelligence.
Given (brain) simulation, you also have superintelligence!
Substrate independence is the idea that conscious minds could in principle be implemented not only on carbon-based biological neurons, but also on some other computational substrate, such as silicon-based processors.
Given substrate independence, transferring a human brain into silicon could become a reality once a technology is available that can take a very detailed scan of a human brain and upload it to a computer. But if that becomes reality, one is also very close to superintelligence: the upload would be identical to the original brain, but able to run at much higher speeds. This makes possible a scenario where years of results from a scanned brain could be fed back to the original brain as instant knowledge. Normal human memories, skills, values and consciousness could then serve as a basis for superintelligence.
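The speed-up arithmetic makes the point concrete. Both hardware figures below are assumptions for illustration (the 10^17 ops/sec figure is the high-end brain estimate from earlier; the hardware figure is an arbitrary choice far below Lloyd's 5*10^50 limit):

```python
# Illustrative upload speed-up: how much subjective time does a
# fast-running upload accumulate per real-world day?
brain_ops_per_sec = 1e17        # high-end estimate for a human mind
hardware_ops_per_sec = 1e26     # ASSUMED hardware speed (well below
                                # Lloyd's 5e50 theoretical limit)
speedup = hardware_ops_per_sec / brain_ops_per_sec   # 1e9

secs_per_day = 86_400
secs_per_year = 3.15e7
subjective_years_per_real_day = speedup * secs_per_day / secs_per_year
print(f"~{subjective_years_per_real_day:.0f} subjective years per real day")
```

At a billion-fold speed-up, the upload lives through millions of subjective years every real day - which is why "years of results fed back as instant knowledge" is not an exaggeration.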
It follows that the morality of running simulations (of brains and worlds) might be questionable, but we can't assume it is without benefits. Indeed, we can assume that a simulation that contains consciousness will be pretty close to breaking into superintelligence.
Saturday, April 14, 2007
Simon Laub--- www.simonlaub.net ---