From my point of view, no matter how advanced an artificial intelligence may be, it could not “dominate” or “take over”, in the real sense of these words, a biological form of intelligence (a society), no matter how primitive that society might be, because of that little (yet huge) thing we call consciousness. I also think it will never be possible to “upload” consciousness into an artificial intelligence and so “equip” it with one, because intelligence and consciousness are completely separate matters.
Intelligence is what makes one advanced or primitive; consciousness is what makes one good or evil.
One cannot program a machine to be good or evil, only to be more or less advanced. Take the following example:
A car manufacturing company is owned and directed by a human. This human decides to acquire intelligent robots to do the work of ALL the workers (designers, engineers, mechanics, electricians, welders, etc.). The cars produced this way are in practice much cheaper, because the company no longer needs to pay wages to anybody.
Now:
- (1) The company owner can put these cars on the market at their real (very low) price
- (2) The company owner can put these cars on the market at the same price as when he had employees
If (1), then the company owner is not a good person, though not necessarily an evil one; he simply does not care about the people who helped him become rich, only about keeping his usual benefit (profit).
If (2), then:
A) If the owner collects the entire profit for himself (which is now much more substantial than usual), then he is an evil person.
B) If the owner continues to pay some percentage to the former employees, even though he no longer calls them in to work, then he is a good person.
So, it is the human who decides to do something evil or something good. A robot will always do what it was programmed or taught to do – WITHOUT ASKING FOR A SALARY. I do not believe that an artificial intelligence will ever reach the point where, negotiating with a human, it will say “OK, I’ll do this job, but how much will you pay me for it?”. It knows precisely that earning money makes absolutely no sense to a robot, as money is among the things a robot will never need in order to function.
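To make the car example concrete, here is a minimal sketch (all numbers and names are hypothetical, invented purely for illustration): the pricing choice and the wage-sharing choice are parameters set by the human owner, while the program itself only carries out the arithmetic it was given.

```python
# Hypothetical illustration: the "good" or "evil" lives entirely in the
# arguments the human owner passes in; the code just computes the result.

def run_factory(robot_cost, old_price, usual_profit,
                sell_at_real_price, share_with_former_staff=0.0):
    """Produce one car and apply the owner's chosen policy.

    sell_at_real_price      -- option (1): low price, only the usual profit
    share_with_former_staff -- fraction of profit paid to former employees;
                               0.0 reproduces case (2A), > 0.0 gives (2B)
    """
    if sell_at_real_price:
        price = robot_cost + usual_profit   # option (1): very low price
    else:
        price = old_price                   # option (2): profit is now huge
    profit = price - robot_cost
    paid_out = profit * share_with_former_staff
    return {"price": price,
            "owner_keeps": profit - paid_out,
            "former_staff_get": paid_out}

# Case (1): cheap cars, owner keeps only his usual profit.
print(run_factory(4000, 20000, 1000, sell_at_real_price=True))

# Case (2A): old price, owner keeps the whole (much larger) profit.
print(run_factory(4000, 20000, 1000, sell_at_real_price=False))

# Case (2B): old price, half the profit still goes to the former employees.
print(run_factory(4000, 20000, 1000, sell_at_real_price=False,
                  share_with_former_staff=0.5))
```

Whichever case the owner picks, the function runs exactly the same way; the moral difference exists only in the human’s choice of parameters.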
I think it would be impossible to even implement in a robot the sort of intelligent thinking that goes “please do not power me off, do not shut me down”, as a robot does not perceive its existence as “life” (which is priceless to a human being). In this respect, it is also (some) humans who would become jealous upon learning that a particular robot will “live” much, much longer than they will, as such humans could never accept that “to live” and “to last” are completely different notions.
If I were offered the chance to have my brain uploaded to some machine (a computer-driven robot, for example), I would definitely refuse. This is also called “free will”. No matter how advanced an artificial intelligence may be, it will never have real “free will”, as free will is somehow part of consciousness. Then there is also the matter of “morality”. I think that not even all the quantum computers connected together would be able to compute and simulate morality or consciousness.
There already are intelligent systems that have been instructed to do (more or less) evil things, but they were programmed that way; this behavior did not come out of any consciousness or moral thinking in these systems. It was humans who decided to program them that way.