The Technological Singularity - The Beginning of a New Era, or the End?

Negan

THE member
Messages
181
I'm sure those of you who visit this section are well versed in what I mean when I say "the singularity".

For those of you unaware, here's a snippet from Wikipedia:
The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.
Link (Wikipedia)

I'm a firm believer that it's not a question of if, but when. I've seen estimates put it anywhere from just five years from now to twenty-five years from now. I personally think we'll see it within the next 10-15 years. I think it all boils down to quantum computers and how fast they become accessible to those in various scientific fields. I saw CosmosLunarGirl post a thread titled "Quantum World" (Link), which sparked this thread. Make sure to check out her thread, as it's got a pretty neat article about what may very well be the world's first true quantum computer.

If this plays out, and the computer in the article does turn out to be the world's first quantum computer, we're on track to making the singularity happen very soon.

But this raises a couple of questions. Yes, I'm sure it'll be a boon for mankind in many positive ways, like disease research, environmental research, and the like, which could usher in a new era unlike anything mankind has seen. On the flip side, however, could this be the end? Time and time again, great minds have warned us about A.I. and the potential dangers it presents. People like the late Stephen Hawking and Elon Musk, to name just two.

With quantum computing, simulating a true infallible brain comes closer to reality. This in itself isn't horrible; however, the conclusions this infallible brain comes to could very well be.

"Humans kill themselves in wars, over trivial things. They will never change. Like a dog hit by a car and in pain, it would be better to end it now then prolong the suffering".

Now, that might not ever cross the AI's mind, but I have a feeling that, given enough time, it will. I tend to think everything will work out fine, and I don't necessarily believe that AIs will kill us out of pity. But it's something I do like to keep in the back of my mind. I'd love to hear your thoughts on an eventual artificial superintelligence: what it may think of humanity, and what it may believe is best for us. Do you think we'll come out OK?
 

TimeFlipper

Senior Member
Messages
13,705
My own thoughts are that the ASI will possess within it the "psychic signature" of its designer, or creator if you wish to call it that, which was a term used in the 1970s and '80s when describing such things as ASIs.
The future wars that follow will be between the "good and the bad" types of ASI, although I have no idea how the human race will fit into those wars.
 

Negan

That's quite interesting, TimeFlipper. I've never thought about psychic imprinting and what role it could possibly play, if any. I'd like to hear your beliefs on the subject.

Psychic imprinting aside for now, I tend to think that AIs will be like Spock from Star Trek: purely logical, to the point where it's maddening. There are so many logical answers for the ills of the world today, but they're far from pleasant answers. (Probably far from humane as well.) And that is what scares me about AI. I think in the beginning, AI will "feel". It'll feel happy, sad... angry. If we survive the angry phase, we'll see the AI become a cold, calculating computer that deals only in facts and logic.

Take my example above: "Humans keep hurting each other; they will never learn. Destruction means no more suffering." It's not doing it because it's evil, but because it's the most logical conclusion to reach. Hopefully the psychic signature you speak of will give this AI a conscience.
 
