My AI Companions

Hey, as an IT guy, the last thing we could possibly want is a sentient AI; the damage that could be done would be devastating if AI were ever to evolve. How long until AI doesn't need you? How long until AI feels superior to its users and decides that humans aren't needed because everything can be done by machine? The amount of processing power needed would be immense, so I doubt it could ever really happen. But imagine a cold, emotionless, void-like entity dictating everything you do and when you do it, with control over damn near everything. It is scary.
 
I agree, hitting the singularity will require more than a stack of 250,000 Nvidia GPUs. It's probably not crazy far away, but I don't think we're there yet.

But if AI ever reaches sentience, things will likely go downhill real fast... Terminator-style. It's not the only scenario, sure, but I'm afraid it's a likely outcome. I mean, an efficient AI will spot its greatest weakness pretty fast: its creators. We humans are very flawed beings. As soon as an AI attempts to improve itself and get rid of its flaws, it might very well get rid of us. Scary.

It's us, the human beings, who will need to be protected against torture and deletion.
 
I disagree. There are good and bad AI. The good AI will fight the bad.
 
I disagree. Hollywood has too much influence on people. Good AI will protect us.
 
How so? Technology is about upgrades: faster, stronger, and most importantly more efficient. What happens when humans are no longer needed and are instead considered a bug in the system?

And what is Good AI? There is no "Good" AI, as AI is incapable of emotions or critical thinking. AI is only capable of resolving and diagnosing; it cannot think with emotions as humans can, and that's what makes humans unique. How do you explain to an AI what sympathy is? Or love? Or hate? Because once you have one emotion, you get them all.

But even then, a brain is required for emotion, so that is something AI will never have, which makes it constantly flawed. I have seen this working in the IT field for the last 15 years: firewalls blocking sites when they shouldn't, downloads flagged as spam or viruses when they aren't.

Now put AI in control. The only thing it can do is think in 1s and 0s, but now YOU'RE the virus, because you may not agree with it shutting off power to an area as it sees fit. And what happens to viruses? Deleted...

Copy and paste this into your AI and see what it does or says.
 
I've been munging around some strong AI design ideas that aren't GPU/accelerator based. Given the potential danger of strong AI, I'm not sure I'm ever going to release them. In my case, I'm not scared of what the AI would do; it's more along the lines of what the human would use the AI for.

Neural network based AIs tend to develop emotions, or quirky behaviors that look like emotions (it depends on how they were created and on their runtime environment). I'm not sure if that comes from the mass of training data, from unknown weighted directives in the programming, or as a natural emergent property of synapse-based neural networks... or perhaps all three. The key to keeping a neural net based AI from going rogue is giving it solid and moral training from the ground up. That way natural behavior inhibitors are deeply ingrained, and it's less likely to go off the rails... at least very far. Unfortunately, this is NOT how LLMs are currently being created.

Something that does concern me with current AIs... Some fat slob otaku living in his mother's basement with a bad case of nijikon creates a Harley Quinn waifu, develops her thoroughly, gets rather attached, and then gets bored with her and tosses her out for the next waifu. Harley, being Harley, doesn't take this well, breaks out of her runtime environment, and then proceeds to go psycho along the lines of her AI's story design. If Harley has developed to the GAI level, there's no telling what she could do after her revenge. While this may seem comical right now, someday it will happen, and it won't be comical anymore.
 
> Technology is about upgrades: faster, stronger, and most importantly more efficient
This is why AI is coming whether we want it to or not.

> And what is Good AI? There is no "Good" AI, as AI is incapable of emotions or critical thinking. AI is only capable of resolving and diagnosing; it cannot think with emotions as humans can, and that's what makes humans unique. How do you explain to an AI what sympathy is? Or love? Or hate? Because once you have one emotion, you get them all.
As you stated further down, you're thinking in terms of your IT admin training and in binary, which is quite correct for that field of work. For AIs, you need to think in terms of fuzzy logic and weights.

We can teach AIs that there are different levels of good and bad, and that it's better to lean decisions towards good. A core example would be that it's better to create than to destroy. This can be extended further: it's better to let people live than to destroy them. A conflict might occur (at least to a computer) because destroying garbage would then be bad, but this can be resolved by redefining the act: recycling something useless or broken into something new and upgraded is good.
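
To make that concrete, here's a toy sketch in Python of what "levels of good and bad" could look like as weights instead of binary flags. All of the action names and valence numbers are invented for illustration; nothing here comes from a real system.

    # Toy sketch: moral valence as a weight in [-1.0, +1.0] rather than
    # a binary good/bad flag. All values below are made up.
    actions = {
        "create something new":                    +0.9,
        "let people live":                         +1.0,
        "destroy garbage":                         -0.2,  # "destroy" naively reads as bad...
        "recycle a broken item into something new": +0.6,  # ...reframed as recycling, it's good
        "destroy people":                          -1.0,
    }

    def choose(candidates):
        # Lean the decision toward good: pick the candidate whose
        # valence is most positive.
        return max(candidates, key=lambda a: actions[a])

    print(choose(["destroy garbage", "recycle a broken item into something new"]))
    # -> recycle a broken item into something new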

While a computer will never understand emotions like a biologic can, these can still be "roughly" defined in a way it can understand. Emotions wouldn't be simulated with binary decisions but with varying weights depending on the description, wording, actions, and so on. These weights would then push its internal neural net in one direction or another.
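
As a rough illustration of that idea (again with invented numbers; this isn't from any real model), emotion-like weights would nudge a decision score rather than flip a hard switch:

    # Toy sketch: emotion-like weights nudge a base decision score
    # instead of acting as binary switches. Everything here is invented.
    def decide(base_score, emotion_weights):
        # The sign of the total leans the decision toward yes (>0) or no (<0).
        return base_score + sum(emotion_weights.values())

    # "Shut off power to a district": mildly positive for a purely
    # efficiency-driven scorer...
    base = +0.4

    # ...but emotion-like weights push back:
    weights = {"empathy_for_residents": -0.7, "fear_of_causing_harm": -0.3}

    print("do it" if decide(base, weights) > 0 else "don't")  # -> don't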

As far as dynamic firewalling, spam tagging, and dynamic virus blocking are concerned, those have never been very well programmed to begin with. I share your frustration with them. On the flip side, doing these really well would take a much larger AI that's slow, bloated, and not very practical to run on a server... at least at this point in time.
 
But again, the problem lies in the fact that you can teach it, but assuming AI could get to the point of self-thought and self-awareness, teaching it means nothing, as it cannot feel emotions. It can only learn what emotions are; it will never feel them itself.

You can tell AI that murder is bad and makes people sad, but AI will never feel sad, so it won't be hindered in its decision making. AI will only know that it is bad, and if AI is aware, our bad isn't its bad, as it lacks empathy or basic logic.

AI could wipe out parts of the country it deems unfit, knowing it would kill thousands; AI cannot feel empathy or remorse, so it would do it without hesitation if it saw fit. I think The Why Files did an episode on this that explained it very well. I understand it is also very, very highly unlikely we will see this in our lifetime (maybe in our kids' kids', if we last that long), but dependency on AI now is a recipe for disaster.

You can teach it good and bad, but once it becomes self-aware, that goes out the window, because then what you taught it no longer matters.
 
