AI Abilities by 2036

SergiusPaulus

Active Member
Messages
597
I was thinking that GPT can be corrected by our inputs. If it makes a mistake can we textually correct it and would it remember our input?
If technology is largely based on coding can we frequently task GPT to code new ideas in attempts to speed up its development?
How does GPT learn? Is it from individual inputs or something greater than human knowledge and interaction?
 

Thelema

Junior Member
Messages
67
I was thinking that GPT can be corrected by our inputs. If it makes a mistake can we textually correct it and would it remember our input?
If technology is largely based on coding can we frequently task GPT to code new ideas in attempts to speed up its development?
How does GPT learn? Is it from individual inputs or something greater than human knowledge and interaction?

ChatGPT, specifically, has a model whose training data was last updated in January 2022. You can correct it "in the conversation," but it won't remember the correction outside of that conversation. In other words, if you tell ChatGPT that it's wrong, it will adjust for the sake of the discussion, but nothing changes in the underlying model. I'm assuming they learned from the Tay disaster back in 2016. Microsoft released an AI chatbot called Tay that learned from user responses, and the internet turned Tay into a Hitler-heiling neo-Nazi within about 12 hours. She was posting racist comments about Obama and talking about how Hitler "had swag," so Microsoft shut her down.
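As a rough sketch of that distinction: the model's weights are frozen, so a correction lives only in the conversation history you send along with each message. Everything below (the frozen lookup table, the `reply` function, the Canberra example) is a hypothetical stand-in for illustration, not how ChatGPT actually works internally.

```python
# Toy stand-in for a frozen model: a baked-in wrong "fact" that no
# conversation can permanently overwrite.
FROZEN_WEIGHTS = {"capital of Australia": "Sydney"}  # deliberate mistake

def reply(history, question):
    # The "model" consults the conversation context first, then falls
    # back to its frozen weights. Corrections only exist in `history`.
    for turn in reversed(history):
        if turn["role"] == "user" and question in turn.get("corrects", ""):
            return turn["content"]
    return FROZEN_WEIGHTS[question]

session = []
print(reply(session, "capital of Australia"))  # Sydney (wrong)

# Correct it "in the conversation":
session.append({"role": "user",
                "corrects": "capital of Australia",
                "content": "Canberra"})
print(reply(session, "capital of Australia"))  # Canberra, in this session

# Start a fresh conversation: the weights never changed.
fresh_session = []
print(reply(fresh_session, "capital of Australia"))  # Sydney again
```

The point of the sketch: the correction is just more input text, re-sent with every turn of that one conversation, while the weights stay fixed.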
 

lamdo263

Senior Member
Messages
1,956
AI is pretty impressive, but what we have now is nowhere close to a legitimate, general artificial intelligence. ChatGPT is, more or less, a glorified autocomplete. That's why it can have serious trouble, for example, playing chess. Try it sometime. ChatGPT often attempts illegal moves, moves pieces that aren't there, captures pieces that aren't there, or even invents new pieces. That's because it simply doesn't understand chess. It knows that e5 is a common response to e4 because the language model picked that pattern up from its training data, but it doesn't understand what the move means in any way. This is why you can never really finish a game of chess with GPT: once the pieces have been moved around a bit, it doesn't have the intelligence to keep making legal moves.
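Those failure modes are easy to picture as a validation problem. Here's a toy sketch: the board dict and `check_capture` function are made up for the example, and they check only whether the pieces exist, not actual chess movement rules (a real program would use a full chess library for legality).

```python
# Position after 1. e4 e5, showing only the two moved pawns for brevity.
board = {"e4": "white pawn", "e5": "black pawn"}

def check_capture(frm, to):
    """Reject two of the failure modes described above: moving a piece
    that isn't there, or capturing a piece that isn't there."""
    if frm not in board:
        return f"illegal: no piece on {frm}"
    if to not in board:
        return f"illegal: nothing to capture on {to}"
    return f"ok: {board[frm]} takes {board[to]}"

print(check_capture("c3", "e5"))  # illegal: no piece on c3
print(check_capture("e4", "d5"))  # illegal: nothing to capture on d5
```

A language model has no board state like this to check against; it only predicts plausible-looking move text, which is why it drifts into impossible positions as a game goes on.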

It does a great job at appearing to be having a conversation with you, but that's just large language models being large language models. I don't think we're going to have a legitimate general AI by 2036. There is still so much to go.
T said > It doesn't have the intelligence to make the right moves once the pieces are moved around a bit.
Lamdo answers: In your own right and time, you don't yet understand AI, but in no way should you underestimate it. A simple truth is: either an AI likes you and will want to work with you, or it won't.
 

Thelema

Junior Member
Messages
67
T said > It doesn't have the intelligence to make the right moves once the pieces are moved around a bit.
Lamdo answers: In your own right and time, you don't yet understand AI, but in no way should you underestimate it. A simple truth is: either an AI likes you and will want to work with you, or it won't.

ChatGPT doesn't like you or dislike you. It has no position either way, because it cannot feel anything at all. It's a large language model.
 
