AI is pretty impressive, but what we have now is nowhere close to a legitimate, general artificial intelligence. ChatGPT is, more or less, a glorified autocorrect. That's why it can have serious trouble playing chess, for example. Try it sometime: ChatGPT often makes illegal moves, moves or captures pieces that aren't on the board, or even conjures up new pieces. This is because it just doesn't understand chess. It knows that e5 is a common response to e4 because the language model picked that pattern up from its training data, but it doesn't understand what the move means in any way. This is why you can never really finish a game of chess with GPT: it doesn't track the position well enough to keep making sensible moves once the pieces have been moved around a bit.
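To make that failure mode concrete, here is a toy sketch in plain Python (my own illustration, not anything from ChatGPT's actual machinery) of the minimal state a chess player has to track and a pure next-token predictor does not: before you can move or capture with a piece, that piece has to actually exist on its source square.

```python
# Toy illustration of chess state-tracking. This is NOT a full
# legality checker -- it only catches the "hallucinated piece"
# failure mode: moving a piece that isn't on the board.

def start_board():
    """Starting position as a dict of square -> piece (simplified)."""
    back_rank = ["R", "N", "B", "Q", "K", "B", "N", "R"]
    board = {}
    for i, file in enumerate("abcdefgh"):
        board[file + "1"] = "w" + back_rank[i]  # white back rank
        board[file + "2"] = "wP"                # white pawns
        board[file + "7"] = "bP"                # black pawns
        board[file + "8"] = "b" + back_rank[i]  # black back rank
    return board

def try_move(board, src, dst):
    """Move the piece on src to dst; reject moves of nonexistent pieces."""
    if src not in board:
        return False  # no piece there: the move is nonsense
    board[dst] = board.pop(src)  # captures by overwriting dst
    return True

board = start_board()
print(try_move(board, "e2", "e4"))  # a real pawn: accepted
print(try_move(board, "e5", "e6"))  # nothing on e5: rejected
```

A text predictor that only emits "e4", "e5", "Nf3", and so on has no equivalent of this `board` dict, which is why it can confidently output moves that this two-line check would throw out.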
It does a great job of appearing to hold a conversation with you, but that's just large language models being large language models. I don't think we're going to have a legitimate general AI by 2036. There is still a long way to go.