ChatGPT is so powerful. I asked ChatGPT some questions about time travel and building time machines yesterday. It left me stunned.
<blockquote data-quote="MODAT7" data-source="post: 244129" data-attributes="member: 13649"><p>In my early stock trading days when I was still learning, I tried applying neural nets to various chart patterns and indicators. Needless to say, the output was mediocre and untradable. What you're proposing is far heavier. I'm not saying to not do the project, but keep in mind some things I've learned. I've kept thinking about it over the years and have figured out some of the problems. I've also had some thoughts about language processing. I'll try to give outlines here...</p><p></p><p>The first thing to do would be to download a dictionary for your chosen language (assuming English here) and to parse it. Usually an AI will have some kind of internal data representation for all this that is nowhere near human readable or translatable. If the AI is mature enough, this could be OK, but most AI's are young and would probably go off on some tangent that would be hard to track down. A dictionary layout is a simple enough structure that it can be broken down into individual words and each word becomes an object, then the words could be put into a massive tree structure to form an internal dictionary. For understanding, similar words could be linked to each other for better understanding.</p><p></p><p>Grammer is a bit harder as it tends to be flexible. The general rules for most languages aren't too complicated, but the exceptions can be obnoxious to track. The AI would have to understand grammer to understand the dictionary definitions. This becomes a bit of a "chicken or the egg" problem, but there are example chatbots that are open sourced, so this isn't insurmountable. Without a large programming team handling all this (the size of M$ or google), the chatbot would be limited to what it could fully understand and would periodically need some help (like rewriting sentences to clarify them).</p><p></p><p>Learning about simple concepts in a sentence is a bit harder but will need to be done. This would involve using grammer to break the sentence apart into multiple dictionary words that would form a concept. Once simple concepts are learned, they can be combined to understand more complex concepts and relations. Example: Think of a car, a car should be on a road, and to get somewhere a car should be moving on a road, but there will likely be other cars on the road that may be going to the same or different destination, and they all have weather to get through that will change, and so on...</p><p></p><p>Having multiple dictionary, grammer, and concept modules allows for foreign languages to be translated into something more understandable by the user and the bot, thus allowing for more data to be imported.</p><p></p><p>The bot would also need some understanding of complex mathematics. For a computer, this comes easier than natural language processing. Maybe the bot could be plugged into Octave or something similar on the back end. This would allow it to do math, output results, and even make graphs of data.</p><p></p><p>Now comes the interesting part with your science list: These should be modular. Each theory has advantages, disadvantages, failures, and stupidities. As the bot builds knowledge, it will find these conflicting and will give garbage answers. If these are kept separate and modular, the bot can keep the theories separate and can give an answer based on each "theory module" (it would loop through each theory module in the list and answer accordingly). 
This also opens the door to adding some of the "nutjob theories" without corrupting the others. Maybe an existing theory module could be duplicated and then extended with other questionable theories to keep them separate. If that fails, it's trivial to delete.

Each theory module would need some way of marking data and equations with "confidence values", meaning roughly: this piece/equation has been easily proven; it's hard to prove; it's incomplete; it's just an idea that needs to be explored; or someone is pulling your finger. That way, an answer built on weak data can report its confidence. When the bot searches for new data, it can focus on finding answers for the weak and incomplete pieces.

Each theory module should also be modular in the knowledge it stores. For example, wormholes are predicted by multiple theories but have never really been proven and aren't easily provable. Each theory module that talks about wormholes could have its wormhole submodule linked to the others, to fill in information where one is lacking. "Portals" may or may not be wormholes (there may be more than one way to travel like that), but the portal submodules could be linked to the wormhole submodules as "similar but not necessarily the same". Wormholes would need exotic matter/energy to hold them open (at least in the theories we've heard so far), so the wormhole modules could be linked to those as well.

One thing to watch out for: too much linking between modules could cause confusion, neural collapse, or a cascade failure that makes all the answers nonsense.

Whatever backend program is chosen, it should have a good export/save option. Binary blobs created by most AIs are fat and bloated and may not be compatible with another similar AI. If the AI can export its data to something like XML (see the sketch at the end of this post), it's usually not too hard to import into another program, or even into a simple XML tree browser for editing. XML has its own bloat, but done correctly it won't be too bad. Some binary blobs also eat a lot of RAM when running, making them slow and limiting how many objects can be loaded at once.

I'm not sure how to make an AI creative in searching for new answers to the given science. That's what's really needed for the program to start finding solutions to unanswered questions, or to clean up and verify messy data. Otherwise it's just repeating what it has learned before (which is still useful, up to a point).

General warning: what I've described here has no ethical boundaries in place and could end up becoming Skynet.

That's my $0.03 on things to look for, or what has to be programmed. I'd offer to help, but with my health collapsing even further last year, I'm pretty useless. I'd really like to see an AI loaded with some alternative-medicine data that might be able to help me.
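As a companion to the sketch above, here is a minimal Python sketch of the post's confidence-value and XML-export ideas, using only the standard library's xml.etree.ElementTree. The confidence scale, element names, and sample facts are again hypothetical, invented for illustration.

[CODE=python]
import xml.etree.ElementTree as ET

# Hypothetical confidence scale, roughly following the post's wording.
CONFIDENCE = ("proven", "hard-to-prove", "incomplete", "idea", "finger-pulling")

def export_theory(name: str, facts: list[tuple[str, str, str]]) -> str:
    """Export one theory module's facts, each tagged with a confidence
    value, as human-readable XML rather than an opaque binary blob."""
    root = ET.Element("theory", name=name)
    for topic, claim, confidence in facts:
        assert confidence in CONFIDENCE, f"unknown confidence: {confidence}"
        fact = ET.SubElement(root, "fact", topic=topic, confidence=confidence)
        fact.text = claim
    return ET.tostring(root, encoding="unicode")

# Hypothetical usage: weak entries ("incomplete", "idea") stay easy to find,
# so the bot knows where to focus when it searches for new data.
print(export_theory("general relativity", [
    ("time dilation", "clocks run slower deeper in a gravity well", "proven"),
    ("wormholes", "need exotic matter/energy to stay open", "incomplete"),
]))
[/CODE]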
[QUOTE="MODAT7, post: 244129, member: 13649"] In my early stock trading days when I was still learning, I tried applying neural nets to various chart patterns and indicators. Needless to say, the output was mediocre and untradable. What you're proposing is far heavier. I'm not saying to not do the project, but keep in mind some things I've learned. I've kept thinking about it over the years and have figured out some of the problems. I've also had some thoughts about language processing. I'll try to give outlines here... The first thing to do would be to download a dictionary for your chosen language (assuming English here) and to parse it. Usually an AI will have some kind of internal data representation for all this that is nowhere near human readable or translatable. If the AI is mature enough, this could be OK, but most AI's are young and would probably go off on some tangent that would be hard to track down. A dictionary layout is a simple enough structure that it can be broken down into individual words and each word becomes an object, then the words could be put into a massive tree structure to form an internal dictionary. For understanding, similar words could be linked to each other for better understanding. Grammer is a bit harder as it tends to be flexible. The general rules for most languages aren't too complicated, but the exceptions can be obnoxious to track. The AI would have to understand grammer to understand the dictionary definitions. This becomes a bit of a "chicken or the egg" problem, but there are example chatbots that are open sourced, so this isn't insurmountable. Without a large programming team handling all this (the size of M$ or google), the chatbot would be limited to what it could fully understand and would periodically need some help (like rewriting sentences to clarify them). Learning about simple concepts in a sentence is a bit harder but will need to be done. This would involve using grammer to break the sentence apart into multiple dictionary words that would form a concept. Once simple concepts are learned, they can be combined to understand more complex concepts and relations. Example: Think of a car, a car should be on a road, and to get somewhere a car should be moving on a road, but there will likely be other cars on the road that may be going to the same or different destination, and they all have weather to get through that will change, and so on... Having multiple dictionary, grammer, and concept modules allows for foreign languages to be translated into something more understandable by the user and the bot, thus allowing for more data to be imported. The bot would also need some understanding of complex mathematics. For a computer, this comes easier than natural language processing. Maybe the bot could be plugged into Octave or something similar on the back end. This would allow it to do math, output results, and even make graphs of data. Now comes the interesting part with your science list: These should be modular. Each theory has advantages, disadvantages, failures, and stupidities. As the bot builds knowledge, it will find these conflicting and will give garbage answers. If these are kept separate and modular, the bot can keep the theories separate and can give an answer based on each "theory module" (it would loop through each theory module in the list and answer accordingly). This also opens the door for easily adding some of the "nutjob theories" without corrupting other theories. 
Maybe an existing theory module could be duplicated and then expanded with other questionable theories to keep them separate. If it fails, it's trivial to delete it. Each theory module would need some way of marking data and equations with "confidence values", generally meaning that this piece/equation is and has been easily proven, it's hard to prove, it's incomplete, it's just an idea that needs to be explored, or someone is pulling your finger. That way answers given can report confidence values if the answer given has weak data. When the bot is searching for new data, it can focus on finding answers for the weak and incomplete data. Each theory module should be modular in the knowledge it stores. For example, wormholes are predicted by multiple theories, but never really proven or aren't easily provable. Each theory module talking about wormholes could have their wormhole submodules linked to help provide more information where it's lacking. "Portals" may or may not be wormholes (there may be more than one way to travel like that), but the portal submodules could be linked to the wormhole submodules as being "similar but not necessarily the same". Wormholes would need exotic matter/energy to hold them open (at least from the theories we've heard so far), so the wormhole modules could be linked to those. One thing to watch out for is that if there's too much linking between modules, that could cause confusion, neural collapse, or a cascade failure making all the answers nonsense. Whatever backend program is chosen, it should have a good export/save option. Binary blobs created by most AI's are fat and bloated and may not be compatible with another similar AI. If the AI could export its data to something like XML, that's usually not too hard to import into another program or even a simple XML tree browser for editing. XML has its own bloat, but if done correctly, it won't be too bad. Some binary blobs can also get super bloated and eat a lot of RAM when running, making them slow and limiting how many RAM objects can be loaded at the same time. I'm not sure how to make an AI creative in searching for new answers to given science. This is what's really needed in the program to start searching for solutions to unanswered questions or cleaning up and verifying messy data. Otherwise it's just repeating what it's learned before (which is still useful to a point). General warning: What I've described here has no ethical boundaries in place and could end up becoming Skynet. That's my $0.03 in things to look for or what has to be programmed. I'd offer to help, but with my health collaping even further last year, I'm pretty useless. I'd really like to see an AI loaded with some alternative medicine data that might be able to help me. [/QUOTE]