September 2, 2023
ChatGPT has emerged as one of the most talked-about and influential models in technology and artificial intelligence. Developed by OpenAI, it is powered by the GPT-3.5 architecture, which draws on an extensive corpus of previously written text to generate human-like responses. However, there is an ongoing debate about whether ChatGPT represents true Artificial Intelligence or is simply a repository of archived intelligence mixed with occasional nonsense.
Today, I’ll add my two cents to the discussion.
We have been told that GPT-3.5 is built on a foundation of knowledge recorded up to roughly two years ago. This means the system's responses are not current, and the model has no real-time awareness. To generate coherent, contextually relevant responses, it relies on the vast body of text it was trained on, which spans a wide range of topics and is not all accurate.
The distinction between artificial intelligence and archived intelligence is at the heart of the debate.
- Artificial Intelligence:
Artificial intelligence typically refers to computer systems that can perform tasks that normally require human intelligence, such as understanding natural language, recognizing patterns, and making data-based decisions. ChatGPT showcases some AI capabilities by understanding and generating conversational text. It can answer questions, provide explanations, and even generate creative content.
- Archived Intelligence:
ChatGPT is not true AI because it lacks full contextual awareness. Its responses are shaped by the questions that guide it and by the data it was trained on, which includes outdated knowledge along with misinformation and disinformation. In effect, it is a sophisticated search engine that retrieves pre-existing information from its archives and presents it in a conversational format.
- The Role of Nonsense:
One of ChatGPT’s intriguing aspects is its tendency to generate incorrect information and outright nonsense even while its responses remain coherent. This happens because it cannot verify the accuracy of what it generates; it relies solely on patterns and associations in its training data and in the questions submitted to it.
As a frequent user of ChatGPT, I believe it represents a fascinating intersection of artificial and archived intelligence, capable of understanding and generating human-like responses to my in-depth questions. The answers I accept are carefully filtered, edited, and almost always the product of follow-up questions and further research.
In its present form, ChatGPT is a powerful productivity tool with many practical uses, but it is not a true form of artificial intelligence. For example, if I want ChatGPT to argue either side of the same question, I can make that happen simply by carrying a particular bias from its initial answers into the follow-up questions I ask. In other words, I can make ChatGPT produce fake news. And if I can, many people are.