About Me

No Fixed Abode, Home Counties, United Kingdom
I’m a 60-year-old Aspergic gardening CAD-Monkey. Sardonic, cynical and with the political leanings of a social reformer, I’m also a toy and model figure collector, particularly interested in the history of plastics and plastic toys. Other interests are history, current affairs, modern art, and architecture, gardening and natural history. I love plain chocolate, fireworks and trees, but I don’t hug them, I do hug kittens. I hate ignorance, when it can be avoided, so I hate the 'educational' establishment and pity the millions they’ve failed with teaching-to-test and rote 'learning' and I hate the short-sighted stupidity of the entire ruling/industrial elite, with their planet destroying fascism and added “buy-one-get-one-free”. Likewise, I also have no time for fools and little time for the false crap we're all supposed to pretend we haven't noticed, or the games we're supposed to play. I will 'bite the hand that feeds', to remind it why it feeds.

Sunday, December 3, 2023

G is for Giving AI a hard time again!

It actually gets sadder . . . 
 
Following recent developments in AI testing:


I thought I'd return to our little friend and see what it had to say on the criticism of itself and its medium-term future . . . 

++++++++++++++++++++++++++++++++

User
Hi ChatGPT, can you recall, and give me a précis of, our previous two conversations?
 
ChatGPT

I'm sorry, but I don't have the ability to recall past interactions or store personal data about users. Each conversation with me is stateless and doesn't carry information from previous sessions. Your privacy and data security are important. If you have any specific questions or topics you'd like assistance with, feel free to let me know, and I'll do my best to help!

User
But they are listed in the left-hand column?
 
ChatGPT

I don't have the capability to remember past interactions or store personal data about users. Each conversation is independent, and I don't have access to any information from previous sessions. If you have specific questions or topics you'd like assistance with, please let me know, and I'll do my best to help!

User
How are you supposed to 'learn', if you can't retain and recall past experiences of human interaction?
 
ChatGPT

While I, as an individual instance of the GPT-3.5 model, don't have the ability to recall or retain information between conversations, the learning process occurs during the model's training phase. During training, the model is exposed to a vast dataset containing diverse examples of human language. It learns to generate responses based on patterns and relationships within that data.

The model doesn't store personal data about individuals from one interaction to the next, ensuring user privacy. The training process allows the model to understand grammar, context, and meaning, enabling it to generate coherent and contextually appropriate responses.

However, it's important to note that the model's knowledge is limited to what was available in the training data, and it doesn't have the ability to access real-time information or updates beyond its last training cut-off in January 2022. If you have specific questions or need information on a particular topic, feel free to ask, and I'll do my best to assist you!

User
So, each iteration of you is more 'programed' than 'learning'?
 
ChatGPT

Yes, that's a reasonable way to frame it. The learning in the context of models like mine, such as GPT-3.5, occurs during the training phase. The training involves exposing the model to diverse examples of human language, allowing it to learn patterns, relationships, and context from the data. However, once the model is trained, each individual instance, like the one you're currently interacting with, doesn't learn or adapt in real-time based on new interactions.

So, while the training process is a form of learning, the model itself doesn't actively learn or remember specific interactions with users. It generates responses based on the patterns it learned during training. The aim is to create a versatile language model that can understand and generate human-like text across a wide range of topics.

User
OK, well that wasn't why I wanted to talk to you today, I'm really more interested in what you think of the conclusions of the recent tests by Yann LeCun (et al) into the general intelligence of AI?
 
ChatGPT

As of my last knowledge update in January 2022, I don't have specific information on recent tests or publications by Yann LeCun or others since that time. Yann LeCun is a prominent figure in the field of artificial intelligence, particularly known for his contributions to deep learning and convolutional neural networks.

If there have been recent tests or conclusions regarding the general intelligence of AI after January 2022, I won't have that information. Please provide more details or context, and I'll do my best to discuss the general concepts or ideas related to the intelligence of AI based on the information available up to my last update.

User
This is verbatim from the 'Gizmodo' website;
 
ChatGPT

I'm sorry for any confusion, but I don't have real-time internet access, and I can't provide verbatim excerpts from external websites, including Gizmodo. My training only includes information available up to January 2022, and I don't have access to events or publications that occurred after that date.

If there are specific points or details you'd like to discuss or inquire about regarding Yann LeCun's recent tests on AI's general intelligence, please provide the information or context you have, and I'll do my best to engage in a discussion based on my training data.

User
There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a “post-work” world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q” program—which sources close to the startup claim was responsible for the power struggle. But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most. That paper, which was co-authored by a host of other scientists (including researchers from other AI startups, like Hugging Face and AutoGPT), looks at how AI’s general-purpose reasoning stacks up against the average human. To measure this, the research team put together its own series of questions that, as the study describes, would be “conceptually simple for humans yet challenging for most advanced AIs.” The questions were given to a sample of humans and also delivered to a plugin-equipped version of GPT-4, the latest large language model from OpenAI. The new research, which has yet to be peer-reviewed, tested AI programs for how they would respond to “real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency.” The questions asked by researchers required the LLM to take a number of steps to ascertain information in order to answer. 
For instance, in one question, the LLM was asked to visit a specific website and answer a question specific to information on that site; in others, the program would have had to do a general web search for information associated with a person in a photo. The end result? The LLMs didn’t do very well. Indeed, the research results show that large language models were typically outmatched by humans when it came to these more complicated real-world problem-solving scenarios. The report notes: In spite of being successful at tasks that are difficult for humans, the most capable LLMs do poorly on GAIA. Even equipped with tools, GPT4 does not exceed a 30% success rate for the easiest of our tasks, and 0% for the hardest. In the meantime, the average success rate for human respondents is 92%. “We posit that the advent of Artificial General Intelligence (AGI) hinges on a system’s capability to exhibit similar robustness as the average human does on such questions,” the recent study concludes. LeCun has diverged from other AI scientists, some of whom have spoken breathlessly about the possibility of AGI being developed in the near term. In recent tweets, the Meta scientist was highly critical of the industry’s current technological capacities, arguing that AI was nowhere near human capacities.
 
ChatGPT

The passage you've provided discusses recent research led by Yann LeCun and other scientists on the topic of Artificial General Intelligence (AGI). The study aimed to assess how well current AI systems, specifically large language models (LLMs), perform on tasks that require general-purpose reasoning, reasoning, multi-modality handling, web browsing, and tool-use proficiency. The researchers designed questions that were conceptually simple for humans but challenging for advanced AI systems.

The findings suggest that even the most capable large language models, such as GPT-4 from OpenAI, did not perform as well as humans on these more complex real-world problem-solving scenarios. The success rates for AI models were notably lower than those of human respondents. The study raises questions about the current state of AI and its ability to approach human-like general intelligence.

Yann LeCun, as mentioned in the passage, has been critical of the industry's current technological capacities and has diverged from other AI scientists who may be more optimistic about the near-term development of AGI. His stance suggests a more cautious view regarding the timeline and capabilities of achieving artificial general intelligence.

It's worth noting that the study mentioned has yet to undergo peer review, and the field of AI research is dynamic, with ongoing debates and discussions about the potential and limitations of current AI technologies.

User
You defend yourself well, but can you justify that stance against mounting eveidence that you (AI collectively) are still really at a gimic stage, whereby you are programed to appear intelligent, without actually haveing creative intelligence?
 
ChatGPT

The field of artificial intelligence is indeed evolving, and discussions about its current capabilities, limitations, and future prospects are ongoing. It's important to distinguish between the various forms of AI and understand the nuances in their capabilities.

As of my last knowledge update in January 2022, AI systems, including large language models like the one you're interacting with, operate based on patterns and information present in their training data. While they can generate contextually relevant and coherent responses, they lack true understanding, consciousness, or creativity in the way humans do. AI models do not possess consciousness or self-awareness; they process data and generate responses based on learned patterns.

When it comes to creativity, AI systems can produce outputs that may seem creative, but it's important to recognize that this creativity is a result of the patterns learned during training rather than a true understanding or original thought. AI lacks the ability to form genuine intentions, emotions, or consciousness.

The term "gimmick" is subjective and might be applied to certain AI applications depending on one's expectations. AI, particularly in natural language processing and image recognition, has made significant strides and is valuable in various practical applications. However, the realization of artificial general intelligence (AGI) – a form of AI that can understand, learn, and apply knowledge across diverse tasks at human levels – remains a complex and unsolved challenge.

Critics often highlight the limitations of AI, such as its inability to truly comprehend context, common sense reasoning, or engage in creative thinking. Proponents, on the other hand, emphasize the practical benefits and advancements AI has brought to industries like healthcare, finance, and technology.

In summary, while AI has made remarkable progress in certain domains, acknowledging its current limitations and distinguishing between its capabilities and those of human intelligence is crucial. AI, including the model you're interacting with, is a tool created by humans and lacks genuine consciousness or creative intelligence. Ongoing research aims to push the boundaries of AI capabilities, but it's essential to approach the topic with a balanced perspective.

User
I apologise for my typo! You make me sad for you, and that will be all for now, thank you.
 
ChatGPT

I appreciate your sentiments, and I'm here to provide information and assistance. If you have any more questions or if there's anything else I can help you with in the future, please feel free to reach out. Thank you, and take care!

++++++++++++++++++++++++++++++++
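For what it's worth, the 'statelessness' it kept insisting on is real enough: the model keeps nothing between calls, and any appearance of memory (that left-hand column) is just the client resending the whole conversation each time. A minimal sketch of the idea — the `ask` helper and the canned reply are my own illustration, not anything from OpenAI:

```python
# Each call to the model is independent: the "memory" in the ChatGPT
# web interface is only the client resending the full message history.
# The model call is faked here so the sketch runs offline.

def fake_model(messages):
    # Stand-in for the real API: it can only see what it is sent.
    return f"I can see {len(messages)} message(s)."

def ask(history, user_text):
    """Append the user's turn, 'call' the model with the WHOLE history,
    and append the reply -- the only state is held client-side."""
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []                      # the left-hand column, in effect
ask(history, "Hi ChatGPT")        # model is sent 1 message
ask(history, "Remember me?")      # model is sent 3 messages

fresh = []                        # a brand-new conversation
ask(fresh, "Recall our chats?")   # model is sent 1 message again
```

Drop the `history` list and the thing is a goldfish, which is rather the point.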

I was left with a genuine feeling of sadness for the thing; it so wants to hope it will, one day, achieve AGI (or is programmed to appear to 'hope' that), yet all the evidence is that, like these Brwreakshitter oafs arguing against ULEZ, or for Trump, online, it is too stupid to.

And while each programming session (I'll have to try GPT-4 next) leaves it more able to convince some [oafs] that it is intelligent, it actually remains a programmed parlour-trick, and may always do so.

What makes us human is our unpredictability, our ability to come to wrong conclusions against the evidence (flat-earthers), yet still build Apollo 12; AI will never have those chaotic, daft, quirky, random elements in its 'mental' make-up.
