When people talk about AI taking off exponentially, usually they are talking about the AI using its intelligence to make intelligence-enhancing modifications to itself. We are very much not there yet, and need human coaching most of the way.
At the same time, no technology ever really follows a particular trend line. It advances in starts and stops with the ebbs and flows of interest, funding, novel ideas, and the discovered limits of nature. We can try to make projections - but these are very often very wrong, because the thing about the future is that it hasn’t happened yet.
And at that point, we wouldn’t ever know that it did anyway.
Although I agree with the general idea, AI (as in LLMs) is a pipe dream. It’s a non-product, another digital product that hypes investors up and produces “value” instead of value.
Not true. Not entirely false, but not true.
Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.
The problem is that people think LLMs are actual AI. They’re not.
My favorite example - and the reason I often cite for why companies that try to fire all their developers are run by idiots - is the capacity for joined-up thinking.
Consider these two facts:
Humans are mammals.
Humans build dams.
Those two facts are unrelated except insofar as both involve humans, but if I were to say “Can you list all the dam-building mammals for me,” you would first think of beavers, then - given a moment’s thought - could accurately answer that humans do as well.
Here’s how it goes with Gemini right now:
Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.
Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.
And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one, and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking, it’s just an increase in the size and complexity of the dataset.
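The join being asked for here is, mechanically, a trivial operation. As a toy sketch (the fact sets below are hypothetical stand-ins for knowledge the model demonstrably has stored but fails to combine):

```python
# Toy sketch: "joined-up thinking" expressed as a set intersection.
# Both fact sets are hypothetical stand-ins for facts an LLM
# already contains somewhere in its model.
mammals = {"beaver", "muskrat", "human", "dolphin"}
dam_builders = {"beaver", "human"}

# Intersect the two independent facts to answer the question.
dam_building_mammals = mammals & dam_builders
print(sorted(dam_building_mammals))  # ['beaver', 'human']
```

A symbolic system joins these facts in one step; an LLM only appears to when the joined answer already exists in its training data.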
The biggest problem with LLMs as most people currently use them is their inability to mull things over: to follow multiple trains of thought, intersect them, and fork new lines of thought from the combination. When you ask a question, the model gets exactly one chance to think up a response and no chance to review or reconsider it. There are models that are allowed to do this, but they’re generally behind paywalls, because without proper guidelines in the prompt even the simplest question can lead into ridiculous tangents. Here is the ‘Advanced reasoning’ model’s response to the same question:
Mammals known to build dams
| # | Mammal (scientific name) | Dam-building habit | Key reference |
|---|---|---|---|
| 1 | North American beaver (Castor canadensis) | Constructs multi-year stick-and-mud dams on streams and ditches to flood an area deep enough for its lodge and food cache. | |
| 2 | Eurasian beaver (Castor fiber) | Same engineering instinct as its North American cousin; creates extensive pond systems across Europe and parts of Asia. | |
| 3 | Humans (Homo sapiens) | From earthen farm ponds to megaprojects such as Hoover Dam, people build dams for water storage, flood control, power and more. | |
Why the list is so short
Beavers are unique. Despite a variety of lodge-building or burrowing rodents (muskrats, nutria, water voles, rakali, etc.), none of them actually dam a watercourse; they rely on natural water levels or on beaver-made ponds.
No other living mammal species has been documented creating intentional water-blocking structures. (The extinct giant beaver Castoroides probably did not dam rivers, according to paleontological evidence.)
So, when it comes to true dam-building in the mammal world, it’s essentially a two-species beaver monopoly—plus us.
https://chatgpt.com/share/683caddc-5944-8009-8e4a-d03bef5933a4
Also note that this response took considerably more time than a standard response, because the model keeps reviewing its responses. But it’s worthwhile watching its thought process as it builds your answer.
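The “mulling things over” described above can be sketched as a draft-and-critique loop. `ask_model` below is a hypothetical placeholder for any LLM completion call, not a real API:

```python
# Hedged sketch of a reasoning loop: draft an answer, critique it,
# revise, repeat. `ask_model` is a hypothetical stand-in for an LLM
# completion call (question string in, answer string out).
def answer_with_review(ask_model, question, max_rounds=3):
    draft = ask_model(question)
    for _ in range(max_rounds):
        critique = ask_model(
            f"Review this answer for errors or omissions:\n{draft}"
        )
        if "no issues" in critique.lower():
            break  # the model is satisfied with its own draft
        draft = ask_model(
            f"Question: {question}\n"
            f"Revise this answer using the critique.\n"
            f"Answer: {draft}\nCritique: {critique}"
        )
    return draft
```

This is also why such responses take longer: every review round is another full model call.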
deleted by creator
We’ll have to agree to disagree then. My hype argument perfectly matches your point about people wrongly perceiving LLMs as AI, but my point goes further.
AI is a search engine on steroids, with all the drawbacks. It produces no more accurate results, has no more information, and does nothing except take away the research effort, which is proven to make people dumber. More importantly, LLMs gobble up energy like crazy and need rare resources which are taken from exploited countries. In addition, they are a privacy nightmare and proven to systematically harm small creators through breach of intellectual property, which is especially brutal for them.
So no, there are no redeeming qualities in LLMs in their current form. They should be outlawed immediately and, at most, used locally in specific cases.
I do expect advancement to hit a period of exponential growth that quickly surpasses human intelligence, assuming it develops the drive to autonomously advance. Whether that is possible is yet to be seen, and that’s kinda my point.
They’ve been saying “AGI in 18 months” for years now.
No “they” haven’t, unless you can cite your source. ChatGPT was only released 2.5 years ago, and even OpenAI was saying 5-10 years, with most outside watchers saying 10-15 and real naysayers going out to 25 or more.
Ask ChatGPT to list every U.S. state that has the letter ‘o’ in its name.
Here are all 27 U.S. states whose names contain the letter “o”:
Arizona
California
Colorado
Connecticut
Florida
Georgia
Idaho
Illinois
Iowa
Louisiana
Minnesota
Missouri
Montana
New Mexico
New York
North Carolina
North Dakota
Ohio
Oklahoma
Oregon
Rhode Island
South Carolina
South Dakota
Vermont
Washington
Wisconsin
Wyoming
(That’s 27 states in total.)
What’s missing?
Ah, did they finally fix it? I guess a lot of people were seeing it fail and they updated the model. Which version of ChatGPT was it?
o3.
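For what it’s worth, the list is easy to check mechanically; a short script over the 50 state names confirms that exactly 27 contain an ‘o’:

```python
# Verify which U.S. state names contain the letter 'o' (case-insensitive,
# so Ohio, Oklahoma, and Oregon are caught too).
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]
with_o = [s for s in STATES if "o" in s.lower()]
print(len(with_o))  # 27
```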