https://twipped.social/@twipped/114662771295312758
article they are referencing: https://futurism.com/atari-beats-chatgpt-chess
So read and learn.
Fair enough. It’s not going to get better, because the fundamental problem is that AI as represented by, say, ChatGPT doesn’t know anything. It has no understanding of anything it’s “saying”. Therefore, any results derived from ChatGPT or its equivalents will need to be double-checked in any serious endeavor. So, yes, it can poop out a legal brief in two seconds, but that brief still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else. That, the core of it, will never get better. It might get faster. It might “sound” “more human”. But it won’t get better.
Well, tell that to the half a million people laid off in the last couple of years. The damage is done. Also, the bubble is still growing, and if you haven’t noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.
Well, yes. Every company that has chosen to promote and focus on AI has done so intentionally. That doesn’t mean it’s good. If AI weren’t the all-hype vaporware it is, this wouldn’t have been an option. If OpenAI had been honest about it and said “it’s very interesting and we’re still working on it” instead of “it’s absolutely going to change the world in six months”, this wouldn’t be the unusable shitpile it is.
I don’t think we disagree that much.
But still, I cringe when someone implies that open-model, locally hosted AIs are environmentally problematic. People who say that have no sense of scale whatsoever.
Good, so we agree that there is the potential for long-term damage. In other words, AIs are a long-term threat, not just a short-term one. Maybe the bubble will pop, but so did the dotcom bubble, and we still have the internet.
No, I think enshittification started well before 2022 (ChatGPT). Sure, even before that LLMs were churning out SEO garbage webpages that Google was surfacing in results, so you can blame AI in that regard – but I don’t believe for a second that Google couldn’t have found a way to filter those kinds of results out. The user-negative feature was profitable for them, so they didn’t fix it. If LLMs hadn’t been around, they would have found other ways to make search more user-negative (and they probably did employ such techniques anyway).