• Optional@lemmy.world · 21 hours ago

    Massive environmental harms

    I find this questionable; people forget that a locally-hosted LLM is no more taxing than a video game.
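    Rough numbers make the scale point concrete. A minimal back-of-envelope sketch, where the GPU wattage and utilization figures are illustrative assumptions (not measurements) for a consumer card running local inference versus a game:

    ```python
    # Back-of-envelope energy comparison: one hour of gaming vs. one hour of
    # chatting with a locally-hosted LLM on the same consumer GPU.
    # All figures below are illustrative assumptions, not measurements.

    GPU_POWER_W = 300  # assumed full-load draw of a consumer GPU

    def session_energy_kwh(hours: float, utilization: float) -> float:
        """Energy for a session at a given average GPU utilization."""
        return GPU_POWER_W * utilization * hours / 1000

    # Assumption: gaming keeps the GPU near full load for the whole hour,
    # while a local chat session only loads it in short inference bursts.
    gaming = session_energy_kwh(hours=1.0, utilization=0.9)
    local_llm = session_energy_kwh(hours=1.0, utilization=0.3)

    print(f"gaming:    {gaming:.2f} kWh")   # 0.27 kWh
    print(f"local LLM: {local_llm:.2f} kWh")  # 0.09 kWh
    ```

    Under these assumptions the local LLM session uses a fraction of a kilowatt-hour, comparable to (or below) an ordinary gaming session, which is the sense in which per-user local inference is "no more taxing than a video game."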

    So read and learn.

    No chance it’s going to get better

    Why do you believe this? It has continued to get dramatically better over the past 5 years. Look at where GPT2 was in 2019.

    Fair enough. It’s not going to get better because the fundamental problem is that AI, as represented by, say, ChatGPT, doesn’t know anything. It has no understanding of anything it’s “saying”. Therefore, any results derived from ChatGPT or its equivalents will need to be double-checked in any serious endeavor. So, yes, it can poop out a legal brief in two seconds, but it still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else. That, the core of it, will never get better. It might get faster. It might “sound” “more human”. But it won’t get better.

    No profitable product […] Tens of thousands of (useful!) careers terminated

    Do you not see the obvious contradiction here? If you are sure that this is not going to get better and it’s not profitable, then you have nothing to worry about in the long-term about careers being replaced by AIs.

    Well, tell that to the half a million people laid off in the last couple of years. The damage is done. Also, the bubble is still growing, and if you haven’t noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.

    Destroyed Internet search, arguably the one necessary service on the Internet

    Google did this intentionally as part of enshittification.

    Well, yes. Every company which has chosen to promote and focus on AI has done so intentionally. That doesn’t mean it’s good. If AI wasn’t the all-hype vaporware it is, this wouldn’t have been an option. If OpenAI had been honest about it and said “it’s very interesting and we’re still working on it” instead of “it’s absolutely going to change the world in six months” this wouldn’t be the unusable shitpile it is.

    • jsomae@lemmy.ml · 16 hours ago

      I don’t think we disagree that much.

      So read and learn.

      Okay, I agree that it can have environmental impact due to power usage and water consumption. But this isn’t a fundamental problem – we can use green power (I’ve heard there are plans to build nuclear plants in California for this reason) and build data centers somewhere without water shortages (i.e., somewhere other than California). In this regard AI differs from fossil fuels, which are fundamentally environmentally damaging.

      But still, I cringe when someone implies open-model locally-hosted AIs are environmentally problematic. They have no sense of scale whatsoever.

      But it still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else.

      Well yeah, it’s slop, as I said. These tools are only suitable in cases where complete reliability is not required. But there’s no reason to believe that hallucinations won’t decrease in frequency over time (as they already have been), or that the domains in which hallucinations are common won’t shrink over time. I’m not claiming these methods will ever reach 100% reliability, but humans (the thing they are meant to replace) aren’t perfectly reliable either. So how many years until the reliability of an LLM exceeds that of a human? Yes, I know I’m making humans sound fungible, but to our corporate overlords we mostly are.

      if you haven’t noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.

      Good, so we agree that there is the potential for long-term damage. In other words, AIs are a long-term threat, not just a short-term one. Maybe the bubble will pop but so did the dotcom bubble and we still have the internet.

      enshittification

      No, I think enshittification started well before 2022 (ChatGPT). Sure, even before that, LLMs were generating SEO-garbage webpages that Google was returning in results, so you can blame AI in that regard – but I don’t believe for a second that Google couldn’t have found a way to filter those kinds of results out. The user-negative feature was profitable for them, so they didn’t fix it. If LLMs hadn’t been around, they would have found other ways to make search more user-negative (and they probably did indeed employ such techniques).