• vithigar@lemmy.ca · ↑85 · 13 hours ago

    I love the detail that she put “+ AI” on both sides of the equation so that it’s still technically correct regardless of what the AI stands for.

  • JohnSmith@feddit.uk · ↑49 ↓2 · 14 hours ago

    I’m old enough to have gone through a number of these technology bubbles, so much so that I haven’t paid much attention to them for a fair while. This AI bs feels a bit different, though. It seems to me that lots more people have completely lost their minds this time.

    Like all bubbles, this too will end up in the same rubbish heap.

    • surewhynotlem@lemmy.world · ↑43 · 14 hours ago

      That’s because there’s a non-zero amount of actual functionality. ChatGPT does some useful stuff for normal people. It’s accessible.

      Contrast that with crypto, which was only accessible to tech folks and barely useful, or NFTs, which had no use at all.

      OK, I guess to be fair, the purpose of NFTs was to separate chumps from their money, and they were quite good at that.

      • utopiah@lemmy.world · ↑2 ↓1 · 3 hours ago

        Can’t believe I’m doing this… but here I go, actually defending cryptocurrency/blockchain:

        … so yes, there is some functionality to AI. In fact I don’t think anybody is saying 100% of it is BS and a scam, rather that 99.99% of the marketing claims of the last decade ARE overhyped if not plain false. One could say the same for crypto/blockchain, namely that SQLite or a random DB is enough for most people, BUT there are SOME cases where it might actually be somewhat useful, ideally not hijacked by “entrepreneurs” (namely VC tools) who only care about making money and not about what the technology could actually bring.

        Anyway, both AI & crypto use an inconceivable amount of resources (energy, water, GPUs and dedicated hardware, real estate, top R&D talent, human labour for dataset annotation, including some very, VERY gruesome tasks, etc.), so even in the 0.01% of cases where they are actually useful, one still must ask: is it worth it? Is it OK to burn literal tons of CO2eq… to generate an image that one could have made quite easily another way? To summarize a text?

        IMHO both AI & crypto are not entirely useless in theory, yet in practice they have been:

        • hijacked by VCs and grifters of all kinds,
        • abused by pretty terrible people, including scammers and spammers,
        • absolutely underestimated in terms of resource consumption and thus ecological and societal impact.

        So… sure, go generate some “stuff” if you want to, but please be mindful of what it genuinely costs.

      • Dicska@lemmy.world · ↑8 · edited · 11 hours ago

        There are pretty great applications in medicine. AI is an umbrella term that includes working with LLMs, image processing, pattern recognition and other stuff. There are fields where AI is a blessing. The problem is, as JohnSmith mentioned, it’s the “solar battery” of the current day: at one point they had to make and/or advertise everything with solar batteries, even stuff that was better off with… batteries. Or the good ol’ plug. Hopefully it will settle down in a few years’ time and they will focus on the areas where it is more successful. They just need to find out which areas those are.

        • utopiah@lemmy.world · ↑2 · 3 hours ago

          There are pretty great applications in medicine.

          Like what? I discussed this just two days ago with a friend who works in public healthcare and is bullish about AI, and the best he could come up with was DeepMind’s AlphaFold, which is, yes, interesting, even important, and yet in a way “good old-fashioned AI” as it has been done for the last half century or so: a team of dedicated researchers, actual humans, focusing on a hard problem and throwing state-of-the-art algorithms plus some compute resources at it… but AFAICT there is no significant medical research that has made a significant change through “modern” AI like LLMs.

      • Captain Aggravated@sh.itjust.works · ↑6 · 12 hours ago

        Possibly through ignorance or misunderstanding, but I still think the tech behind NFTs may have some function; it’s certainly not the speculation market in weird pictures of badly colored-in monkeys that actually happened.

        • mnemonicmonkeys@sh.itjust.works · ↑2 · 5 hours ago

          It could potentially work for DRM, in that you could have a key assigned to an identity that can later be transferred and isn’t dependent on a particular marketplace.

          For example, you could buy a copy of whatever next year’s Call of Duty game will be and have the key added to your NFT wallet. Then you could play it on Xbox, PlayStation, Steam, or GOG with that single license.

          Of course that will never happen, because it’d be more consumer-friendly than what we have now.
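
          Just to make the idea concrete, here’s a rough sketch of what such a marketplace-independent license record could look like. Everything below is made up for illustration (the names, fields and checks are hypothetical); it is not how any real blockchain or storefront implements this.

          # Hypothetical sketch: a license is a record bound to an owner identity,
          # and ownership can be transferred without any one storefront's permission.
          from dataclasses import dataclass

          @dataclass
          class License:
              game_id: str   # e.g. "cod-2026" (made-up identifier)
              owner: str     # wallet address / public key of the current owner

          class LicenseRegistry:
              def __init__(self):
                  self.licenses = {}  # license_id -> License

              def issue(self, license_id: str, game_id: str, owner: str) -> None:
                  self.licenses[license_id] = License(game_id, owner)

              def transfer(self, license_id: str, current_owner: str, new_owner: str) -> None:
                  lic = self.licenses[license_id]
                  if lic.owner != current_owner:
                      raise PermissionError("only the current owner can transfer a license")
                  lic.owner = new_owner

              def is_licensed(self, license_id: str, owner: str, game_id: str) -> bool:
                  lic = self.licenses.get(license_id)
                  return lic is not None and lic.owner == owner and lic.game_id == game_id

          # Any launcher (Xbox, PlayStation, Steam, GOG, ...) could answer the same
          # question against the same registry: "does this wallet own this game?"
          registry = LicenseRegistry()
          registry.issue("lic-001", "cod-2026", owner="alice-wallet")
          registry.transfer("lic-001", "alice-wallet", "bob-wallet")
          print(registry.is_licensed("lic-001", "bob-wallet", "cod-2026"))  # True

          The part an actual NFT would add is that no single storefront owns that registry, which is exactly why none of them has an incentive to build it.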

    • Revan343@lemmy.ca · ↑13 ↓1 · 14 hours ago

      It seems to me that lots more people have completely lost their minds this time

      That’s not really an AI thing, that’s just… everything.

    • RunawayFixer@lemmy.world · ↑5 ↓1 · 13 hours ago

      The internet did not end up in the trash heap after the dot-com bubble burst. AI too has real-world uses that go beyond the current planet-wrecking bubble.

  • jballs@sh.itjust.works · ↑94 ↓1 · 17 hours ago

    My company, while cutting back elsewhere, has dedicated a few million to AI projects over the next couple of years. Not “projects to solve X business problem.” Just projects that use AI.

    So of course, anything that is automated in any way is now being touted as AI. Taking data from one system and populating another? That’s AI.
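
    To be clear about the level of sophistication being rebranded, it’s basically stuff like this (a made-up sketch, not any of our actual systems):

    # "AI-powered data integration": copy rows from one system into another.
    import sqlite3

    source = sqlite3.connect(":memory:")  # stand-in for system A
    target = sqlite3.connect(":memory:")  # stand-in for system B

    source.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
    source.execute("INSERT INTO orders VALUES (1, 'ACME Corp')")
    target.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")

    rows = source.execute("SELECT id, customer FROM orders").fetchall()
    target.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    target.commit()

    print(target.execute("SELECT * FROM orders").fetchall())  # [(1, 'ACME Corp')]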

    • Bobby Turkalino@lemmy.yachts · ↑6 · 6 hours ago

      AI is such a loose term that calling anything with if-else statements “AI” wouldn’t be lying. (I learned about decision trees in my university machine learning class, and those are just giant nested if-else statements.)
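
      For instance, here’s a toy decision tree written out as the nested if-else it really is (thresholds invented for illustration, not taken from a real trained model):

      # A "machine learning model" (toy decision tree) as plain nested if-else.
      def predict_species(petal_length_cm: float, petal_width_cm: float) -> str:
          # Each branch mirrors one split a tree learner might have produced.
          if petal_length_cm <= 2.5:
              return "setosa"
          else:
              if petal_width_cm <= 1.7:
                  return "versicolor"
              else:
                  return "virginica"

      print(predict_species(1.4, 0.2))  # setosa
      print(predict_species(5.1, 2.0))  # virginica

      Swap the hand-written thresholds for ones fit by a library and, technically, you’ve shipped “AI”.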

    • mmddmm@lemm.ee · ↑29 ↓1 · 16 hours ago

      Taking data from one system and populating another? That’s AI.

      Well, it is. You just have to go back far enough in time to find the context in which people still called it that.

      Gotta use those automatic computers full of electronic brains to do all those tasks that used to take years on rooms full of people with chemical brains hired as computers!

        • jballs@sh.itjust.works · ↑13 ↓1 · 16 hours ago

          More like “cp? That’s a violation of my ethical constraints and you have been reported to the authorities.”

          • Serinus@lemmy.world · ↑10 · 16 hours ago

            Really though, about a quarter of my work-related coding queries come back with “redacted to meet responsible AI guidelines”.

            It’s an AI specifically for code. Apparently it thinks half the stuff I do is hacking.

  • ZweiEuro@lemmy.world · ↑57 · 18 hours ago

    This is exactly what my master’s thesis feels like ATM: all the attention is on the AI crap, also because the uni gets grants on the topic. Everything else just dies.

      • queermunist she/her@lemmy.ml · ↑11 · 14 hours ago

        It’s frustrating to translate from what they said to what they mean. It’s more effort on my part, and this is my free time; I don’t want to work.

        Just communicate as clearly as you can.

        • baguettefish@discuss.tchncs.de · ↑1 · 52 minutes ago

          I understand, but people also have very different standards of communication clarity. There are a lot of hidden assumptions, even when you’re trying to be 100% clear. Sometimes people can’t put their thoughts into words, or they don’t have the capacity for what you think is clarity. And in this case it’s just a very minor mistake. The person might not be a native speaker, or they may have been failed by their education system, or they might just be tired or stressed. There are lots of valid reasons why communication can degrade.

          I’m a bit autistic and struggle with ambiguous meaning or communication that doesn’t fit patterns I’m used to, sometimes to a truly irrational degree. I’d like for others to speak my language more so I can understand them better, and I’d like to be able to speak their language more, to make them understand me better, but it’s just sort of the way of life. People are very fluid beings, not at all tied to rigid logic. People are also all very different, and their efforts all come in different forms. They emphasize different things, focus on different things, not just communication efficiency.

          What I’ve learned with other autistic people, too, is that everyone’s standards for communication clarity are different. I don’t think you can speak a universal language that everybody understands perfectly 100% of the time. What does happen is that people who talk to each other often learn each other’s language and become able to talk more concisely and efficiently, but you can’t really expect that of strangers on the internet. Of course “birds of a feather flock together”, as they say. People in the same internet communities might have the same interests, consume the same media, have the same discussions with the same people. But there’s no getting around communication degrading.

          In the worst case you just have to ask someone what they mean, maybe clearly explain your issue with the ambiguities, and wait for disambiguation. Learning to ask precise questions so as to elicit the best response from someone, to immediately get the answer you seek, is also a lifelong challenge. It’s not worth getting upset about a single instance of degraded communication, if you can even call it that. I’d be more upset with the universe for making us all so very different.

        • JammyDodger3579@lemmy.world · ↑1 ↓8 · 14 hours ago

          This seems like such a strange take. You make it sound like it cost you effort to translate the error, but how are you quantifying that effort? If effort efficiency is something you’re striving for, it doesn’t feel like it makes sense to correct the mistake (which itself costs effort to do).

          The gap between the two - what they said and what they meant - seems so small that it probably took more “work” to correct them.

          I’d go as far as to say that the work to correct them will never be repaid by the saved effort of not having to encounter this particular mistake from this particular person ever again.

          • queermunist she/her@lemmy.ml · ↑8 ↓1 · 13 hours ago

            It’s more effort than a straight read.

            I didn’t correct anyone, by the way. I’m just a different person griping about how much it sucks to have to communicate with people who don’t care about being understood.

            And you’re right, correcting people is even more work! So on top of the work of translating their stupid post we now have to tell them they were wrong so they don’t do this to us again. If they aren’t ever corrected they’ll just keep being wrong and we’ll have to keep translating their posts.

            The alternative is to block them so we never see their posts ever again, which honestly is a better idea. It’s not like we’re missing out.

  • underscores@lemmy.zip · ↑8 · 17 hours ago

    Reminds me of the insane LinkedIn post where a brilliant person was sharing their new equation which was essentially word + buzzword + AI.

  • Halosheep@lemm.ee · ↑1 ↓16 · 14 hours ago

    This joke is already old. Time to find a new horse to beat to death.