As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?

    • SpicyColdFartChamber@lemm.ee · 3 days ago

      You know how low-background steel from before nuclear weapons testing is prized? I wonder if that’s going to happen with data from before 2022 as well now. Lol.

      • chaosCruiser@futurology.today (OP) · 3 days ago

        There might be a way to mitigate that damage. You could categorize the training data by source: if a piece of text is verified to be written by a human, give it a higher weight; if not, it’s probably contaminated by AI output, so give it a lower weight. Humans still exist, so clean data can still be obtained. Quantity remains a problem, though, since these models are really thirsty for data.
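
        Something like this rough sketch is what I have in mind. The verified_human flag and the 1.0 / 0.2 weights are just made-up placeholders for whatever provenance signal you’d actually have:

        ```python
        # Rough sketch of source-based loss weighting. The provenance flag
        # and the exact weights are invented for illustration only.

        def provenance_weight(example):
            # Verified human text counts fully; unverified (possibly
            # AI-contaminated) text still contributes, just with less
            # influence on the result.
            return 1.0 if example.get("verified_human") else 0.2

        def weighted_loss(examples, per_example_loss):
            # per_example_loss maps an example to its scalar loss.
            total, norm = 0.0, 0.0
            for ex in examples:
                w = provenance_weight(ex)
                total += w * per_example_loss(ex)
                norm += w
            return total / norm if norm else 0.0
        ```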

        • Tar_Alcaran@sh.itjust.works · 2 days ago

          LLMs can’t distinguish truth from falsehood; they only produce output that resembles other output. So they can’t tell the difference between human and AI input either.

          • chaosCruiser@futurology.today (OP) · 2 days ago

            That’s a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available text has been generated by other LLMs.

            When that approach stops working, AI companies will need to figure out a way to get high-quality data, and that’s when it becomes useful to have data that is verified to be written by actual people. That way, an AI doesn’t even need to be able to curate the data, as humans have already done that to some extent. You could prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
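
            Concretely, the mix could look something like this rough sketch. The 3:1 ratio and the field names are invented, just to show the idea of oversampling the small verified pool while the unverified bulk fills the rest:

            ```python
            # Rough sketch of prioritized mixing: mostly verified data,
            # topped up with the much larger unverified pool. The 0.75
            # share and the example records are made up for illustration.

            import random
            from itertools import islice

            def training_stream(verified, unverified, verified_share=0.75):
                # Infinite stream drawing mostly from the small verified pool.
                while True:
                    pool = verified if random.random() < verified_share else unverified
                    yield random.choice(pool)

            verified = [{"text": "pre-2022 archive, human-checked"}]
            unverified = [{"text": f"web scrape #{i}"} for i in range(1000)]

            batch = list(islice(training_stream(verified, unverified), 16))
            ```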