As LLMs become the go-to for quick answers, fewer people are posting questions on forums or social media. This shift could make online searches less fruitful in the future, with fewer discussions and solutions available publicly. Imagine troubleshooting a tech issue and finding nothing online because everyone else asked an LLM instead. You do the same, but the LLM only knows the manual, offering no further help. Stuck, you contact tech support, wait weeks for a reply, and the cycle continues—no new training data for LLMs or new pages for search engines to index. Could this lead to a future where both search results and LLMs are less effective?
Maybe in the sense that the Internet may become so inundated with AI garbage that the only way to get factual information is by actually reading a book or finding a real person to ask, face to face.
You know how low-background steel, made before atmospheric nuclear testing contaminated everything, is prized? I wonder if that’s going to happen with data from before 2022 as well now. Lol.
There might be a way to mitigate that damage. You could categorize the training data by source: if it’s verified to be written by a human, give it a higher weight; if not, it’s probably contaminated by AI output, so give it a lower weight. Humans still exist, so clean data can still be obtained. Quantity is still a problem, though, since these models are really thirsty for data.
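A minimal sketch of what that weighting could look like, assuming PyTorch and a dataset that tags each example with a verified_human flag. The flag name and the weight values are made-up illustrations, not anyone’s actual pipeline:

```python
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, targets, verified_human,
                     human_weight=1.0, unverified_weight=0.2):
    """Per-example cross-entropy, downweighted for unverified (possibly AI-generated) sources.

    logits: (batch, seq_len, vocab)
    targets: (batch, seq_len)
    verified_human: (batch,) bool tensor from the dataset's provenance tag
    """
    # cross_entropy expects the class dim second, so move vocab to dim 1
    per_token = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    per_example = per_token.mean(dim=1)
    weights = torch.where(
        verified_human,
        torch.full_like(per_example, human_weight),
        torch.full_like(per_example, unverified_weight),
    )
    # Weighted mean: unverified examples still contribute, just less
    return (weights * per_example).sum() / weights.sum()
```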
LLMs can’t distinguish truth from falsehood; they only produce output that resembles other output. So they can’t tell the difference between human and AI input either.
That’s a problem when you want to automate the curation and annotation process. So far, you could just dump all of your data into the model, but that might not be an option in the future, as more and more of the available training data is generated by other LLMs.
When that approach stops working, AI companies will need to figure out a way to get high-quality data, and that’s when it becomes valuable to have data that is verified to be written by actual people. That way, an AI doesn’t even need to be able to curate the data, since humans have already done that to some extent. You could just prioritize the small amount of verified data while still using the vast amounts of unverified data for training.
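One concrete way to read “prioritize” here, as a sketch: oversample the verified slice when building each training batch. The pool names and the 30% ratio below are illustrative assumptions, not a known training recipe:

```python
import random

def mixed_batch(verified_pool, unverified_pool, batch_size=32, verified_frac=0.3):
    """Draw a batch that overrepresents the small verified-human slice of the corpus."""
    n_verified = int(batch_size * verified_frac)
    # The verified pool is small, so sample with replacement;
    # the unverified pool is huge, so sample without.
    batch = random.choices(verified_pool, k=n_verified)
    batch += random.sample(unverified_pool, batch_size - n_verified)
    random.shuffle(batch)
    return batch
```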