Have you seen how far it’s come in just a few short months though? It’s incredible the pace they’ve been achieving.
yeah, I think the OP’s take is really naive
the tools and models will get a lot better, but more importantly the end products that succeed will make measured, judicious use of AI.
there always has been slop, and people will always misuse tools and create abominations, but the heights of greatness that are possible are increasing with AI, not decreasing
I’m not even steeped in the depths of it. I’ve got friends who are using MCP and agents that can modify files on their own, work through problems over time, self-learn on your existing code, and so on (roughly the kind of setup sketched below).
It’s crazy and makes my head spin. Many of them were super skeptical early on, but the tools that are being developed are absolutely BONKERS.
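For anyone who hasn’t touched this stuff: MCP is essentially a protocol for exposing tools to a model-driven agent. Below is a minimal sketch of what one of those file-editing tool servers can look like, assuming the MCP Python SDK’s FastMCP helper; the server name and both tools are hypothetical illustrations, not any specific product from this thread.

```python
# Minimal MCP tool-server sketch, assuming the official MCP Python SDK
# (`pip install mcp`) and its FastMCP helper. The server name and both
# tools are hypothetical examples of the "agents that can modify files"
# workflow mentioned above.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("code-helper")  # hypothetical server name


@mcp.tool()
def read_source(path: str) -> str:
    """Return a source file's contents so the agent can reason about it."""
    return Path(path).read_text(encoding="utf-8")


@mcp.tool()
def write_source(path: str, new_contents: str) -> str:
    """Overwrite a file with agent-proposed contents; a real setup would
    add review, diffs, or sandboxing before letting an agent do this."""
    Path(path).write_text(new_contents, encoding="utf-8")
    return f"wrote {len(new_contents)} characters to {path}"


if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable agent/client can connect
```

Roughly speaking, the protocol only covers exposing tools like these; the “work through problems over time” and “self-learn” parts live in the agent or client sitting on the other end.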
I feel like on image/video generation, I can agree that the recent months have shown a lot of progress.
I’m not so much feeling it on the text generation front; it’s felt about the same to me for a while now: notably impressive, but still a bit… off…
There is probably a lower ceiling in text generation for what everyone would count as “progress”. People can verbalize exactly what is wrong with pictures and video, but for text, if it is coherent and gets the point across, what is progress beyond passing the Turing test?
Yeah, it’s actually gotten scary how fast it’s been progressing. Even this post is outdated; the hand issues haven’t been a problem in a while.
Developers present AI like it’s static and in its final form. No vulnerabilities or mistakes will ever get fed back into the training data. Everything will be direct to production. Their entire world is updates and new features until AI gets brought up. And “slop”. That’s their “woke”.
No AI developer presents it like it’s static and in its final form. None. Not one of them. They are all racing for market share. We’re seeing leaps-and-bounds improvements in the shortest of timeframes, even in the open-source sector.
And this zealotry doesn’t even take into account that… humans make mistakes too. Far more than AI generates ‘slop’, there are a million garbage artists out there producing complete trash. There are millions of people on the internet generating bad advice (Where’s Mankrik’s wife?).
I’ve seen people with this same attitude incorrectly attribute things to AI slop because someone used an em-dash somewhere in their reply. So not only can the detractors not actually tell what’s AI and what isn’t, but their fundamentalist attitudes lead them to perceive every mistake as “AI SLOP”.
I can’t tell if you think I’m talking about AI developers when I said “developers”.