Ten years ago I was certain that a natural language voice interface to a computer was going to stay science fiction permanently. I was wrong. In ten years' time you may also be wrong.
Well, if you want one that’s 98% accurate then you were actually correct that it’s science fiction for the foreseeable future.
And yet I just foresaw a future in which it wasn't. AI has already exceeded Trump levels of understanding, intelligence and truthfulness. Why wouldn't it beat you or me later? Exponential growth in computing power and all that.
The diminishing returns set in much faster than the fairly static (and in many sectors plateauing) rate of growth in computing power. And if you believe OpenAI and DeepMind, their own studies from 2020 and 2023 already prove that even INFINITE processing power cannot reach it.
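For what it's worth, the "infinite compute" claim presumably refers to the irreducible-loss term in the fitted scaling laws. A minimal sketch of what that floor looks like, using the coefficients reported in the 2022 Chinchilla paper (Hoffmann et al.; quoted from memory, treat them as illustrative rather than gospel):

```python
# Sketch of a Chinchilla-style scaling law: loss(N, D) = E + A/N^alpha + B/D^beta.
# Coefficients below are the Approach-3 fits reported by Hoffmann et al. (2022).
E = 1.69            # irreducible loss: the floor even with infinite params/data
A, alpha = 406.4, 0.34
B, beta = 410.7, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss as a function of model size and dataset size."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss keeps falling with scale, but can never drop below E:
for n in (1e9, 1e12, 1e15):
    print(f"{n:.0e} params, {n:.0e} tokens -> loss {loss(n, n):.3f}")
```

The curve flattens toward E rather than going to zero, which is the mathematical form of the "infinite compute still hits a floor" argument. Whether that floor sits above or below human performance is exactly what the papers don't settle.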
They already knew it wouldn’t succeed, they always knew, and they told everyone, but we’re still surrounded by people like you being grifted by it all.
EDIT: I must be talking to a fucking bot because I already linked those scientific articles earlier, too.
Can you go into a bit more detail on why you think these papers are such a home run for your point?
Where do you get 95% from? These papers don't really go into much detail on human performance, and 95% isn't mentioned in either of them.
These papers are for transformer architectures trained with next-token loss. There are other architectures (spiking, Tsetlin, graph, etc.) and other losses (contrastive, RL, flow matching) to which these particular curves do not apply.
These papers assume early stopping; have you heard of the grokking phenomenon? (Not to be confused with the Twitter bot.)
These papers only consider finite-size datasets, and relatively small ones at that. For instance, how many "tokens" would a 4 year old have processed? I imagine that question should be somewhat quantifiable.
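Rough arithmetic on that last question, for what it's worth (every number here is a loudly-flagged assumption, not a measurement):

```python
# Back-of-envelope: tokens a 4 year old has "processed" via speech alone.
# All figures are rough assumptions.
words_heard_per_day = 15_000   # commonly cited estimates run ~10k-20k words/day
days = 4 * 365
tokens_per_word = 1.3          # typical BPE subword-to-word ratio for English

total_tokens = words_heard_per_day * days * tokens_per_word
print(f"~{total_tokens / 1e6:.0f} million tokens")  # roughly 28 million
```

That's on the order of 1e7, versus the ~1e13 tokens used to train frontier LLMs, a gap of five to six orders of magnitude before you even count vision or other modalities.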
These papers do not consider multimodal systems.
You talked about permanence, does a RAG solution not overcome this problem?
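On the permanence point, the basic RAG move can be sketched in a few lines. This is a toy word-overlap retriever, purely illustrative; real systems use learned embeddings and a vector store:

```python
# Toy retrieval-augmented lookup. The model's weights stay frozen; new facts
# live in an external store and get retrieved into the prompt at query time,
# which is how RAG sidesteps the "permanence" problem of retraining.
import re
from collections import Counter

docs = {
    "doc1": "the project deadline moved to friday",
    "doc2": "grokking is delayed generalization long after overfitting",
}

def bag(text: str) -> Counter:
    """Lowercased bag-of-words representation."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 1) -> list:
    """Return the ids of the k docs with the highest word overlap."""
    q = bag(query)
    scored = sorted(docs.items(), key=lambda kv: -sum((q & bag(kv[1])).values()))
    return [doc_id for doc_id, _ in scored[:k]]

print(retrieve("when is the deadline?"))  # ['doc1']
```

Updating the system's "knowledge" is then just inserting into `docs`, no gradient steps required, which is why people argue RAG addresses permanence even if the base model is frozen.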
I think there is a lot more we don’t know about these things than what we do know. To say we solved it all 2-5 years ago is, perhaps, optimistic
You claim to be some kind of expert but you can’t even read the paper? Lmao.
Thanks for the abuse. I love it when I’m discussing something with someone and they start swearing at me and calling me names because I disagree. Really makes it fun. /s You can fuck right off yourself too, you arrogant tool.