Then each QA human will be paired with a second AI that catches the mistakes the human misses. And another human will be hired to watch that AI, and that human will get an AI assistant to catch their mistakes.
Eventually they’ll need a rule that you can only communicate with the human or AI directly above or below you in the chain, to avoid meetings with entire countries of people.
That’s a part of it. Another part is that it looks for patterns that it can apply in other places, which is how it ends up hallucinating functions that don’t exist and things like that.
Like it can see that English has the verbs add, sort, and climb, and it will see a bunch of code with functions like add(x, y) and sort(list), and it might conclude that there must also be a climb(thing) function, because that follows the pattern of functions being verb(object). It doesn’t know what code is, or even what verbs are, for that matter. It can generate text explaining them, because such explanations are definitely part of its training data, but it understands them the same way a dictionary understands words or an encyclopedia understands the concepts it contains.
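To make the pattern-completion idea concrete, here’s a toy sketch. This is obviously not how an LLM works internally (there’s no explicit rule extraction, just statistics over tokens), and the names here are made up for illustration, but it shows how mechanically filling in a verb(object) pattern can produce a plausible-looking call that doesn’t exist:

```python
# Toy sketch of pattern completion, not a model of real LLM internals.
# Verbs that were never seen as function calls still get "completed"
# into calls, which is the shape of a hallucinated function.
def complete_pattern(verbs, seen_calls):
    # Verbs we actually observed being used as functions.
    known = {call.split("(")[0] for call in seen_calls}
    # Every remaining verb gets forced into the verb(object) pattern.
    return [f"{v}(thing)" for v in verbs if v not in known]

print(complete_pattern(["add", "sort", "climb"],
                       ["add(x, y)", "sort(items)"]))
# → ['climb(thing)']
```

Nothing here checks whether climb is callable, only whether it fits the pattern, which is the point: plausibility, not truth.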