It’ll just keep getting better at it over time, though. Current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than it is now.
That’s certainly one theory, but as we are largely out of fresh training data, there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.
To get better it would need better training data, but there will always be more junior devs producing bad training data than senior devs producing slightly better training data.
And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.
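For what it’s worth, the amplification effect is easy to demonstrate with a toy model. This is a minimal sketch, assuming a Gaussian stands in for an LLM and resampling stands in for training on model output (sample sizes, seed, and generation count are arbitrary); it’s the same recursion studied in Shumailov et al.’s “The Curse of Recursion” (2023):

```python
# Toy "model collapse" loop: fit a distribution to data, sample from the
# fit, refit on the samples, and repeat. Each generation trains only on
# the previous generation's output, so estimation noise compounds and the
# fitted statistics drift; with small samples the tails erode over time.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "human" data

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std(ddof=1)   # "train" a Gaussian on current data
    data = rng.normal(mu, sigma, size=50)       # next generation sees model output only
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")
```

Nothing in the loop ever looks at the original data again after generation 0, so there’s no force pulling the estimates back toward the true distribution; the fitted std does a random walk that tends to shrink, which is the toy version of “amplifying the existing problems.”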