The problem, though (with AI compared to humans): the human team learns, i.e. at some point they probably know what the mistake was and avoid making it again. With AI instead of humans, the best you get is: well, maybe the next or a different model will fix it…
And what is very clear to me after trying to use these models: the larger the code-base, the worse the AI gets, to the point of not helping at all or even being destructive. The exception is dissecting small, isolatable pieces of independent code (i.e. keeping the context small for the AI).
Humans likely get slower as the code-base grows, but they (usually) don’t reach a point where they can’t progress any further.