I think the effects of it are… a bit more nuanced than that, perhaps?
I can definitely tell there are places where I’m plugging knowledge gaps fast. I just didn’t know how to do a thing, I did it AI-assisted once or twice, and now I don’t need the assistance anymore because I understand how it works. Cool, that. And I wouldn’t have learned it from traditional sources, because asking in public support areas would have led to being told I suck and should read the documentation, and/or to a 10-video series on YouTube where you can watch some guy type for seven hours.
But there are also places where AI assistance is never going to fill in the blanks for me, you know? Larger trends, good habits, technical details or best practices that just aren’t going to come up from using a smart autocorrect that can explain why something was wrong.
Honestly, in those spaces the biggest barrier is still what it always was: I don’t necessarily want to “progress” in those areas, because I don’t need to and it’s not my job. I can automate a couple of things I didn’t know how to automate before, and that’s alright. For the rest, I will probably live with the software someone else has made, when it exists.
The problem is hubris, right? I know what I don’t know and which parts I care to learn. That’s fine. Coding assistant LLMs are a valid tool for someone like that to slightly expand their reach, and I presume there are a lot of people like that. It’s the random entrepreneurs who have been sold by big corpos on the idea that they don’t need a real programmer to build their billion-dollar app anymore who are going to crash and burn, and they may take some of the software industry down with them.
I don’t know, some of these guys have access to a LOT of code, and to even more debate about what a good codebase entails.
I think the other issue is more relevant. Even 128K tokens is not enough for something really big, and the memory and processing costs at that size do skyrocket. People are trying to work around it with draft models and summarization models that try to pick out the relevant parts of a codebase in one pass and then base the code generation on just that, and… I don’t think that’s going to work reliably at scale. The more chances you give a language model to lose its goddamn mind and start making crap up unsupervised, the more work it’s going to be to take what it spits out and shape it into something reasonable.
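To make that concrete, here’s a minimal sketch of what that two-pass “pick the relevant parts, then generate” idea looks like. Everything in it is hypothetical: `llm_complete`, `pick_relevant_files`, and `generate_change` are placeholder names standing in for whatever model call and glue code a real tool would use, not any specific product’s API.

```python
from pathlib import Path


def llm_complete(prompt: str) -> str:
    # Placeholder: swap in whatever model API or local runner you actually use.
    raise NotImplementedError("plug your model call in here")


def pick_relevant_files(repo_root: str, task: str, max_files: int = 5) -> list[Path]:
    # First pass: only a file listing goes into the prompt, so even a big repo
    # fits in the context window. The model is asked to name the files that
    # matter for this particular task.
    files = sorted(p for p in Path(repo_root).rglob("*.py") if p.is_file())
    listing = "\n".join(str(p) for p in files)
    answer = llm_complete(
        f"Task: {task}\n"
        f"Files in the repository:\n{listing}\n"
        f"Reply with up to {max_files} paths most relevant to the task, one per line."
    )
    valid = {str(p) for p in files}
    picked = [Path(line.strip()) for line in answer.splitlines() if line.strip() in valid]
    return picked[:max_files]


def generate_change(repo_root: str, task: str) -> str:
    # Second pass: code generation only sees the files the first pass picked.
    # If that pass guessed wrong, everything downstream is built on the wrong
    # context, and nothing in this loop will catch it.
    context = "".join(
        f"\n### {p}\n{p.read_text()}\n" for p in pick_relevant_files(repo_root, task)
    )
    return llm_complete(f"Task: {task}\nRelevant code:{context}\nWrite the change.")
```

The whole second pass rests on the first pass not having picked (or invented) the wrong files, which is exactly where I expect this to fall over on anything big.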