cross-posted to:
- programmer_humor@programming.dev
“If you don’t have organic intelligence at home, store-bought is fine.” - leo (probably)
Bonus points if the attackers use ai to script their attacks, too. We can fully automate the SaaS cycle!
That is the real dead Internet theory: everything from production to malicious actors to end users is just AI scripts wasting electricity and hardware resources for the benefit of no human.
Seems like a fitting end to the internet, imo. Or the recipe for the Singularity.
Not only the internet. Soon everybody will use AI for everything. Lawyers will use AI in court on both sides. AI will fight against AI.
I was at a coffee shop the other day and two lawyers were discussing how they were doing stuff with AI that they didn’t know anything about and then just sending it to their clients.
That shit scared the hell out of me.
And everything will just keep getting worse, with more and more common folk swallowing the hype and brainwashing, using these wildly unreliable tools at every level of our society, every day, to make decisions about things they have no idea about.
AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we’re more than just monkeys with typewriters.
Ha, you fools still pay for doors and locks? My house is now 100% fake locks and doors; they are so much lighter and easier to install.
Wait! Why am I always getting robbed lately? It can’t be my fake locks and doors! It has to be weirdos online following what I do.
Hilarious and true.
Last week some new up-and-coming coder was showing me the tons and tons of sites they’d made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried another. Error. I mentioned the errors and they brushed them off. I am 99% sure they do not have the coding experience to fix the errors. I politely disconnected from them at that point.
What’s worse is when a non-coder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn anything themselves.
The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors making a senior incapable of working on their own stuff because they’re constantly in janitorial mode.
It’ll just keep getting better at it over time, though. The current AI is way better than it was 5 years ago, and in 5 years it’ll be way better than it is now.
That’s certainly one theory, but as we are largely out of training data there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.
To get better it would need better training data. However, there are always more junior devs creating bad training data than senior devs creating slightly better training data.
And now LLMs are being trained on data generated by LLMs. No possible way that could go wrong.
Plenty of good programmers use AI extensively while working. Me included.
Mostly as an advanced autocomplete, template builder, or documentation parser.
You obviously need to be good at it so you can see at a glance whether the written code is good or bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.
Obviously you cannot develop without programming knowledge, but with programming knowledge it’s just another tool.
I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new, exciting, and difficult-to-find bugs while maintaining false confidence in their code and themselves.
I have seen so much code that looks good at first, second, and third glance but is actually full of shit, and I was only able to find that shit through external validation: talking to the dev, brainstorming ways to test it; the things you categorically cannot do with an unreliable random-word generator.
That’s why you use unit tests and integration tests.
I can write bad code myself or copy bad code from who-knows-where. It’s not something introduced by LLMs.
Remember the famous Linus letter? “You code this function without understanding it, and thus your code is shit.”
As I said, just a tool like many others before it.
I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts were from the LLM and which parts I wrote fully by myself, and, to be honest, I don’t think anyone would be able to tell the difference.
It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.
I may come back with a particular piece of code that I specifically remember being output from DeepSeek, and probably within the whole context it would be indistinguishable.
Also, not all LLM usage is about copying from it. Many times you paste code into it and ask the thing to explain it to you, or ask general questions. For instance, to find specific functions in extensive C# libraries.
That’s why you use unit tests and integration tests.
Good start, but not even close to being enough. What if the code introduces undefined behavior (UB)? Unless you specifically look for it, and nobody does, neither unit nor on-target tests will find it. What if it’s drastically inefficient? What if there are weird and unusual corner cases?
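A minimal sketch of the UB case, in hypothetical C# (everything below is invented for illustration; compile with csc -unsafe): the off-by-one read past the buffer is undefined behavior, yet the happy-path check will often come up green anyway.

```csharp
using System;

class UbSketch
{
    // Sums 'count' ints, but the loop condition is wrong (<= instead of <),
    // so it also reads one element past the end of the buffer.
    static unsafe int Sum(int* data, int count)
    {
        int sum = 0;
        for (int i = 0; i <= count; i++) // bug: should be i < count
            sum += data[i];              // out-of-bounds read when i == count
        return sum;
    }

    static unsafe void Main()
    {
        int* buf = stackalloc int[3] { 1, 2, 3 };
        // The "unit test": often prints PASS, because the stray stack slot
        // frequently happens to hold zero. Green test, broken code.
        Console.WriteLine(Sum(buf, 3) == 6 ? "PASS" : "FAIL");
    }
}
```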
Now you spend more time looking for all of that and designing tests for it, which you wouldn’t have needed to do if you’d had proper practices from the beginning.
It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.
But that’s worse! You do realise how that’s worse, right? You lose all the external ways to validate the code; now you have to treat all of it as malicious.
For instance, to find specific functions in extensive C# libraries.
And spend twice as much time trying to understand why you can’t find a function that your LLM just invented with the absolute certainty of a fancy autocomplete. And if that’s an easy task for you, well, then why do you need this middle layer of randomness? I can’t think of a reason not to just search the documentation instead of playing this weird game of “will it lie to me”.
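To illustrate that trap with a hypothetical sketch (the method name below is invented, exactly the kind of thing an LLM emits with total confidence; the real BCL call is in the comment):

```csharp
using System.IO;

class HallucinationSketch
{
    static void Main()
    {
        // Looks plausible, does not compile: Directory has no member named
        // GetFilesRecursive. You only find out when the compiler complains,
        // and then you go dig through the real docs anyway.
        string[] files = Directory.GetFilesRecursive("/projects");

        // The actual API you were after all along:
        // string[] files = Directory.GetFiles(
        //     "/projects", "*", SearchOption.AllDirectories);
    }
}
```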