• dil@lemmy.zip · 21 days ago

    Would AI coders even get faster over time, or just stay stagnant, since they aren’t learning anything about what they’re doing?

  • _cnt0@sh.itjust.works · 22 days ago

    I’ll quote myself from some time ago:

    The entire article is based on the flawed premise that “AI” would improve the performance of developers. From my daily observation, the only people increasing their throughput with “AI” are inexperienced and/or bad developers. So: create terrible code faster with “AI”. Suggestions by Copilot are >95% garbage (even for trivial stuff), just slowing me down in writing proper code (obviously I disabled it precisely for that reason). And I spend more time on PRs filtering out the “AI” garbage inserted by juniors and idiots.

    “AI” is killing the productivity of the best developers even if they don’t use it themselves, decreasing code quality (more bugs, more time wasted) and reducing maintainability (more time wasted). At this point I assume ignorance and incompetence of everybody talking about the benefits of “AI” for software development. Oh, you have 15 years of experience in the field and “AI” has improved your workflow? You sucked at what you’ve been doing for 15 years, and “AI” increases the damage you are doing, which later has to be fixed by people who are more competent.

    • FizzyOrange@programming.dev · 21 days ago

      Why? That is a great use for AI. I’m guessing you are imagining that people are just blindly asking for unit tests and not even reading the results? Obviously don’t do that.

  • Scrath@lemmy.dbzer0.com · 22 days ago

    I talked to Microsoft Copilot three times for work-related reasons, because I couldn’t find something in the documentation. I was lied to all three times: it either made stuff up about how the thing I asked about works, or invented entirely new configuration settings.

    • Senal@programming.dev · 20 days ago

      In fairness, the MSDN documentation is prone to this too.

      By “this” I mean having what looks like a comprehensive section about the thing you want, where the actual information you need isn’t there, and you only find that out after reading the whole thing.

    • rozodru@lemmy.world · 22 days ago

      Claude AI does this ALL the time too. It NEEDS to give a solution and can rarely say “I don’t know”, so it will just make up a solution it thinks is right, without checking whether that solution actually exists. It will dream up programs or libraries that don’t exist and never have, OR it will tell you something can do a thing it has never been able to do.

      And that’s just how all these LLMs have been built: they MUST provide a solution, so they all lie. They’ve been programmed this way to ensure maximum profits. GitHub Copilot is a bit better because it’s with me in my code, so its suggestions actually work most of the time; it can see the context and what’s around it. Claude is absolute garbage, MS Copilot is about the same caliber if not worse, and ChatGPT is only good for content writing or bouncing ideas off of.

      • Croquette@sh.itjust.works · 21 days ago

        LLMs are just sophisticated text-prediction engines. They don’t know anything, so they can’t produce an “I don’t know”: they can always generate a text prediction, and they can’t think.
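
        A toy sketch of that point in C (purely illustrative, nothing like a real LLM’s implementation): a lookup table of next words plus a fallback. The key detail is that there is no code path that yields “I don’t know”; the generator always emits something.

        ```c
        #include <stdio.h>
        #include <string.h>

        /* Toy "prediction engine": maps a word to a plausible next word. */
        static const char *next_word(const char *w) {
            static const char *pairs[][2] = {
                {"the",      "library"},
                {"library",  "supports"},
                {"supports", "everything"},
            };
            for (size_t i = 0; i < sizeof pairs / sizeof pairs[0]; i++)
                if (strcmp(pairs[i][0], w) == 0)
                    return pairs[i][1];
            return "probably"; /* no match? predict something anyway:
                                  there is no "I don't know" branch */
        }

        int main(void) {
            const char *w = "the";
            for (int i = 0; i < 5; i++) { /* always produces output */
                printf("%s ", w);
                w = next_word(w);
            }
            printf("%s\n", w);
            return 0;
        }
        ```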

        • Cyberflunk@lemmy.world · 21 days ago

          Tool use, reasoning, chain of thought: those are the things that set LLM systems apart. While you are correct in the most basic sense, it’s like saying a car is only a platform with wheels; it’s reductive of the capabilities.

          • Croquette@sh.itjust.works · 21 days ago

            LLMs are prediction engines. They don’t have knowledge; they only chain together words related to your topic.

            They don’t know when they are wrong, because they just don’t know anything, period.

  • daniskarma@lemmy.dbzer0.com · 22 days ago

    The study was centered on bug-fixing in large, established projects. That is not really the task AI helpers excel at.

    Also, the small number of participants (16), the participants’ familiarity with the code base, and the fact that all tasks were fairly short in completion time can skew the results.

    Hence the divergence between the study’s results and many people’s personal experience: those who report a productivity increase are doing different tasks in a different scenario.

    • Feyd@programming.dev · 22 days ago

      familiar with the code base

      Call me crazy but I think developers should understand what they’re working on, and using LLM tools doesn’t provide a shortcut there.

    • 6nk06@sh.itjust.works · 22 days ago

      The study was centered on bugfixing large established projects. This task is not really the one that AI helpers excel at.

      “AI is good for Hello World projects written in JavaScript.”

      Managers will still fire real engineers though.

      • daniskarma@lemmy.dbzer0.com · 22 days ago

        I find it more useful for doing large language transformations and for delving into unknown patterns, languages, or environments.

        If I know a source head to toe, and I’m proficient with that environment, it’s going to offer little help. Especially if it’s a highly specialized problem.

        Since the SVB crash there have been firings left and right. I suspect AI is only an excuse for them.

        • Zos_Kia@lemmynsfw.com · 22 days ago

          Same experience here: performance is mediocre at best on an established code base. Recall tends to drop sharply as the context expands, leading to a lot of errors.

          I’ve found coding agents to be great at bootstrapping projects on popular stacks, but once you reach a certain size it’s better to either make them work on isolated files, or to code manually and rely on the autocomplete.

          • justastranger@sh.itjust.works · 21 days ago

            So far I’ve only found it useful when describing bite-sized tasks, to get suggestions on which functions of the library/API I’m using are relevant. And only when those functions have documentation available on the Internet.

  • Arghblarg@lemmy.ca · 22 days ago

    I feel this. We had a junior dev on our project who started using AI for coding, without management approval, BTW (it was a small company and we didn’t yet have a policy specifically for it. Alas).

    I got the fun task, months later, of going through an entire component that I’m almost certain was “vibe coded”. It “worked” the first time the main APIs were called, but leaked and crashed on subsequent calls. It used double and even triple pointers to data structures which, per even a casual reading of the API vendor’s documentation, could all have been declared statically and reused (this was an embedded system); needless arguments; and mallocs and frees everywhere for no good reason (again due to all the unneeded dynamic storage behind those double/triple pointers). It was a horrible mess.
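
    To make the shape of the problem concrete, here is a minimal sketch (hypothetical names, not the vendor’s actual API) of the two styles: the heap allocation handed back through a double pointer, which leaks the moment a caller forgets to free, versus the statically declared, reusable structure the documentation suggested.

    ```c
    #include <stdlib.h>

    /* Hypothetical stand-in for the vendor's data structure. */
    typedef struct { int channel; int value; } sensor_cfg_t;

    /* The "vibe coded" shape: a fresh heap allocation returned through
     * a double pointer on every call; every caller must remember to
     * free it, so repeated calls leaked. */
    int read_sensor_dynamic(sensor_cfg_t **out) {
        *out = malloc(sizeof **out);
        if (*out == NULL) return -1;
        (*out)->channel = 0;
        (*out)->value = 42; /* pretend hardware read */
        return 0;           /* caller now owns *out */
    }

    /* The cleaned-up shape: one statically declared structure,
     * reused on every call; no heap, nothing to leak. */
    int read_sensor_static(int channel, int *value) {
        static sensor_cfg_t cfg; /* declared once, reused */
        cfg.channel = channel;
        cfg.value = 42;          /* pretend hardware read */
        *value = cfg.value;
        return 0;
    }
    ```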

    It never should have gotten through code review, but the senior devs were themselves overloaded with work (another, separate problem)…

    I took two days and cleaned it all up: much simpler, no memory leaks, and it could actually be, you know, used more than once.

    Fucking mess. LLMs (don’t call them “AI”) just allow the lazy and/or inexperienced to skate through short-term tasks, leaving huge technical debt for those who have to clean up after them.

    If you’re doing job interviews, ensure the interviewee is not connected to LLMs in any way and make them write the code themselves. No exceptions. Consider blocking LLMs from your corp network as well, and ban locally installed things like Ollama.

    • jonathan7luke@lemmy.zip · 21 days ago

      It should have never gotten through code review, but the senior devs were themselves overloaded with work

      Ngl, as much as I dislike AI, I think this is really the bigger issue. Hiring a junior and then merging their contributions without code review is a disaster waiting to happen, with or without AI.

  • Phen@lemmy.eco.br · 22 days ago

    Reading the paper, AI did a lot better than I would expect. It showed experienced devs working on a familiar code base got 19% slower. It’s telling that they thought they had been more productive, but the result was not that bad tbh.

    I wish we had similar research for experienced devs on unfamiliar code bases, or for inexperienced devs, but those would probably be much harder to measure.

    • staircase@programming.dev · 22 days ago

      I don’t understand your point. How is it good that the developers thought they were faster? Does that imply anything at all in LLMs’ favour? IMO that makes the situation worse because we’re not only fighting inefficiency, but delusion.

      20% slower is substantial. Imagine the effect on the economy if 20% of all output were discarded (or, more accurately, spent burning electricity).

  • Cyberflunk@lemmy.world · 21 days ago

    My velocity has taken an unreasonable rocket trajectory: deploying internal tooling, agent creation, automation. I have teams/swarms that tackle so many things, and do it well. I understand there are issues, but learning how to use the tools is critical to improving performance; blindly expecting them to be sci-fi super-coders is unrealistic.

  • rizzothesmall@sh.itjust.works · 22 days ago

    AI-only vibe coders, maybe. As a development manager I can tell you that AI-augmented actual developers, ones who know how to write software and what good and bad code look like, are unquestionably faster. GitHub Copilot makes creating a suite of unit tests and documentation for a class take very little time.
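
    For context, the kind of output meant here is mechanical test boilerplate like the following (a hand-written sketch with a made-up function, not actual Copilot output). Generating this is fast precisely because it is mechanical; checking that the cases are the right ones is still the developer’s job.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical function under test. */
    static int clamp(int v, int lo, int hi) {
        if (v < lo) return lo;
        if (v > hi) return hi;
        return v;
    }

    /* The mechanical suite: one assert per case, boundaries included. */
    int main(void) {
        assert(clamp(5, 0, 10) == 5);   /* in range: unchanged */
        assert(clamp(-3, 0, 10) == 0);  /* below range: clamped to lo */
        assert(clamp(42, 0, 10) == 10); /* above range: clamped to hi */
        assert(clamp(0, 0, 10) == 0);   /* boundary: lo itself */
        assert(clamp(10, 0, 10) == 10); /* boundary: hi itself */
        puts("all clamp tests passed");
        return 0;
    }
    ```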