• coffee_with_cream@sh.itjust.works
    11 months ago

It’s weird to me that people on Lemmy are so anti-ML. If you aren’t impressed, you haven’t used it enough. “Oh, it’s not 100% perfect.”

    • smiletolerantly@awful.systems
      11 months ago

      I was fully on board until, like, a year ago. But the more I used it, the more obviously it came undone.

      I initially felt like it could really help with programming. And it looked like it, too - when you fed it toy problems where you don’t really care about how the solution looks, as long as it’s somewhat OK. But once you start giving it constraints that stem from a real project, it just stops being useful. It ignores constraints (use this library, do not make additional queries, …), and when you point out its mistake and ask it to do better it goes “oh, sorry! Here, let me do the same thing again, with the same error!”.

      If you’re working in a less common language, it even dreams up non-existing syntax.

      Even the one thing it should be good at - plain old language - it sucks ass at. It’s become so easy to spot LLM garbage, just due to its style.

      Worse, when you ask it to proofread a text for spelling and grammar mistakes, explicitly telling it not to change the wording or style, there’s about a 50/50 chance it will either

      • change your wording or style, or
      • point out errors that are not even in the original text in the first place!

      I could honestly go on and on, but what it boils down to is: it is able to string together words that make it sound like it knows what it is doing, but it is just that, a facade. And it looks like for more and more people, the spell is finally breaking.

    • nandeEbisu@lemmy.world
      11 months ago

      In terms of practical commercial uses, these highly human-in-the-loop systems are about where it is, and there are practical applications and products built off of it. What was sold, though, was much more: either a replacement of people or a significant jump in functionality.

      For example, there are products that will give you an AI summary of a structured or fairly uniform document like a generic press release. But there’s not really a good replacement for something that reads backgrounds on 50 different companies and figures out which one you should invest in, without a human basically doing all of that work themselves anyway just to check the AI’s output. The latter is what is being sold to make the enormous cost of hosting and training AI worth it.

  • mctoasterson@reddthat.com
    11 months ago

    I mean, they aren’t wrong. From an efficiency standpoint, current AI is like using a 350hp car engine to turn a child’s rock tumbler or spin-art thingy. Sure, it produces some interesting outputs, but at the cost of way too much energy for what is being done. That is the current scenario of using generalized compute or even high-end GPUs for AI.

    Best I can tell, the “way forward” is further development of ASICs specific to the model being run. This should increase efficiency, decrease the ecological impact (less electricity usage), and free up silicon and components, possibly decreasing prices and increasing availability of things like consumer graphics cards again (but I won’t hold my breath for that part).

  • LEX@lemm.ee
    11 months ago

    Oh yeah? It’s great for porn, Goldman Sachs, you bunch of suit wearing degenerates. I bet a lot of people would argue that’s pretty freaking useful.

  • hendrik@palaver.p3x.de
    11 months ago

    Came here to say: we read last week that the industry spent $600bn on GPUs; they need that investment returned, so we’re getting AI whether it’s useful or not… But that’s also written in the article.

  • Echo Dot@feddit.uk
    11 months ago

    Yeah, but it’s Goldman Sachs saying it. Presumably because they haven’t invested in AI.

    Perhaps we could get a non-biased opinion and also from an actual expert rather than some finance ghoul who really doesn’t know anything?

    • Balder@lemmy.world
      11 months ago

      The problem is experts in AI are biased towards AI (it pays their salaries).

    • 0x0@programming.dev
      11 months ago

      I’d say they know a thing or two about finance… so maybe they didn’t invest because they see it as overhype?

    • demonsword@lemmy.world
      11 months ago

      Presumably because they haven’t invested in AI.

      “Presumably” is carrying all the weight of your whole post here.

      Perhaps we could get a non-biased opinion and also from an actual expert rather than some finance ghoul who really doesn’t know anything?

      I also hate banks, but usually those guys can sniff out market failures way ahead of the rest of us. All their bacon rides on that, after all.

    • frezik@midwest.social
      11 months ago

      It’s noteworthy because it’s Goldman Sachs. Lots of money people are dumping it into AI. When a major outlet for money people starts to show skepticism, that could mean the bubble is about to pop.

  • Fades@lemmy.world
    11 months ago

    Absolutely true, but the morons (willful and not) will take this as additional proof that it’s altogether useless and a net negative.