• acastcandream@beehaw.org · 2 years ago

    AI is just compositing a bunch of information. It isn’t making anything in any sense of the word; it’s approximating the weighted-sum results of all the things you gave it.
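
    To make the weighted-sum point concrete, here is a toy Python sketch (made-up numbers, standing in for no real model) of the basic move: learned weights produce a score for each candidate token, and the model just samples from the probabilities those scores imply.

    ```python
    import math
    import random

    # Toy next-token step: the trained weights yield one score (logit) per
    # candidate token; softmax turns the scores into probabilities.
    logits = {"cat": 2.0, "dog": 1.5, "car": 0.2}  # invented numbers

    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # Sample the next token according to those probabilities.
    next_token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", next_token)
    ```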

    This lets AI evangelists talk out of both sides of their mouths. They can sit there and say “it’s nothing different from what we do,” but when we talk about how we are different from LLMs, they say “well, you can’t expect it to be like people or treat it like people.” It’s maddening. There’s no consistency in the logic.

    I am not against AI/LLMs. I use them in my work already. They have their place. But this weird, almost religious devotion to some promise of AI, and the weird white-knighting I see folks do for it, is just baffling to watch. We are entering uncharted territory, and most reasonable people are simply saying “can we just stop and think about things for a second before we unleash them on ourselves?” But apparently that makes you a Luddite.

    • donuts@kbin.social · 2 years ago

      But this weird, almost religious devotion to some promise of AI, and the weird white-knighting I see folks do for it, is just baffling to watch.

      When you look at it through the lens of the latest get-rich-quick-off-some-tech-that-few-people-understand grift, it makes perfect sense.

      They naively see AI as a magic box that can make infinite “content” (which of course belongs to them for some reason, and is fair use of other people’s copyrighted data for some reason), and infinite content = infinite money, just as long as you can ignore the fundamentals of economics and intellectual property.

      People have invested a lot of their money and emotional energy into AI because they think it’ll make them a return on investment.

    • lloram239@feddit.de · 2 years ago

      they say “well, you can’t expect it to be like people or treat it like people.” It’s maddening.

      Current AI models are 100% static: once trained, their weights do not change at all. So ascribing any kind of sentience to them, or anything in that direction, makes no sense, because the models fundamentally aren’t capable of it. They learn patterns from the world and can mush them together in original ways; that’s neat, and it might even be a very important step towards something more human-like, but that’s all they do. AI is not people. They don’t think while you aren’t looking. Treating them like a person is fundamentally misunderstanding how they work.
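
      Since this turns on the models being frozen at inference time, here is a minimal sketch of that claim (assuming PyTorch, with a toy linear layer standing in for an LLM; the names are illustrative): running the model any number of times leaves its weights bit-for-bit unchanged.

      ```python
      import torch

      # Toy stand-in for a trained model: at inference time the weights are fixed.
      model = torch.nn.Linear(4, 2)
      model.eval()  # inference mode: no dropout, no training behavior

      before = {name: p.clone() for name, p in model.named_parameters()}

      with torch.no_grad():  # no gradients, so nothing updates the weights
          for _ in range(1000):
              model(torch.randn(1, 4))  # each call only reads the weights

      # Parameters are identical after any number of forward passes.
      assert all(torch.equal(before[n], p) for n, p in model.named_parameters())
      ```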

      But this weird, almost religious devotion to some promise of AI

      AI can solve a lot of problems that are unsolvable by any other means. It has also made rapid progress over the last ten years and seems set to continue doing so. So it’s not terribly surprising that there is hype around it.

      “can we just stop and think about things for a second before we unleash them on ourselves?”

      The problem with that is: if you aren’t developing AI right now, the competition is. It’s just math. Even if you outlawed it, companies would simply move to different countries. Technology is hard to stop, especially when it’s clearly a superior solution to the alternatives.

      Another problem is that “think about things” just hasn’t been very productive so far. The problems AI can create are quite real; the proposed solutions, much less so. I do agree with Hinton that we should put far more effort into AI safety research, but at the same time I expect that to just show us more ways in which AI can go wrong, without providing anything to prevent it.

      I am not terribly optimistic here: just look at how well we are doing with climate change, which is a much simpler problem with much easier solutions, and we are still nowhere close to actually solving it.