• Dkarma@lemmy.world · 9 months ago

      I love how ppl who don’t have a clue what AI is or how it works say dumb shit like this all the time.

      • zbyte64@awful.systems · 9 months ago

        I also love making sweeping generalizations about a stranger’s knowledge on this forum. The smaller the data sample the better!

      • Ragnarok314159@sopuli.xyz · 9 months ago

        There is no AI. It’s all shitty LLMs. But keep sucking those techbros’ cheesy balls. They will never invite you to the table.

        • WindyRebel@lemmy.world · 9 months ago

          Honest question, but aren’t LLMs a form of AI, and thus… maybe not AI as people expect, but still AI?

          • whats_all_this_then@lemmy.world · 9 months ago

            The issue is that “AI” has become a marketing buzzword instead of anything meaningful. When someone says “AI” these days, what they’re actually referring to is “machine learning”. Like in LLMs, for example: what’s actually happening (at a very basic level, and please correct me if I’m wrong, people) is that, given one or more words/tokens, the model calculates the most probable next word/token based on its training on ridiculously large amounts of human-written text. It does this well enough, and at a large enough scale, that the output is cohesive, comprehensive, and useful.
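
            To make that concrete, here’s a toy sketch of the “pick a probable next token” loop (my own illustration in Python, not anything OpenAI actually uses; next_token_probs() is a made-up stand-in for a trained model):

```python
import random

# Toy illustration of next-token prediction. This is NOT any real model's code:
# next_token_probs() is a made-up stand-in for a huge neural network trained on
# enormous amounts of human-written text.

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    """Return a probability for every token in the vocabulary, given the context."""
    # A real model conditions heavily on the context; here we just return
    # a flat, made-up distribution to keep the example self-contained.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def generate(prompt, max_new_tokens=5):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        # Sample the next token in proportion to its probability.
        # (Greedy decoding would instead take max(probs, key=probs.get).)
        next_tok = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(next_tok)
    return " ".join(tokens)

print(generate("the cat"))
```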

            While the results are undeniably impressive, this is not intelligence in the traditional sense; there is no reasoning or comprehension, and definitely no consciousness or awareness here. To grossly oversimplify, LLMs are really, really good word calculators and can be very useful. But leave it to the tech bros to make them sound like the second coming and shove them where they don’t belong just to get more VC money.

            • slackassassin@sh.itjust.works · 9 months ago

              Sure, but people seem to buy into that very buzzwordiness and ignore the usefulness of the technology as a whole because “AI bad.”

              • whats_all_this_then@lemmy.world · 9 months ago · edited

                True. Even I’ve been guilty of that at times. It’s just hard right now to see the positives through the countless downsides and the fact that the biggest application we’re moving towards seems to be taking value from talented people and putting it back into the pockets of companies that were already hoarding wealth and treating their workers like shit.

                So usually when people say “AI is the next big thing”, I say “Eh, idk how useful an automated idiot would be” because it’s easier than getting into the weeds of the topic with someone who’s probably not interested haha.

                Edit: Exhibit A

  • sudo42@lemmy.world · 9 months ago

    Sam Altman is demonstrating the power of AI. He’s showing how a single CEO can fire the entire company and continue to develop the product to be even better than when humans were involved.

    “OpenAI. No real humans involved!” ™

  • barnaclebutt@lemmy.world · 9 months ago

    I’m sure they were dead weight. I trust OpenAI completely and all tech gurus named Sam. Btw, what happened to that crypto guy? He seemed so nice.

  • N0body@lemmy.dbzer0.com · 9 months ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.

    Cancer screenings approved by a doctor could be accurate enough to save so many lives and prevent so much suffering through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

    • Petter1@lemm.ee · 9 months ago

      Or we get to a time where we send a reprogrammed Terminator back in time to kill Altman 🤓

    • patatahooligan@lemmy.world · 9 months ago

      No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.

      • Petter1@lemm.ee · 9 months ago

        There are infinite timelines, so one has to exist some(where/when/[insert w-word for additional dimension]).

      • rsuri@lemmy.world · 9 months ago · edited

        I mean, Wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in Wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.

        There’s probably an alternate timeline where Wikipedia is a social network with paid verification by corporate interests who write articles about their own companies and state-funded accounts spreading conspiracy theories.

      • mustbe3to20signs@feddit.org · 9 months ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Further developing this strength could lead to earlier diagnosis with less-invasive methods, saving not only countless lives and prolonging quality of life for the individual, but also saving a shit ton of money.

        • T156@lemmy.world · 9 months ago

          That is a different kind of machine learning model, though.

          You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.

          And those image recognition models aren’t something OpenAI is currently working on, iirc.

          • Grandwolf319@sh.itjust.works · 9 months ago

            Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.

          • TFO Winder@lemmy.ml · 9 months ago

            Don’t know about image recognition, but they did release DALL-E, which is an image generation and inpainting model.

          • Petter1@lemm.ee · 9 months ago · edited

            Fun thing is, most of the things AI can do were never planned; all they set out to build was an autocompletion tool.

          • mustbe3to20signs@feddit.org · 9 months ago

            I’m fully aware that those are different machine learning models, but instead of focusing on LLMs, which have only limited use for mankind, advancing image recognition models would have been much better.

            • Grandwolf319@sh.itjust.works · 9 months ago

              I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models were around before OpenAI.

              So if OpenAI had never released ChatGPT, AI wouldn’t have become synonymous with crypto in terms of false promises.

        • msage@programming.dev · 9 months ago

          Wasn’t it shown that an AI was getting amazing results because it noticed the cancer screens had a doctor’s signature at the bottom? Or did they do another run with the signatures hidden?

          • mustbe3to20signs@feddit.org · 9 months ago

            More than one system has been shown to “cheat” because of biased training material. One model told ducks and chickens apart mainly because it was trained on pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
            Since multiple image recognition systems are in development, I can’t imagine they’re all this faulty.
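
            As a toy sketch of that kind of shortcut learning (made-up data with scikit-learn, not the actual duck/chicken study): a model trained on data where the background perfectly matches the label scores almost perfectly in training, then falls apart once that correlation is broken.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data, not the real study.
# Feature 0: "is the background water?"  Feature 1: a weak, noisy cue from the bird itself.
rng = np.random.default_rng(0)

def make_data(n, background_matches_label):
    labels = rng.integers(0, 2, n)              # 0 = chicken, 1 = duck
    bird_cue = labels + rng.normal(0, 2.0, n)   # weak signal from the bird itself
    background = labels if background_matches_label else rng.integers(0, 2, n)
    return np.column_stack([background, bird_cue]), labels

X_train, y_train = make_data(1000, background_matches_label=True)   # biased training set
X_test, y_test = make_data(1000, background_matches_label=False)    # shortcut broken

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near 1.0, thanks to the background
print("test accuracy:", model.score(X_test, y_test))     # much worse once the shortcut fails
```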

            • msage@programming.dev · 9 months ago

              They are not ‘faulty’; they have been fed the wrong training data.

              This is the most important aspect of any AI: it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.

              That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.

  • halcyoncmdr@lemmy.world · 9 months ago

    You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.

    /s

    • nickwitha_k (he/him)@lemmy.sdf.org · 9 months ago

      Home computers typically use closed-loop cooling. Datacenters are a different beast, and a fair number of open-loop systems seem to be in use.

      • boonhet@lemm.ee · 9 months ago

        But even then, is the water truly consumed? Does it get contaminated with something, like the cooling water of a nuclear power plant? Or does the water just get warm and then either get pumped into a body of water somewhere or, ideally, reused to heat homes?

        There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.

        • Cryophilia@lemmy.world · 9 months ago

          It evaporates. A lot of datacenters use evaporative cooling. They take water from a usable source, like a river, and turn it into unusable water vapor.
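
          As a rough back-of-the-envelope sketch (my own numbers, assuming the worst case where all heat is removed by evaporation):

```python
# Back-of-the-envelope estimate of evaporative cooling water use.
# Evaporating 1 kg of water absorbs roughly 2.26 MJ (latent heat of vaporization),
# so rejecting heat purely by evaporation consumes on the order of litres per kWh.

LATENT_HEAT_MJ_PER_KG = 2.26  # energy absorbed by evaporating 1 kg of water

def water_evaporated_litres(heat_mwh: float) -> float:
    """Litres (~kg) of water evaporated to reject the given amount of heat."""
    heat_mj = heat_mwh * 3600  # 1 MWh = 3600 MJ
    return heat_mj / LATENT_HEAT_MJ_PER_KG

print(f"~{water_evaporated_litres(1):.0f} L of water per MWh of heat rejected")  # ~1600 L
```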

        • nickwitha_k (he/him)@lemmy.sdf.org · 9 months ago

          But even then, is the water truly consumed?

          Yes. People and crops can’t drink steam.

          Does it get contaminated with something like the cooling water of a nuclear power plant?

          That’s not a thing in nuclear plants that are functioning correctly. Water that may be evaporated is kept from contact with fissile material, by design, to prevent regional contamination. Now, Cold War era nuclear jet airplanes were a different matter.

          Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?

          A minority of datacenters use water in such a way; Helsinki is the only one that comes to mind. This would be an excellent way of reducing the environmental impact, but it requires investments that corporations are seldom willing to make.

          There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.

          Unfortunately, it is. Primarily due to climate change. Water insecurity is an issue of increasing importance, and some companies, like Nestlé (fuck Nestlé), are accelerating it for profit. Of vital importance to human lives is getting ahead of the problem, rather than trying to fix it when it inevitably becomes a disaster and millions are dying of thirst.

        • JustTesting@lemmy.hogru.ch · 9 months ago

          In addition to all the other comments, pumping warm water into natural bodies of water can also be bad for the environment.

          I know of one nuclear power plant that does this, and it’s pretty bad for the coral population there.

        • utopiah@lemmy.world · 9 months ago

          Search for “water positive” commitments. You will quickly see it’s a “goal”, which means it is consequently NOT yet the case. In places where water is abundant it might not be a problem; where it’s scarce, it’s literally a choice between crops to feed people and… compute cycles.

        • JamesFire@lemmy.world · 9 months ago

          Does it get contaminated with something like the cooling water of a nuclear power plant?

          This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always a closed-loop system, for exactly this reason.

    • dan@upvote.au · 9 months ago · edited

      It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
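
      As a minimal sketch of what that looks like in practice (illustrative only: the model name and settings are examples, and you still have to accept Meta’s license on Hugging Face and have enough GPU/CPU memory):

```python
# Minimal sketch of running a Llama model locally with Hugging Face transformers.
# Illustrative only; assumes the license has been accepted and enough memory is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly halves memory use versus float32
    device_map="auto",           # spread the weights across available devices
)

inputs = tokenizer("Why is the sky blue?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```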

      • a9cx34udP4ZZ0@lemmy.world · 9 months ago

        That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.

        OpenAI makes money off selling AI to others. AI is the product, not you.

        The fact that Facebook releases more code, in this instance, isn’t a good thing. It’s a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.

        • dan@upvote.au · 9 months ago

          Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.

            • wischi@programming.dev · 9 months ago

              Selling your data would be stupid, because they make money with the fact that they have data about you nobody else has. Selling it would completely break their business model.

  • kippinitreal@lemmy.world · 9 months ago

    Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.

    The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can match or beat ChatGPT, but they aren’t taking off because they aren’t as cool as OpenAI.

    The business model of training and running these models is not sustainable. If there is any money to be made, it is NOW, while speculation is at its highest. The nonprofit is just getting in the way.

    This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.

    • Kalysta@lemm.ee · 9 months ago

      If you can’t make money without stealing copyrighted works from authors without proper compensation, you should be shut down as a company.

    • trollblox_@programming.dev · 9 months ago

      AI is such a dead end. It can’t operate without a constant inflow of human creations, and people are trying to replace human creators with AI. It’s fundamentally unsustainable. I am counting the days until the AI bubble pops and everyone can move on, although AI-generated images, video, and audio will still probably be abused for the foreseeable future (propaganda, porn, etc.).

    • somethingsnappy@lemmy.world · 9 months ago

      Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.

        • frunch@lemmy.world · 9 months ago

          It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.

          I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.

          • sunzu2@thebrainbin.org · 9 months ago

            A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it

            A lifetime of propaganda has got people confused lol

            Nonprofit merely means that their core income-generating activities are not subject to income tax.

            While some nonprofits are charities, many are just shelters for rich people’s bullshit behaviors: foundations, lobby groups, propaganda orgs, political campaigns, etc.

  • Kyrgizion@lemmy.world · 9 months ago

    Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.

    I hope he gets raped by an irate Roomba with a broomstick.

    • ipkpjersi@lemmy.ml · 9 months ago

      But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies.

      I mean it was already not open-source, right?

    • eatthecake@lemmy.world · 9 months ago

      Good. If people would actually stop buying all the crap assholes are selling we might make some progress.

      • vane@lemmy.world · 9 months ago · edited

        But their operating cost is $5 billion per year; they plan to raise $6.5 billion from Microsoft, Apple, and Nvidia this year, and they have not raised it yet. If their model fails next year and the sales don’t happen, will the shareholders of the big 3 pay $6.5 billion in 2026? A couple of companies have raised that kind of money early on, for example Docker Inc. Where is Docker now in the enterprise? They needed to change their licensing model just to survive, and their operating cost is just the storage of Docker containers. I doubt OpenAI will survive this decade. Sam Altman is just preparing for a Microsoft takeover before the ship sinks.