• Echo Dot@feddit.uk · 1 year ago

    Oh just what we need, a hive mind of morons. Together they may be able to reach average.

  • Ænima@lemm.ee · 1 year ago

    If anyone is dumb enough to put anything from that dude into their head, that brain was already damaged!

  • DrCake@lemmy.world · 1 year ago

    Brain chips from the people who “move fast and break things”. This can only end well.

  • AutoTL;DR@lemmings.world [bot] · 1 year ago

    This is the best summary I could come up with:


    Of all Elon Musk’s exploits — the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars — his secretive brain chip company Neuralink may be the most dangerous.

    Former Neuralink employees as well as experts in the field have alleged that the company pushed for an unnecessarily invasive, potentially dangerous approach to the implants that can damage the brain (and apparently has done so in animal test subjects) to advance Musk’s goal of merging with AI.

    The letter warned that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and went on to ask: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

    If the intravascular approach can restore key functioning to paralyzed patients, and also avoids some of the safety risks that come with crossing the blood-brain barrier, such as inflammation and scar tissue buildup in the brain, why opt for something more invasive than necessary?

    Which perhaps helps make sense of the company’s dual mission: to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”

    Watanabe believes Neuralink prioritized maximizing bandwidth because that serves Musk’s goal of creating a generalized BCI that lets us merge with AI and develop all sorts of new capacities.


    The original article contains 3,312 words, the summary contains 220 words. Saved 93%. I’m a bot and I’m open source!

    • GoosLife@lemmy.world · 1 year ago

      At the same time, I feel like we shouldn’t let that happen, because imagine if he actually succeeds? Then we’d just have an immortal crackhead Lex Luthor with a hallucinating ChatGPT whispering further delusions directly into his brain. That can’t be good for any of us.

      • GladiusB@lemmy.world · 1 year ago

        Absolutely. However, through his maniacal adventures he may at least find where this technology should NOT go in order to progress.

    • lemmy_user_838586@lemmy.ml · 1 year ago

      See, this is why I love SciFi. “Amazing! I wonder how this will go…”, questions the average person. “Here’s 50 books and movies that will show you how this will go…”

      • kromem@lemmy.world · 1 year ago (edited)

        Except it doesn’t.

        Don’t overlook the ‘Fi’ in ‘SciFi.’

        Aspects of tech are often correctly predicted in SciFi, going all the way back to Lucian writing in the 2nd century about a ship of men flying up to the moon.

        But surrounding what they get right, the authors always get things wrong too. For example, contrary to Lucian’s ideas, in reality the ship of men that flew up to the moon didn’t find a race of human-like aliens who were all men, could carry children, and had a bunch of gay sex with the men of Apollo 11.

        TL;DR: Correctly predicting a technology in a story doesn’t mean correctly predicting the social impact and context for that technology.