• Aisteru@lemmy.aisteru.ch · 8 months ago

    Honestly? Before the AI craze, I’d have said yes, because I believe AIs tailored to do one specific thing can outperform humans. Today? I’d rather not, as I could not let go of the thought that it might be some shitty model quickly thrown together by the CEO’s nephew…

  • andallthat@lemmy.world · 8 months ago (edited)

    I’m not sure we, as a society, are ready to trust ML models to do things that might affect lives. This is true for self-driving cars, and I expect it to be even more true for medicine. In particular, we can’t accept ML failures, even once they become statistically less likely than human errors.

    I don’t know if this is currently true or not, so please don’t shoot me for this specific example, but IF we were to have reliable stats that, everything else being equal, self-driving cars cause fewer accidents than humans, a machine error will always be weird and alien and harder for us to justify than a human one.

    “He was drinking too much because his partner left him”, “she was suffering from a health condition and had an episode while driving”… we have the illusion that we understand humans and (to an extent) that this understanding helps us predict who we can trust not to drive us to our death or not to misdiagnose some STI and have our genitals wither. But machines? Even if they were 20% more reliable than humans, how would we know which ones we can trust?

    • Petter1@lemm.ee · 8 months ago

      I think ML has already been used in medicine for about 20 years, in various laboratory processes and equipment.

      Maybe not to make the final call, but to point experts to where to look and what to check.

  • pixeltree@lemmy.blahaj.zone · 8 months ago

    Would I trust the accuracy of the output? No, but it might be a decent warning to get tested to make sure. Would I trust a company with pictures of my genitals attached to my identity? Certainly not an AI company.

    • SkaveRat@discuss.tchncs.de · 8 months ago

      but it might be a decent warning to get tested to make sure

      just show “better get checked by a professional” as the only result. no AI needed

    • Midnight Wolf@lemmy.world · 8 months ago

      AI: “Your penis appears to be an avocado. This is normal, and you should not be concerned. However you have 3 testicles and this should be looked into.”

      You, a female: “uhhhhhh”

      • lemmyng@lemmy.ca · 8 months ago

        That’s LLM AI, but the type I’m talking about is the machine learning kind. I can envision a system that takes e.g. a sample’s test data and provides a summary, which is not far from what doctors do anyway. If you ever get a blood test’s results explained to you, it’s “this value is high, which would be concerning except that this other value is not high, so you’re probably fine regarding X. However, I notice that this other value is low, and this can be an indicator of Y. I’m going to request a follow-up test regarding that.”

        Yes, I would trust an AI to give me that explanation, because those are very strict parameters to work with, and the input comes from a trusted source (lab results and medical training data) and not “Bob’s shrimping and hula-hoop dancing blog”.
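        The kind of strictly parameterized summary described above is closer to a rule-based expert system than a generative model. A minimal sketch in Python, where the analyte names, reference ranges, and cross-check rules are entirely hypothetical illustrations (not medical guidance):

```python
# Toy rule-based interpreter for lab results, in the spirit of the
# comment above. Analytes, ranges, and rules are made up for illustration.

REFERENCE_RANGES = {
    "marker_a": (10.0, 50.0),   # (low, high) bounds, hypothetical units
    "marker_b": (0.5, 4.0),
    "marker_c": (100.0, 200.0),
}

def flag(name, value):
    """Classify a value as 'low', 'normal', or 'high' against its range."""
    low, high = REFERENCE_RANGES[name]
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "normal"

def summarize(results):
    """Turn a dict of analyte -> value into human-readable notes."""
    flags = {name: flag(name, value) for name, value in results.items()}
    notes = []
    # Cross-check rule: marker_a alone is only concerning if
    # marker_b is also elevated (mirrors the "fine regarding X" logic).
    if flags["marker_a"] == "high":
        if flags["marker_b"] == "high":
            notes.append("marker_a and marker_b both high: follow-up recommended.")
        else:
            notes.append("marker_a high but marker_b normal: likely fine.")
    # A low value can itself be an indicator (the "Y" case).
    if flags["marker_c"] == "low":
        notes.append("marker_c low: possible indicator of Y; request follow-up test.")
    if not notes:
        notes.append("All values within reference ranges.")
    return notes

print(summarize({"marker_a": 60.0, "marker_b": 1.2, "marker_c": 150.0}))
# → ['marker_a high but marker_b normal: likely fine.']
```

        Every path through such a system is auditable, which is exactly why it is easier to trust than a free-form generated explanation.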

  • Randomgal@lemmy.ca · 8 months ago

    Honestly? I’ve leaked pics of those voluntarily, so curiously I’d be a-okay with this one.

  • GreenKnight23@lemmy.world · 8 months ago

    no, but not for why you think.

    because it’s far more effective to scan samples from you than whole organs.

  • solrize@lemmy.world · 8 months ago

    I dunno, maybe the diagnosis is fine, but the companies that run it are sure to save copies. I can just see the data breaches now: “5 million stolen dick pics uploaded to dark web”. Complete with labelling of which ones are diseased, though, so that’s a help.

    • Midnight Wolf@lemmy.world · 8 months ago

      If we could filter by length, girth, un/cut, ball size, hair amount, and (most importantly) diagnosis… I’m not saying I would put that tool together, but as a user…

    • Jesus@lemmy.world · 8 months ago

      Twitter is mostly verified dicks these days. That might be the better platform.