A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can reveal visual data that was never actually captured.

  • randon31415@lemmy.world · 2 years ago

    Think about how they reconstructed what the Egyptian Pharaohs looked like, or what a kidnap victim who was taken at age 7 would look like at age 12. Yes, it can’t make something look exactly right, but it also isn’t just randomly guessing. Of course, it can be abused by people who want juries to THINK the AI can perfectly reproduce stuff, but that is a problem with people’s knowledge of tech, not the tech itself.

    • zout@fedia.io · 2 years ago

      Unfortunately, the people with no knowledge of tech will then proceed to judge if someone is innocent or guilty.

  • General_Effort@lemmy.world · 2 years ago

    Used to be that people called it the “CSI Effect” and blamed it on television.

    Funny thing. While people worry about unjust convictions, the “AI-enhanced” video was actually offered as evidence by the defense.

  • JackbyDev@programming.dev · 2 years ago

    During Kyle Rittenhouse’s trial, the defense attorney objected to using the pinch-to-zoom feature of an iPad because it (supposedly) used AI. The judge sustained the objection, so the prosecution couldn’t zoom in on the video.

  • milkjug@lemmy.wildfyre.dev · 2 years ago

    I’d love to see the “training data” for this model, but I can already predict it will be 99.999% footage of minorities labelled ‘criminal’.

    And cops going “Aha! Even AI thinks minorities are committing all the crime”!

    • Richard@lemmy.world · 2 years ago

      Tell me you didn’t read the article without telling me you didn’t read the article

    • fidodo@lemmy.world · 2 years ago

      It’s incredibly obvious when you call the current generation of AI by its full name: generative AI. It’s creating data; that’s what it’s generating.

        • Gabu@lemmy.world · 2 years ago

          > video camera doesn’t make up video, an ai does.

          What’s that even supposed to mean? Do you even know how a camera works? What about an AI?

          • hperrin@lemmy.world · edited · 2 years ago

            Yes, I do. Cameras work by detecting light using a charge-coupled device (CCD) or an active pixel sensor (CMOS). Cameras essentially take a series of pictures, which makes a video. They can have camera or lens artifacts (like the rolling shutter effect or lens flare) or compression artifacts (like DCT blocks) depending on how they save the video stream, but they don’t make up data.

            Generative AI video upscaling works by essentially guessing (generating) what would be there if the frame were larger. I’m using “guessing” colloquially, since it doesn’t have agency to make a guess. It uses a model that has been trained on real data. What it can’t do is show you what was actually there, just its best guess using its diffusion model. It is literally making up data. Like, that’s not an analogy, it actually is making up data.
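The interpolation-vs-generation distinction above can be made concrete. A minimal sketch (hypothetical helper name, NumPy only): classical nearest-neighbor upscaling copies every output pixel from a pixel the camera actually recorded, whereas a generative upscaler samples new values from its training distribution and offers no such guarantee.

```python
import numpy as np

def nearest_neighbor_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    # Classical upscaling: each output pixel is a copy of an input
    # pixel. No new information is invented.
    rows = np.repeat(np.arange(img.shape[0]), factor)
    cols = np.repeat(np.arange(img.shape[1]), factor)
    return img[np.ix_(rows, cols)]

frame = np.array([[10, 20],
                  [30, 40]], dtype=np.uint8)
big = nearest_neighbor_upscale(frame, 2)

# Every value in the upscaled frame already existed in the original.
assert set(big.ravel()) <= set(frame.ravel())
```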

            • Gabu@lemmy.world · edited · 2 years ago

              Ok, you clearly have no fucking idea what you’re talking about. No, reading a few terms on Wikipedia doesn’t count as “knowing”.
              CMOS isn’t the only transducer for cameras - in fact, no one would start the explanation there. Generative AI doesn’t have to be based on diffusion. You’re clearly just repeating words you’ve seen used elsewhere - you are the AI.

    • TurtleJoe@lemmy.world · 2 years ago

      Everything that is labeled “AI” is made up. It’s all just statistically probable guessing, made by a machine that doesn’t know what it is doing.

    • melpomenesclevage@lemm.ee · 2 years ago

      It’s not actually worse than eyewitness testimony.

      This is not an endorsement of AI, just pointing out that truth has no place in a courtroom, and refusing to lie will get you locked in a cafe.

      Too good, not fixing it.

    • Whirling_Cloudburst@lemmy.world · edited · 2 years ago

      Unfortunately it does need pointing out. Back when I was in college, professors would repeatedly have to tell their students that real-world forensics doesn’t work like it does on NCIS. I’m not sure how much may have changed since then, but with American literacy levels being what they are, I don’t suppose things have changed that much.

        • Whirling_Cloudburst@lemmy.world · 2 years ago

          It’s certainly similar in that CSI played a role in forming unrealistic expectations in students’ minds. But rather than expecting more physical evidence to support a prosecution, the students expected magic to happen on computers and in lab work (often faster than physically possible).

          AI enhancement is not uncovering hidden visual data; it generates that information from previously existing training data and shoehorns it in. It could certainly be useful, but it is not real evidence.

    • lole@iusearchlinux.fyi · 2 years ago

      I met a student at university last week at lunch who told me he was stressed out about a homework assignment. He needed to write a report with a minimum number of words, so he pasted the text into ChatGPT and asked it how many words the text contained.

      I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be).
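For the record, a deterministic word count needs no model at all; a minimal sketch of what the editor’s built-in counter effectively does:

```python
# Deterministic word count -- no guessing, no LLM.
def word_count(text: str) -> int:
    return len(text.split())

print(word_count("This report is already over the minimum"))  # → 7
```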

      Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.

      So much for the understanding of AI in the general population.

      I’m studying at a technical university.

      • dual_sport_dork 🐧🗡️@lemmy.world · 2 years ago

        And people believe the Earth is flat, that Bigfoot and the Loch Ness Monster exist, and that reptilians are replacing the British royal family…

        People are very good at deluding themselves into all kinds of bullshit. In fact, I posit that they’re even better at it than at learning the facts or comprehending empirical reality.

    • Stopthatgirl7@lemmy.world (OP) · 2 years ago

      Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to “enhance” it, then declared it wasn’t her because her mole was gone. It got so much traction that people thought the AI-doctored pic WAS her.

      • Mirshe@lemmy.world · 2 years ago

        Don’t forget people thinking that scanlines in a news broadcast over Obama’s suit meant that Obama was a HOLOGRAM and ACTUALLY A LIZARD PERSON.

    • Altima NEO@lemmy.zip · 2 years ago

      The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers that tried using ChatGPT to write up a legal brief that was full of bullshit and didn’t even bother to verify if it was accurate.

      They don’t understand it; they only know that the results look good.

      • T156@lemmy.world · 2 years ago

        > The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it’s almighty. That’s how you wind up with those lawyers that tried using ChatGPT to write up a legal brief that was full of bullshit and didn’t even bother to verify if it was accurate.

        Especially since it gets conflated with pop culture. Someone who hears that an AI app can “enhance” an image might think it works like something out of CSI using technosmarts, rather than just making stuff up out of whole cloth.

    • douglasg14b@lemmy.world · 2 years ago

      Of course, not everyone is technology literate enough to understand how it works.

      That should be the default assumption: things should be explained so that others understand them and can make better, informed decisions.

  • AnUnusualRelic@lemmy.world · 2 years ago

    If they were going to go that way, why not make it a fully AI court? It would save so much time and money.

    Of course it wouldn’t be very just, but then regular courts aren’t either.

    • Sl00k@programming.dev · 2 years ago

      In the same vein, Bloomberg just did a great study on ChatGPT 3.5 ranking resumes: it showed an extremely noticeable bias, ranking Black names lower than average and Asian/white names far higher despite similar qualifications.

    • mojofrododojo@lemmy.world · 2 years ago

      Honestly, an open-source auditable AI Judge/Justice would be preferable to Thomas, Alito, Gorsuch and Barrett any day.

    • BreakDecks@lemmy.ml · 2 years ago

      Me, testifying to the AI judge: “Your honor I am I am I am I am I am I am I am I am I am”

      AI Judge: “You are you are you are you are you are you…”

      Me: Escapes from courthouse while the LLM is stuck in a loop