• Todd Bonzalez@lemm.ee · 1 year ago

      First of all, it’s absolutely crazy to link to a 6-month-old thread just to complain that you got downvoted in it. You’re pretty clearly letting this site get under your skin if you’re still hanging onto these downvotes.

      Second, none of your 6 responses in that thread are logical, rational responses. You basically just assert that things that you find offensive enough should be illegal, and then just type in all caps at everyone who explains to you that this isn’t good logic.

      The only way we can consider child porn prohibition constitutional is to interpret it as a protection of victims. Since both the production and distribution of child porn hurt the children forced into it, we ban it outright, not because it is obscene, but because it does real damage. This fits the logic of many other forms of non-protected speech, such as the classic “shouting ‘fire’ in a crowded theatre” example, where those hurt in the inevitable panic are victims.

      Expanding the definition of child porn to include fully fictitious depictions, such as lolicon or AI porn, betrays this logic because there are no actual victims. This prohibition is rooted entirely in the perceived obscenity of the material, which is completely unconstitutional. We should never ban something because it is offensive, we should only ban it when it does real harm to actual victims.

      I would argue that rape and snuff films should be illegal for the same reason.

      The reason people disagree with you so strongly isn’t because they think AI generated pedo content is “art” in the sense that we appreciate it and defend it. We just strongly oppose your insistence that we should enforce obscenity laws. This logic is the same logic used as a cudgel against many other issues, including LGBTQ rights, as it basically argues that sexually disagreeable ideas should be treated as a criminal issue.

      I think we all agree that AI pedo content is gross, and the people who make it and consume it are sick. But nobody is with you on the idea that drawings and computer renderings should land anyone in prison.

    • SeattleRain@lemmy.world · 1 year ago

      Well yeah. Just because something makes you really uncomfortable doesn’t make it a crime. A crime has a victim.

      Also, the vast majority of children are victimized because of the US’ culture of authoritarianism and religious fundamentalism. That’s why children are far and away most often victimized by a relative or in a church. But y’all ain’t ready to have that conversation.

      • sugartits@lemmy.world · 1 year ago

        That thing over there being wrong doesn’t mean we can’t discuss this thing over here also being wrong.

        So perhaps pipe down with your dumb whataboutism.

        • SeattleRain@lemmy.world · 1 year ago

          It’s not whataboutism. He’s being prosecuted over the idea that he’s hurting children, all while law enforcement refuses to truly prosecute actual institutions victimizing children, and is often colluding with traffickers. For instance, LE throughout the country were well aware of the scale of the Catholic church’s crimes for generations.

          How is this whataboutism?

          • sugartits@lemmy.world · 1 year ago

            Because it’s two different things.

            We should absolutely go after the Catholic church for the crimes committed.

            But here we are talking about the creation of child porn.

            If you cannot understand this very simple premise, then we have nothing else to discuss.

            • SeattleRain@lemmy.world · 1 year ago

              They’re not two different things. They’re both supposedly acts of pedophilia, except one would take actual courage to prosecute (churches), and the other, which doesn’t have any actual victims, is easy and is a PR win because certain people find it really icky.

          • DarkThoughts@fedia.io · 1 year ago

            Just to be clear here, he’s not actually being prosecuted for generating such imagery, like the headline implies.

  • Greg Clarke@lemmy.ca · 1 year ago

    This is tough; the goal should be to reduce child abuse. It’s unknown whether AI-generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don’t abuse children. Like everything else with AI, we won’t know the real impact for many years.

      • DarkThoughts@fedia.io · 1 year ago

        I suggest you actually download Stable Diffusion and try for yourself, because it’s clear that you don’t have any clue what you’re talking about. You can already make tiny people, shaved genitals, flat chests, child-like faces, etc. It’s all already there. Literally no need for any LoRAs or very specifically trained models.

        • LadyAutumn@lemmy.blahaj.zone · 1 year ago

          It should be illegal either way, to be clear. But you think they’re not training models on CSAM? You’re trusting in the morality/ethics of the people creating AI-generated child pornography?

          • Greg Clarke@lemmy.ca · 1 year ago

            The use of CSAM in training generative AI models is an issue no matter how these models are being used.

            • L_Acacia@lemmy.one · 1 year ago

              The training doesn’t use CSAM; there’s a 0% chance big tech would use that in their datasets. The models are somewhat able to link concepts like “red” and “car”, even if they have never seen a red car before.

              • AdrianTheFrog@lemmy.world · 1 year ago

                Well, with models like SD at least, the datasets are large enough and the employees are few enough that it is impossible to have a human filter every image. They scrape them from the web and try to filter with AI, but there is still a chance of bad images getting through. This is why most companies install filters after the model as well as in the training process.
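The filter-during-training / filter-after-the-model flow described above can be sketched roughly like this. This is a toy illustration, not any company’s actual pipeline: `score_image` stands in for whatever safety classifier a given pipeline uses, and the threshold value is invented.

```python
# Toy sketch of classifier-based dataset filtering: keep only entries
# whose safety-classifier risk score falls below a chosen threshold.
# `score_image` is a stand-in for a real classifier; here it just reads
# a precomputed score so the control flow is visible.

def filter_dataset(images, score_image, threshold=0.1):
    """Split entries into (kept, dropped) based on a per-image risk score."""
    kept, dropped = [], []
    for img in images:
        if score_image(img) < threshold:
            kept.append(img)
        else:
            dropped.append(img)
    return kept, dropped

# Example with dummy scores attached to fake records:
dataset = [{"url": "a.jpg", "risk": 0.02},
           {"url": "b.jpg", "risk": 0.85},
           {"url": "c.jpg", "risk": 0.05}]
kept, dropped = filter_dataset(dataset, lambda r: r["risk"])
print([r["url"] for r in kept])  # ['a.jpg', 'c.jpg']
```

The point the comment makes survives the sketch: any classifier has a nonzero miss rate, so at web scale some bad images slip into `kept`, which is why a second filter is bolted on after generation as well.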

                • DarkThoughts@fedia.io · 1 year ago

                  You make it sound like it is so easy to even find such content on the open web. The point is, they do not need to be trained on such material. They are trained on regular kids, so they know their sizes, faces, etc. They’re trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don’t need to specifically train a model on nude children to generate nude children.

  • Darkard@lemmy.world · 1 year ago

    And the Stable Diffusion team gets no backlash from this for allowing it in the first place?

    Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
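For a hosted service, prompt-level flagging amounts to screening incoming text before the request ever reaches the model. A minimal sketch of the idea, with obviously placeholder patterns (real systems use trained moderation classifiers, not hand-written keyword lists, and as the replies note this only applies to hosted services, not offline use):

```python
import re

# Placeholder patterns for illustration only; a production system would
# run a moderation classifier over the prompt, not a keyword list.
DISALLOWED_PATTERNS = [r"\bforbidden_term_a\b", r"\bforbidden_term_b\b"]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked and flagged for review."""
    lowered = prompt.lower()
    return any(re.search(pat, lowered) for pat in DISALLOWED_PATTERNS)

print(screen_prompt("forbidden_term_a in an alley"))   # True
print(screen_prompt("a watercolor of a mountain lake"))  # False
```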

    • macniel@feddit.de · 1 year ago

      You can run the SD model offline, so on what service would that user be flagged?

    • yukijoou@lemmy.blahaj.zone · 1 year ago

      my main question is: how much csam was fed into the model for training, so that it could recreate more of it?

      i think it’d be worth investigating the training data used for the model

      • Ragdoll X@lemmy.world · 1 year ago

        This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren’t actually available in the dataset because they had already been removed from the internet.
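That kind of audit boils down to hashing each dataset entry and checking it against a list of known-bad hashes supplied by clearinghouses. The real work used perceptual hashes (e.g. PhotoDNA) so near-duplicates also match; the sketch below uses plain SHA-256, which only catches exact copies, and the hash values are invented:

```python
import hashlib

# Invented blocklist entry for illustration; real audits match against
# hash lists distributed by clearinghouses, typically perceptual hashes
# rather than exact cryptographic ones.
known_bad = {hashlib.sha256(b"bad-image-bytes").hexdigest()}

def scan(entries):
    """Return indices of entries whose SHA-256 appears on the blocklist."""
    return [i for i, data in enumerate(entries)
            if hashlib.sha256(data).hexdigest() in known_bad]

dataset = [b"cat-photo", b"bad-image-bytes", b"landscape"]
print(scan(dataset))  # [1]
```

Note this matches the comment’s caveat: a URL-and-hash dataset like LAION can contain hashes of flagged images even when the images themselves are no longer retrievable.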

        You could still make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.

    • PirateJesus@lemmy.today · 1 year ago

      Stable Diffusion has been distancing themselves from this. The model that allows for this was leaked from a different company.

    • DarkThoughts@fedia.io · 1 year ago

      Because what prompts people enter on their own computer isn’t their responsibility? Should pencil makers flag people writing bad words?

    • catloaf@lemm.ee · 1 year ago

      Some places do lock up spray paint due to its use in graffiti, so that’s not without precedent.

      • Soggy@lemmy.world · 1 year ago

        They lock it up because it’s frequently stolen. (Because of its use in graffiti, but still.)

    • PirateJesus@lemmy.today · 1 year ago

      Asked whether more funding will be provided for the anti-paint enforcement divisions: it’s such a big backlog, we’d rather just wait for somebody to piss off a politician before we focus our resources.

    • cley_faye@lemmy.world · 1 year ago

      I’d usually agree with you, but it seems he sent them to an actual minor for “reasons”.

  • horncorn@lemmynsfw.com · 1 year ago

    Article title is a bit misleading. Just glancing through, I see he texted at least one minor in regards to this and distributed those generated pics in a few places. Putting it all together, yeah, the arrest is kind of a no-brainer. The ethics of generating CSAM are pretty much the same as for drawing it. Not much we can do about it aside from education.

    • ricecake@sh.itjust.works · 1 year ago

      Legally, a sufficiently detailed image depicting csam is csam, regardless of how it was produced. Sharing it is why he got caught, inevitably, but it’s still illegal even if he never brought a minor into it.

    • retrospectology@lemmy.world · edited · 1 year ago

      Lemmy really needs to stop justifying CP. We can absolutely do more than “eDuCaTiOn”. AI is created by humans, and the training data is gathered by humans; it needs regulation like any other industry.

      It’s absolutely insane to me how laissez-faire some people are about AI; it’s like a cult.

      • msage@programming.dev · 1 year ago

        While I agree with your attitude, the whole ‘laissez-faire’ thing is probably a misunderstanding:

        There is nothing we can do to stop the AI.

        Nothing.

        The genie is out of the bottle, the Pandora’s box has been opened, everything is out and it won’t ever return. The world will never be the same, and it’s irrelevant what people think.

        That’s why we need to better understand the post-AI world we created, and figure out what to do now.

        Also, to hell with CP. (feels weird to use the word ‘fuck’ here)

        • retrospectology@lemmy.world · edited · 1 year ago

          That’s not the question. The question is not “can we stop AI entirely”, it’s about regulating its development, and yes, we can make efforts to do that.

          This attitude of “it’s inevitable, can’t do anything about it” is eerily similar logic to what is used in climate denial and other right-wing efforts. It’s a really poor attitude to have, especially about something as consequential as AI.

          We have the best opportunity right now to create rules about its uses and development. The answer is not “do nothing” as if it’s some force of nature, as opposed to a tool created by humans.

          • GBU_28@lemm.ee · edited · 1 year ago

            Dude, the amount of open source, untrackable, distributed AI models out there is off the charts. This isn’t just about the models offered by subscription from the big players.

            • retrospectology@lemmy.world · 1 year ago

              This is still one of the weaker arguments. There is a lot of malware out there too; people are still prosecuted when they’re caught developing and distributing it. We don’t just throw up our hands and pretend there’s nothing that can be done.

              Like, yeah, some pedophile who also happens to be tech savvy might build his own AI model to make CP; that’s not some self-evident argument against attempting to stop them.

              • GBU_28@lemm.ee · edited · 1 year ago

                No, like, the tools to do these things are common and readily available. It’s not malware, it’s generalized AI tooling, completely intertwined with non-image AI work.

                Pandora’s box is wide open. All of this work can be done trivially, completely offline, with a basic PC. Anyone motivated can be offline and up and running in a weekend.

                You’re asking to outlaw something like a spreadsheet.

                You download a general-purpose image AI model, then train and prompt it completely offline.

          • L_Acacia@lemmy.one · edited · 1 year ago

            The models used are not trained on CSAM. The model weights are distributed freely and anybody can train a LoRA on their own computer. It’s already too late to ban open-weight models.

          • msage@programming.dev · 1 year ago

            I hear you, and I don’t necessarily disagree with you, I just know that’s not how anything works.

            Regulations work for big companies, but there isn’t a big company behind this specific case. And those small-time users have run away and you can’t stop them.

            It’s like trying to regulate cameras to not store specific images. Like, I get the sentiment, but sorry, no. It’s not that I would not like that, it’s just not possible.

            • retrospectology@lemmy.world · 1 year ago

              This argument could be applied to anything, though. A lot of people get away with murder; we should still try to do what we can to stop it from happening.

              You can’t sit in every car and force people to wear a seatbelt, we still have seatbelt laws and regulations for manufacturers.

              • msage@programming.dev · 1 year ago

                Physical things are much easier to regulate than software, much less serverless.

                We already regulate certain images, and it matters very little.

                The bigger payoff will be from educating the public and accepting that we can’t win every war.

                • retrospectology@lemmy.world · edited · 1 year ago

                  So accept defeat from the start? That’s really just a non-starter. AI models run on hardware, they are developed by specific people, their contents are distributed by specific individuals, and code bases are hosted on hardware and on specific outlets.

                  It really does sound like you’re just trying to make excuses to avoid regulation, not that you genuinely have a good reason to think it’s not possible to try.

      • Autonomous User@lemmy.world · 1 year ago

        You will never enslave us with anti-libre software, malware. You will never hijack our computing. Lemmy really needs to stop justifying subjugation.

        • retrospectology@lemmy.world · edited · 1 year ago

          The fuck are you talking about? No one’s “enslaving” you because they’re trying to stop you from generating child porn.

          Fucking libertarians dude.

      • Autonomous User@lemmy.world · edited · 1 year ago

        One of two classic excuses, virtue signalling to hijack control of our devices, our computing, an attack on libre software (they don’t care about CP). Next, they’ll be banning more math, encryption, again.

        It says gullible at the start of this thread, scroll up and see.

      • DarkThoughts@fedia.io · 1 year ago

        You don’t need CSAM training data to create CSAM images. If your model knows what children look like, and what naked human bodies look like, then it can create naked children. That’s simply how generative models like this work, and it has absolutely nothing to do with models specifically trained for CSAM using actual CSAM material.

        So while I disagree with him in that lack of education is the cause of CSAM or pedophilia… I’d say it could help with the general hysteria about generative models, like the ones coming from you, who just let their emotions run wild when these topics arise. You people need to understand that the goal should be the protection of potential victims, not the punishment of victimless thought crimes.

    • Frozengyro@lemmy.world · 1 year ago

      It’s sickening to know there are bastards out there who will get away with it since they are only creating it.

      • NeoNachtwaechter@lemmy.world · 1 year ago

        I’m not sure. Let us assume that you generate it on your own PC at home (not using a public service) and don’t brag about it and never give it to anybody - what harm is done?

        • Frozengyro@lemmy.world · 1 year ago

          Even if the AI didn’t train itself on actual CSAM, it is something that feels inherently wrong. Your mind is not right if you think that’s acceptable, IMO.

          • DarkThoughts@fedia.io · 1 year ago

            Laws shouldn’t be about feelings, though, and we shouldn’t prosecute people for victimless thought crimes. How often have you thought something violent when someone really pissed you off? Should you have been prosecuted for that thought too?

              • DarkThoughts@fedia.io · 1 year ago

                Who are the victims of someone generating such images privately, then? It’s on the same level as all the various fan-fiction shit that was created manually over the past decades.

                And do we apply this to other depictions of criminalized things too? Would we ban the depiction of violence & sexual violence on TV, in books, and in video games too?

        • GBU_28@lemm.ee · 1 year ago

          Society is not ok with the idea of someone cranking to CSAM, then just walking around town. It gives people wolf-in-sheep’s-clothing vibes.

          So the notion of there being “ok” CSAM-style AI content is a non-starter for a huge fraction of people, because it still suggests appeasing a predator.

          I’m definitely one of those people who simply can’t accept any version of it.

    • Ricky Rigatoni@lemm.ee · 1 year ago

      You can get away with a lot of heinous crimes by simply not telling people and not sharing the results.

  • Deceptichum@sh.itjust.works · 1 year ago

    What an oddly written article.

    “Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”

    They make it sound like the prompts are important and/or more important than the 13,000 images…

    • ricecake@sh.itjust.works · 1 year ago

      In many ways they are. The image generated from a prompt isn’t unique, and is actually semi-random; it’s not entirely in the user’s control. The person could argue “I described what I like, but I wasn’t asking it for children, and I didn’t think they were fake images of children”, and based purely on the image it could be difficult to argue that the image is not only “child-like” but actually depicts a child.

      The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.
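Mechanically, a negative prompt slots into classifier-free guidance: the negative text embedding takes the place of the usual empty-string unconditional embedding, so each denoising step is steered away from whatever it describes. Roughly:

```latex
\hat{\epsilon}_t = \epsilon_\theta(x_t, c_{\text{neg}})
  + s \left[ \epsilon_\theta(x_t, c_{\text{prompt}}) - \epsilon_\theta(x_t, c_{\text{neg}}) \right]
```

where $s$ is the guidance scale: the update direction points from the negative-prompt prediction toward the positive-prompt prediction, which is why a logged prompt/negative-prompt pair documents intent so directly.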

      • PirateJesus@lemmy.today · 1 year ago

        And also it’s an AI.

        13k images before AI involved a human with Photoshop or a child doing fucked up shit.

        13k images after AI is just forgetting to turn off the CSAM auto-generate button.

  • Ibaudia@lemmy.world · 1 year ago

    Isn’t there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I’m at work.

  • peanuts4life@lemmy.blahaj.zone · 1 year ago

    It’s worth mentioning that in this instance the guy did send porn to a minor. This isn’t exactly a cut-and-dried “guy used Stable Diffusion wrong” case. He was distributing it and grooming a kid.

    The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

    For example, websites like NovelAI make a business out of providing pornographic, anime-style image generation. The models they use are deliberately tuned to provide abstract, “artistic” styles, but they can generate semi-realistic images.

    Now, let’s say a criminal group uses NovelAI to produce CSAM of real people via the inpainting tools. Let’s say the FBI casts a wide net and begins surveillance of NovelAI’s userbase.

    Is every person who goes on there and types “loli” or “Anya from Spy x Family, realistic, NSFW” (that’s an underage character) going to get a letter in the mail from the FBI? I feel like it’s within the realm of possibility. What about “teen girls gone wild, NSFW”? Or “young man, no facial or body hair, naked, NSFW”?

    This is NOT a good scenario, imo. The systems used to produce harmful images are the same systems used to produce benign or borderline images. It’s a dangerous mix, and it throws the whole enterprise into question.

    • PirateJesus@lemmy.today · 1 year ago

      The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

      The PROTECT Act of 2003 means that any artistic depiction of CSAM is illegal. The guidance is pretty clear: the FBI is gonna raid your house… eventually. We still haven’t properly funded the anti-CSAM departments.

    • ricecake@sh.itjust.works · 1 year ago

      The major concern to me, is that there isn’t really any guidance from the FBI on what you can and can’t do, which may lead to some big issues.

      https://www.ic3.gov/Media/Y2024/PSA240329
      https://www.justice.gov/criminal/criminal-ceos/citizens-guide-us-federal-law-child-pornography

      They’ve actually issued warnings and guidance, and the law itself is pretty concise regarding what’s allowed.

      (8) “child pornography” means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where-

      (A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;

      (B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or

      (C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.

      (11) the term “indistinguishable” used with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged in sexually explicit conduct. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.

      https://uscode.house.gov/view.xhtml?hl=false&edition=prelim&req=granuleid%3AUSC-prelim-title18-section2256&f=treesort&num=0

      If you’re going to be doing grey-area things, you should do more than the five minutes of searching I did to find those, honestly.

      It was basically born out of a Supreme Court case in the early 2000s regarding an earlier version of the law that went much further and banned anything that “appeared to be” or “was presented as” sexual content involving minors, regardless of context, and could have plausibly been used against young-looking adult models, artistically significant paintings, or things like Romeo and Juliet, which is neither explicit nor vulgar but could be presented as involving child sexual activity. (Juliet’s 14, and it’s clearly labeled as a love story.)
      After the relevant provisions were struck down, a new law was passed that factored in the justices’ rationale and commentary about what would be acceptable, and gave us our current system of “it has to have some redeeming value, or not involve actual children and plausibly not look like it involves actual children”.

    • retrieval4558@mander.xyz · 1 year ago

      Is every person who goes on there and types, “Loli” or “Anya from spy x family, realistic, NSFW” (that’s an underaged character) going to get a letter in the mail from the FBI?

      I’ll throw that baby out with the bathwater to be honest.

      • Duamerthrax@lemmy.world · 1 year ago

        Simulated crimes aren’t crimes. Would you arrest every couple that finds healthy ways to simulate rape fetishes? Would you arrest every person who watches Fast and the Furious or The Godfather?

        If no one is being hurt, if no real CSAM is being fed into the model, and if no pornographic images are being sent to minors, it shouldn’t be a crime. Just because it makes you uncomfortable doesn’t make it immoral.

        • helpImTrappedOnline@lemmy.world · 1 year ago

          Simulated crimes aren’t crimes.

          If they were, anyone who’s played games is fucked. I’m confident everyone who has played went on a total rampage, murdering the townsfolk, pillaging their houses and blowing everything up… in Minecraft.

        • Maggoty@lemmy.world · 1 year ago

          They would, though. We know they would, because conservatives already did the whole “laws about how you can have sex in private” thing.

        • gardylou@lemmy.world · edited · 1 year ago

          No, it’s immoral because they are sexually gratifying themselves with pictures that look like children. Sexually desiring children or wanting to see them abused is immoral, full stop.

          • Maggoty@lemmy.world · 1 year ago

            Nobody is arguing that it’s moral. That’s not the line for government intervention. If it was then the entire private banking system would be in prison.

            • Meansalladknifehands@lemm.ee · 1 year ago

              For now. If you read the article, it states that he shared the pictures to form like-minded groups where they got emboldened and could support each other and legitimize/normalize their perverted thoughts. How about no, thanks.

              • Duamerthrax@lemmy.world
                link
                fedilink
                English
                arrow-up
                0
                ·
                1 year ago

                Maybe you should focus your energy on normalized things that actually affect kids, like banning full-contact sports that cause CTE.

                • Meansalladknifehands@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  What do you mean, focus my energy? How much energy do you think I spend discussing perverts? And why should I spend my time discussing contact sports? It sounds like you are deflecting.

                  Pedophiles get turned on by abusing minors; they are mentally sick. It’s not like it’s a normal sexual desire, and they will never stop at watching “victimless” images. Fuck pedophiles, they don’t deserve shit, and I hope they eat shit the rest of their lives.

              • Lowlee Kun@feddit.de
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                1 year ago

                Wrong comment chain. People weren’t talking about the criminal shithead the article is about, but about the scenario of someone using (not CSAM-trained) models to create questionable content (thus it is implied that there would be no victim). We all know that there are bad actors out there, just like there are rapists and murderers. Still, we don’t condemn true-crime lovers or rape fetishists until they commit a crime. We could do the same with pedos, but somehow we believe hating them into the shadows will stop them from doing criminal stuff?

                • Meansalladknifehands@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  And I’m using the article as an example that it doesn’t just stop at “victimless” images, because they are not fucking normal people. They are mentally sick; they are sexually turned on by the abuse of a minor, not by the minor but by abusing the minor, sexually.

                  In what world would a person like that stop at looking at images, they actively search for victims, create groups where they share and discuss abusing minors.

                  Yes dude, they are fucking dangerous, bro. Life is not fair. You wouldn’t say the same shit if someone close to you was a victim.

            • PotatoKat@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 year ago

              Real children are in the training data regardless of whether there is CSAM in the data or not (and there is a high chance there is, considering how they get their training data), so real children are involved.

              • Duamerthrax@lemmy.world
                link
                fedilink
                English
                arrow-up
                0
                ·
                1 year ago

                I’ve already stated that I do not support using images of real children in the models. Even if the images are safe/legal, it’s a violation of privacy.

        • Ookami38@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          Or, ya know, everyone who ever wanted to decapitate those stupid fucking Skyrim children. Crime requires damaged parties, and with this (idealized case, not the specific one in the article) there is none.

          • DarkThoughts@fedia.io
            link
            fedilink
            arrow-up
            0
            ·
            1 year ago

            Those were demon children from hell (with like 2 exceptions maybe). It was a crime by Bethesda to make them invulnerable / protected by default.

        • PirateJesus@lemmy.today
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          Simulated crimes aren’t crimes.

          Artistic CSAM is definitely a crime in the United States. PROTECT act of 2003.

          • Duamerthrax@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 year ago

            People have only gotten in trouble for that when they’re already in trouble for real CSAM. I’m not terribly interested in sticking up for actual CSAM scum.

  • PirateJesus@lemmy.today
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 year ago

    OMG. Every other post is saying they’re disgusted about the images but calling it a grey area, when he’s definitely in trouble for contacting a minor.

    Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.

    https://www.thefederalcriminalattorneys.com/possession-of-lolicon

    https://en.wikipedia.org/wiki/PROTECT_Act_of_2003

    • Clbull@lemmy.world
      cake
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      I thought cartoons/illustrations of that nature were only illegal in the UK (Coroners and Justice Act 2009) and Switzerland. TIL about the PROTECT Act.

      • ZILtoid1991@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        The thing about the PROTECT Act is that it relies on the Miller test, which has obvious holes and depends heavily on who is reviewing the material. I have heard even the UK law has holes which can be exploited.

        • Rayspekt@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          I wonder if there is significant migration happening into those countries where CSAM is legal.

          • ZILtoid1991@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 year ago

            Most people instead take a trip to a place where underage sex workers are common; one can just have an external hard drive and/or a USB stick for that material, which they hide. “An”caps are actively trying to form their own countries, partly to legalize “recordings of crimes” as they like to call them, if not outright to legalize child rape and child sex trafficking.

    • Madison420@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Yeah, that’s toothless. They decided there is no particular way to age a cartoon; the characters could be from another planet and simply seem younger while actually being older.

      It’s bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I’d rather they stay active doing that than get active actually abusing children.

      Outlaw shibari and I guarantee you’d have multiple serial killers btk-ing some unlucky souls.

      • ZILtoid1991@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        My main issue with generation is the ability of making it close enough to reality. Even with the more realistic art stuff, some outright referenced or even traced CSAM. The other issue is the lack of easy differentiation between reality and fiction, and it muddies the water. “I swear officer, I thought it was AI” would become the new “I swear officer, she said she was 18”.

        • Madison420@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          That is not an end-user issue, that’s a dev issue. You can’t train on CSAM if it isn’t available, and having done so is tacit admission of actual possession.

        • RGB3x3@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          1 year ago

          The problem with AI CSAM generation is that the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.

          So is it right to be using images of real children to train these AI? You’d be hard-pressed to find someone who thinks that’s okay.

          • I Cast Fist@programming.dev
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 year ago

            It has to somehow know what a naked minor looks like.

            Not necessarily

            You need to feed it CSAM

            You don’t. You just need lists of other things, properly tagged. If you feed an AI a bunch of clothed adults and a bunch of naked adults, it will, in theory, “understand” the difference between being clothed and naked and create any of its clothed adults, naked.

            With that initial set above, you feed it a bunch of clothed children. When you ask for a naked child, it will either produce a child head with naked adult body, or a “weird” naked child. It “understands” that adult and child are different things, that clothed and naked are different things, and tries to infer what “naked child” looks like from what it “knows”.

            So is it right to be using images of real children to train these AI?

            This is the real question and one I don’t know the answer to, because it will boil down to consent to being part of a training model, whether your own as an adult, or a child’s parent, much like how it works for stock photos and videos.

            “I consent to having my likeness used for AI training models, except for any use that involves NSFW content” - Fair enough. Good luck enforcing that.

          • deathbird@mander.xyz
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 year ago

            the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.

            First of all, not every image of a naked child is CSAM. This has actually been kind of a problem, with automated CSAM detection systems triggering false positives on non-sexual images and getting innocent people into trouble.

            But also, AI systems can blend multiple elements together. They don’t need CSAM training material to create CSAM, just the individual elements crafted into a prompt sufficient to create the image while avoiding any safeguards.

            • PotatoKat@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 year ago

              You ignored the second part of their post. Even if it didn’t use any csam is it right to use pictures of real children to generate csam? I really don’t think it is.

          • Eezyville@sh.itjust.works
            cake
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 year ago

            You make the assumption that the person generating the images also trained the AI model. You also make assumptions about how the AI was trained without knowing anything about the model.

            • RGB3x3@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              edit-2
              1 year ago

              Are there any guarantees that harmful images weren’t used in these AI models? Based on how image generation works now, it’s very likely that harmful images were used to train them.

              And if a person is using a model based on harmful training data, they should be held responsible.

              However, the AI owner/trainer has even more responsibility in perpetuating harm to children and should be prosecuted appropriately.

              • Eezyville@sh.itjust.works
                cake
                link
                fedilink
                English
                arrow-up
                0
                ·
                1 year ago

                And if a person is using a model based on harmful training data, they should be held responsible.

                I will have to disagree with you for several reasons.

                • You are still making assumptions about a system you know absolutely nothing about.
                • By your logic anything born from something that caused suffering from others (this example is AI trained on CSAM) the users of that product should be held responsible for the crime committed to create that product.
                  • Does that apply to every product/result created from human suffering or just the things you don’t like?
                  • Will you apply that logic to the prosperity of Western Nations built on the suffering of indigenous and enslaved people? Should everyone who benefit from western prosperity be held responsible for the crimes committed against those people?
                  • What about medicine? Two examples are The Tuskegee Syphilis Study and the cancer cells of Henrietta Lacks. Medicine benefited greatly from these two examples but crimes were committed against the people involved. Should every patient from a cancer program that benefited from Ms. Lacks’ cancer cells also be subject to pay compensation to her family? The doctors that used her cells without permission didn’t.
                  • Should we also talk about the advances in medicine found by Nazis who experimented on Jews and others during WW2? We used that data in our manned space program paving the way to all the benefits we get from space technology.
                • PotatoKat@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  The difference between the things you’re listing and CSAM is that those other things have actual utility outside of getting off. Were our phones made with human suffering? Probably, but phones have many more uses than making someone cum. Are all those things wrong? Yeah, but at least good came out of them beyond just giving people sexual gratification directly from the harm of others.

                • gardylou@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  LOL, that’s a lot of bullshit misdirection to defend AI child porn. Christ, can there be one social-media-like platform that just has normal fucking people?

              • aesthelete@lemmy.world
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                1 year ago

                Are there any guarantees that harmful images weren’t used in these AI models?

                Lol, highly doubt it. These AI assholes pretend that all the training data randomly fell into the model (off the back of a truck) and that they cannot possibly be held responsible for that or know anything about it because they were too busy innovating.

                There’s no guarantee that most regular porn sites don’t contain csam or other exploitative imagery and video (sex trafficking victims). There’s absolutely zero chance that there’s any kind of guarantee.

            • PotatoKat@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 year ago

              The images were created using photos of real children, even if said photos weren’t CSAM (which can’t be guaranteed). So the victims are the children used to generate CSAM.

              • dev_null@lemmy.ml
                link
                fedilink
                English
                arrow-up
                0
                ·
                1 year ago

                Sure, but isn’t the perpetrator the company that trained the model without their permission? If a doctor saves someone’s life using knowledge based on Nazi medical experiments, then surely the doctor isn’t responsible for the crimes?

              • sugar_in_your_tea@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                1 year ago

                Let’s do a thought experiment, and I’d like you to tell me at what point a victim was introduced:

                1. I legally acquire pictures of a child, fully clothed and everything
                2. I draw a picture based on those legal pictures, but the subject is nude or doing sexually explicit things
                3. I keep the picture for my own personal use and don’t distribute it

                Or with AI:

                1. I legally acquire pictures of children, fully clothed and everything
                2. I legally acquire pictures of nude adults, some doing sexually explicit things
                3. I train an AI on a mix of 1&2
                4. I generate images of nude children, some of them doing sexually explicit things
                5. I keep the pictures for my own personal use and don’t distribute any of them
                6. I distribute my model, using the right to distribute from the legal acquisition of those images

                At what point did my actions victimize someone?

                If I distributed those images and those images resemble a real person, then that real person is potentially a victim.

                I will say someone who does this is creepy and I don’t want them anywhere near children (especially mine, and yes, I have kids), but I don’t think it should be illegal, provided the source material is legal. But as soon as I distribute it, there absolutely could be a victim. Being creepy shouldn’t be a crime.

                • PotatoKat@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  I think it should be illegal to make porn of a person without their permission, regardless of whether it is shared or not. Imagine the person it is based on finding out someone is doing that. That causes mental strain on the person, just like how revenge porn doesn’t actively harm a person but causes mental strife (both the initial upload and the continued use of it). For scenario 1 it would be at step 2, when the porn of the person is made. For scenario 2 it would be a mix between steps 3 and 4.

                • PotatoKat@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  1 year ago

                  So is the car manufacturer responsible if someone drives their car into the sidewalk to kill some people?

                  Your analogy doesn’t match the premise. (Again, assuming there is no CSAM in the training data, which is unlikely.) The training data is not the problem; it is how the data is used. Using those same pictures to generate photos of medieval kids eating ice cream with their family is fine. Using them to make CSAM is not.

                  It would be more like the doctor using the nazi experiments to do some other fucked up experiments.


      • MDKAOD@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        I think the challenge with generative AI CSAM is the question of where the training data originated. There has to be some questionable data there.

        • scoobford@lemmy.zip
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          That would mean you need to enforce the law for whoever built the model. If the original creator has 100TB of cheese pizza, then they should be the one who gets arrested.

          Otherwise you’re busting random customers at a pizza shop for possession of the meth the cook smoked before his shift.

        • erwan@lemmy.ml
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          There is also the issue of determining whether a given image is real or AI-generated. If AI images were legal, prosecution would need to prove that images are real and not AI, with the risk of letting real offenders go.

          The need to ban AI CSAM is even clearer than the need to ban cartoon CSAM.

          • Madison420@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            edit-2
            1 year ago

            And in the process force non-abusers to seek their thrill with actual abuse. Good job, I’m sure the next generation of children will appreciate your prudish, factually inept effort. We’ve tried this with so much shit; prohibition doesn’t stop anything, it just creates a black market and an abusive power system to go with it.

      • zbyte64@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        Big brain PDF tells the judge it is okay because the person in the picture is now an adult.

        • surewhynotlem@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          That’s the issue though. As far as I know it hasn’t been tested in court and it’s quite possible the law is useless and has no teeth.

          With AI porn you can point to real victims whose unconsented pictures were used to train the models, and say that’s abuse. But when it’s just a drawing, who is the victim? Is it just a thought crime? Can we prosecute those?

        • arefx@lemmy.ml
          cake
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          1 year ago

          You can say pedophile… that “pdf file” stuff is so corny and childish. Hey guys lets talk about a serious topic by calling it things like “pdf files” and “Graping”. Jfc

          • RGB3x3@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            edit-2
            1 year ago

            Why do people say “graping?” I’ve never heard that.

            Please tell me it doesn’t have to do with “The Grapist” video that came out on early YouTube.

            • I Cast Fist@programming.dev
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 year ago

              Tiktok and Instagram are the main culprits, they’ll shadowban, or outright delist, any content that uses no-no words. Sex, rape, assault, drugs, die, suicide, it’s a rather big list

    • mechoman444@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Actually, it’s not illegal… I think I read something about it a little while ago. It was some recent precedent set in court or something like that. The caveat is that if it’s too realistic they can arrest you for it, which is what might have happened here.

      I can’t fully verify what I’m saying because I don’t want to Google search for something like that.

    • gardylou@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Yikes at the responses ITT. This shit should definitely be illegal, and the people that want it probably want to abuse real children too. All of you parsing arguments to make goddamn representations of sexual child abuse legal should take a long hard look in the mirror and consider whether or not you yourself need therapy.

      • dev_null@lemmy.ml
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        The discussion will never be resolved in your favour, if you shut down the discussion.

      • Maggoty@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        Sure, and then some judge starts making subjective decisions on drawn/painted art that didn’t hurt anyone and suddenly people are getting hurt.

        The justice system is supposed to protect society, not hurt people you don’t like.

  • Kedly@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 year ago

    Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy’s moderation quality is shit. I think I’m starting to figure out where I lean on the success of my experimental stay with Lemmy.

    • cum@lemmy.cafe
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      You can go to an instance that follows your views closer and start blocking instances that post low quality content to you. Lemmy is a protocol, it’s not a single community. So the moderation and post quality is going to be determined by the instance you’re on and the community you’re with.

      • Armok: God of Blood@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        This is throwing a blanket over the problem. When the mods of a news community allow bait articles to stay up because they (presumably) further their views, it should be called out as a problem.

    • FiniteBanjo@lemmy.today
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Lemmy as a whole does not have moderation. Moderators on Lemmy.today cannot moderate Lemmy.world or Lemmy.ml, they can only remove problematic posts as they come and as they see fit or block entire instances which is rare.

      If you want stricter content rules than any of the available federated instances then you’ll have to either:

      1. Use a centralized platform like Reddit but they’re going to sell you out for data profits and you’ll still have to occasionally deal with shit like “The Donald.”

      2. Start your own instance with a self hosted server and create your own code of conduct and hire moderators to enforce it.

      • Kedly@lemm.ee
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 year ago

        Yeah, I know, that’s why I’m finding Lemmy isn’t for me. This new rage bait every week is tiring and not adding anything to my life except stress. Once I started looking at who the moderators were whenever Lemmy found a new thing to rave about, I found that often there were only 1-3 actual moderators, which, fuck that. With Reddit, the shit subs were the exception; here it feels like they ALL (FEEL being a key word here) have a tendency to dive face first into rage bait

        Edit: Most of the Reddit migration happened because Reddit fucked over their moderators. A lot of us were happy with well-moderated discussions, and if we didn’t care to have moderators, we could have just stayed with Reddit after the moderators were pushed away

    • ZILtoid1991@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of them likely get an offline model, then add their collection of CSAM to it.

    • deathbird@mander.xyz
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.

      AI services usually try to block certain content from being produced, but it seems people are always finding ways to work around those safeguards.

        • fidodo@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          You can ask it to make an image of a man made of pizza. That doesn’t mean it was trained on images of that.

          • dustyData@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            1 year ago

            But it means that it was trained on people and on pizza. If it can produce CSAM, it means it had access to pictures of naked minors. Even if it wasn’t in a sexual context.

            • bitwaba@lemmy.world
              link
              fedilink
              English
              arrow-up
              0
              ·
              1 year ago

              Minors are people. It knows what clothed people of all ages look like. It also knows what naked adults look like. The whole point of AI is that it can fill in the gaps and create something it wasn’t trained on. Naked + child is just a simple equation for it to solve

        • MeanEYE@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          You can always tell when someone has no clue about AI but has read online about it.

        • herrvogel@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          1 year ago

          The whole point of those generative models is that they are very good at blending different styles and concepts together to create coherent images. They’re also really good at editing images to add or remove entire objects.

        • mightyfoolish@lemmy.world
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          1 year ago

          I think what @deathbird@mander.xyz meant was that the AI could be trained on what sex is and what children are separately. Then a user request could put those two concepts together.

          But as the replies I got show, there are multiple ways this could have been accomplished. All I know is AI needs to go to jail.

  • StaySquared@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 year ago

    I wonder if cartoonized animals in a CSAM theme are also illegal… guess I can contact my local FBI office and provide them the web addresses of such content. Let them decide what is best.

  • TheObviousSolution@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 year ago

    He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”

    I think the fact that he was promoting child sexual abuse and was communicating with children and creating communities with them to distribute the content is the most damning thing, regardless of people’s take on the matter.

    Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.

    • Saledovil@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.

      The image depicts mature women, not children.

      • BangCrash@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 year ago

        Correct. And OP’s not saying it is.

        But to place that sort of image on an article about CSAM is very poorly thought out

    • Maggoty@lemmy.world
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 year ago

      Wait do you think all Hentai is CSAM?

      And sending the images to a 15 year old crosses the line no matter how he got the images.