• kingthrillgore@lemmy.ml · 1 year ago

    Generative AI has really become a poison. It’ll be worse once the generative AI is trained on its own output.

    • Simon@lemmy.dbzer0.com · 1 year ago

      Here’s my prediction. Over the next couple of decades the internet is going to be so saturated with fake shit and fake people that it’ll become impossible to use effectively, like cable television. After this goes on for a while, someone is going to create a fast private internet, a whole new protocol, and it’s going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.

      The new ‘humans only’ internet will be the new streaming and eventually it’ll take over the web (until they eventually figure out how to ruin that too). In the meantime, they’ll continue to exploit the infested hellscape internet because everybody’s grandma and grampa are still on it.

        • rottingleaf@lemmy.zip · 1 year ago

          Yup. I have my own prediction - that humanity will finally understand the wisdom of the PGP web of trust and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes; it’s very intuitive now.

          That would be cool. No bots. Unfortunately, corps, govs and other such mythical demons really want to be able to automate influencing public opinion. So this won’t happen until the potential of the Web for such influence is sucked dry. That is, until nobody in their right mind would use it.
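The web-of-trust idea above can be sketched in a few lines. This is a toy illustration, not real PGP: the key names and the signature graph are made up, and real implementations (GnuPG’s trust model) are far more nuanced about signature validity and trust levels.

```python
from collections import deque

def is_trusted(signatures, my_key, their_key, max_hops=3):
    """BFS over the signature graph: trust a key if a chain of
    signatures no longer than max_hops connects it to ours."""
    seen = {my_key}
    queue = deque([(my_key, 0)])
    while queue:
        key, hops = queue.popleft()
        if key == their_key:
            return True
        if hops == max_hops:
            continue  # don't extend chains past the hop limit
        for signed in signatures.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, hops + 1))
    return False

# Toy graph: alice signed bob's key (say, after scanning his QR code),
# and bob signed carol's.
sigs = {"alice": ["bob"], "bob": ["carol"]}
```

Under this sketch, alice trusts carol through bob, but a stranger with no signature chain stays untrusted - which is exactly what keeps bots out: nobody vouches for them.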

      • Baylahoo@sh.itjust.works · 1 year ago

        That sounds very reasonable as a prediction. I could see it being a pretty interesting Black Mirror episode. I’d love it to stay fiction, though.

  • istanbullu@lemmy.ml · 1 year ago

    You don’t get to blame AI for this. Reddit was already overrun by corporate and US gov trolls long before AI.

    • TheFriar@lemm.ee · 1 year ago

      “New poison has been added to arsenic. Should you stop drinking it? Subscribe to find out.”

    • Rinox@feddit.it · 1 year ago

      The problem is the magnitude, but yeah, even before 2020 Google was becoming shit, overrun by blogspam trying to sell you stuff with articles clearly written by machines. The only difference is that it was easier to spot and harder to do. But they did it anyway.

      • rottingleaf@lemmy.zip · 1 year ago

        These things became shit around 2009. Or immediately after becoming popular enough to crowd out LiveJournal and similar platforms (the original Web 2.0, or maybe Web 1.9, one should call them).

        What does this have to do with search engines? Well, back when they existed alongside web directories and other more social, manual ways of finding information, you could just switch to those whenever a search engine got too aggressive about promoting some results and hiding what it didn’t want you to see. You could compare one source against another and notice when Google was working badly. The end result wasn’t dictated to you.

        Now that whatever Google returns has become the criterion for what you’re supposed to associate with a given query - and the same goes for social media - the outcome was decided.

  • merthyr1831@lemmy.world · 1 year ago

    This shit isn’t new; companies have been exploiting Reddit for years to push products as if they’re real people. The “put reddit after your search to fix it!!!” trick was a massive boon for these shady advertisers, who no doubt benefitted from random people assuming product placements were genuine.

  • laverabe@lemmy.world · 1 year ago

    I just consider any comment after June 2023 to be compromised. Anyone who stayed after that date either doesn’t have a clue, or is sponsored content.

  • Mastengwe@lemm.ee · 1 year ago

    AI Is Poisoning Reddit to Promote Products and Game Google With ‘Parasite SEO’

    FTFY

    • Aabbcc@lemm.ee · 1 year ago

      AI is a tool. It can be used for good and it can be used for poison. Just because you see it being used for poison more often doesn’t mean you should be against AI. Maybe lay the blame on the people using it for poison.

  • dumples@kbin.social · 1 year ago

    The only reason reddit was valuable was because it was from real people who weren’t paid off. Well that’s ruined now.

    • eronth@lemmy.world · 1 year ago

      Yeah, I’ve noticed that a bit lately anyways. Maybe I’m looking up stuff that has less of a community on Reddit, and thus has less discussion, but I have absolutely noticed some comments have a single product name-drop with little clarity for why they liked the product. It starts to feel like they’re just ads (generated or otherwise) meant to trick you into thinking Reddit users are liking the product.

      AI is just going to make it worse, and stop Reddit from being a good go-to for actual reviews and discussion of pros and cons.

      • Jordan117@lemmy.world · 1 year ago

        There’s an excellent chance that even some of the “authentic” discussions you see are word-for-word reposts of old posts and comments, created by bots to build up karma in order to be sold to spammers and influence peddlers down the line.

      • dumples@kbin.social · 1 year ago

        Exactly. Usually there’s a conversation or a quick consensus on one or two things. But I’ve been seeing lots of single answers or just ads

      • paraphrand@lemmy.world · 1 year ago

        The first obvious wave of this stuff, to me, was the video-conversion ripoff software and similar. They had people scouting for questions their software could plausibly answer. Sometimes they posed as users; other times the info was more neutral, but it was still clearly self-promotion, given what got recommended.

    • glimse@lemmy.world · 1 year ago

      I wanted to figure out what game hosting sites were good and Google pointed me to reddit…every thread was full of boilerplate ads for different sites. The comments were the most obvious, marketing-approved sentences I’ve ever seen

      • dumples@kbin.social · 1 year ago

        Whatever I look for these days, everything I can find online seems to be advertisements or paid reviews (also advertisements). Businesses are terrified of an open, honest conversation about what is good and what is not.

        • sudo42@lemmy.world · 1 year ago

          I so don’t understand how to run a business.

          • Spend $Billions shoving advertising down everyone’s throats? Absolutely!

          • Just make a good product and provide good customer support? It will never work!

          • Nikelui@piefed.social · 1 year ago

            Option 1 is easy and any idiot can throw money at it to solve the problem. Option 2 requires talented people and real effort.

        • glimse@lemmy.world · 1 year ago

          If you’re terrified of honest conversations, your product is probably shit.

          Marques Brownlee had a video recently on the question “do bad reviews kill products?” that highlights the issue well.

          • dumples@kbin.social · 1 year ago

            Exactly. Every company is terrified of honest conversation since it makes putting out shit harder.

  • Drinvictus@discuss.tchncs.de · 1 year ago

    If only people moved to an open and federated platform. I don’t have to say that I hate Reddit, since I’m here, but still: whenever I Google a problem, Reddit answers are among the most useful results. Especially for anything local.

    • circuscritic@lemmy.ca · 1 year ago

      This isn’t a problem that can be solved with a technical solution that isn’t itself extremely dystopian in nature.

      This is a problem that requires legislation and criminal liability, or genuine punitive civil liability that pierces the corporate legal shields.

      Don’t hold your breath for a serious solution to present itself.

      • paraphrand@lemmy.world · 1 year ago

        Do you think legislation would also be reasonable for trolls who ban-evade and disrupt or destroy synchronous online social spaces?

        The same issue happens there. Zero repercussions, ban evasion is almost always possible, and the only foolproof solutions seem to quickly turn dystopian too.

        Ban evasion and cheating are becoming a bigger and bigger issue in online games/social spaces. And all the nerds will agree it’s impossible to fix. And many feel it’s just normal culture. But it’s not sustainable, and with AI and an ever escalating cat and mouse game, it’s going to continue to get worse.

        Can anyone suggest a solution that is on the horizon?

        • circuscritic@lemmy.ca · 1 year ago

          No, I’m a free speech absolutist when it comes to private citizens. Be they communists, Nazis, Democrats, trolls, assholes or furries, the government should have no role in regulating their speech outside of reasonable exceptions, e.g. yelling fire in a crowded theater, threats of physical violence, etc.

          My moral conviction on relative free speech absolutism ends at the articles of incorporation, or other nakedly profit driven speech e.g. market manipulation.

          So if the trolls and ban evaders are acting on behalf of a company, or for profit driven interests, their speech should be regulated. If they’re just assholes or trolls, that’s a problem for the website and mod teams.

  • IninewCrow@lemmy.ca · 1 year ago

    Doesn’t mean that the fediverse is immune.

    News stories and narratives are still fought over by actors on all sides and sometimes by entities that might be bots. And there are a lot of auto-generating content bots that post stuff or repost old content from other sites like Reddit.

    • AggressivelyPassive@feddit.de · 1 year ago

      Especially since being immune to censorship is kind of the point of the fediverse.

      If you’re even a tiny bit smart about it, you can start hundreds of sock puppet instances and flood other instances with bullshit.

      • old_machine_breaking_apart@lemmy.dbzer0.com · 1 year ago

        Can’t some instances make some sort of agreement and have a whitelist of instances to not block? People would need to register to add their instances to the list, and some common measures would be applied to restrict someone from registering several instances at once, and banning people who misuse the system.

        That wouldn’t solve the problem, but perhaps would make things more manageable.

        • AggressivelyPassive@feddit.de · 1 year ago

          You can’t block people. How would you know who registered the domain?

          What you’re proposing is pretty similar to the current state of email. It’s almost impossible to set up your own small mail server and have it communicate with the “mailiverse”, since everyone will just assume you’re spam. And that led to a situation where 99% of people are with one of the huge mail providers.

            • AggressivelyPassive@feddit.de · 1 year ago

              It’s extremely complicated and I don’t really see a solution.

              You’d need gigantic resources and trust in those resources to vet accounts, comments, instances. Or very in depth verification processes, which in turn would limit privacy.

              What I actually found interesting was Bluesky’s invite system. Each user got a limited number of invite links, and if a certain share of your invitees were banned, you’d be banned or flagged too. That creates a web of trust, but of course it also makes anonymous accounts impossible.
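The invite-tree mechanic described above is easy to sketch. This is a guess at the general shape, not Bluesky’s actual algorithm: the usernames, the 50% threshold, and the minimum-invite cutoff are all made up for illustration.

```python
def flag_inviters(invited_by, banned, threshold=0.5, min_invites=3):
    """Flag any inviter whose banned-invitee ratio meets the threshold.
    invited_by maps each user to the user who invited them."""
    # Invert the mapping: inviter -> list of invitees.
    invitees = {}
    for user, inviter in invited_by.items():
        invitees.setdefault(inviter, []).append(user)
    flagged = set()
    for inviter, users in invitees.items():
        if len(users) >= min_invites:  # ignore small samples
            bad = sum(1 for u in users if u in banned)
            if bad / len(users) >= threshold:
                flagged.add(inviter)
    return flagged

# Toy data: "spammer" invited three accounts, two of which got banned;
# "honest" invited three accounts, none banned.
invited_by = {"u1": "spammer", "u2": "spammer", "u3": "spammer",
              "u4": "honest", "u5": "honest", "u6": "honest"}
banned = {"u1", "u2"}
```

The appeal is that accountability propagates up the invite graph, so burner accounts cost the person who vouched for them - which is also exactly why it rules out anonymity.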

      • IndescribablySad@threads.net@sh.itjust.works · 1 year ago

        I try to avoid talking about how indefensibly terrible Lemmy’s anti-spam and anti-brigading measures are for fear of someone doing something with the information. I imagine the only thing keeping subtle disinfo and spam from completely overtaking Lemmy is how small its reach would be. Doing the same thing to Reddit is a hundred times more effective, and systemically accepted. Reddit’s admins like engagement.

        • IninewCrow@lemmy.ca · 1 year ago

          It’s an arms race and Lemmy is only a small player right now so no one really pays attention to our little corner. But as soon as we get past a certain threshold, we’ll be dealing with the same problems as well.

        • MysticKetchup@lemmy.world · 1 year ago

          I feel the same about a lot of Fediverse apps right now. They’re kind of just coasting on the fact that they’re not big enough for most spammers to care about. But they need to put in solid defenses and moderation tools before that changes.

      • FinishingDutch@lemmy.world · 1 year ago

        Probably.

        So, we complain to a regulatory body, they investigate, they tell a company to do better or, waaaay down the road, attempt to levy a fine. Which most companies happily pay, since the profits from the shady business practices tend to far outweigh the fines.

        Legal or illegal really only means something when dealing with an actual person. Can’t put a corporation in jail, sadly.

      • paraphrand@lemmy.world · 1 year ago

        Like a built in brand dashboard where brands can monitor keywords for their brand and their competitors? And then deploy their sanctioned set of accounts to reply and make strategic product recommendations?

        Sounds like something that must already exist. But it would have been killed or hampered by API changes… so now Spez has a chance to bring it in-house.

        They will just call it brand image management. And claim that there are so many negative users online that this is the only way to fight misinformation about their brand.

        Or something. It’s all so tiring.

      • MelodiousFunk@slrpnk.net · 1 year ago

        He’s got to get them from somewhere. They certainly aren’t coming from his little piggy brain.

      • Hubi@lemmy.world · 1 year ago

        Reddit is past the point of no return. He might as well speed it up a little.

  • sirspate@lemmy.ca · 1 year ago

    If the rumor is true that a Reddit/Google training deal is what got Reddit boosted in search results, this would be a direct result of Reddit’s own actions.

  • Th4tGuyII@kbin.social · 1 year ago

    It’s gross, but also inevitable. If there’s an untapped niche to make money from, somebody’s going to try it – plus if they want to waste their money on generating accounts only to have them be banned, then so be it.

    Makes me kinda thankful that this community is smaller and less likely to be targeted by this sort of crap.

    • grrgyle@slrpnk.net · 1 year ago

      What’s funny is I think it would be profitable for maybe, like, a year, before everyone starts doing it and then even normal people stop trusting reddit comments.

      It’s like pissing in a pool to sell people soap. What’s the plan once people stop using the pool?

      • Croquette@sh.itjust.works · 1 year ago

        Buy a new pool and piss in it again to sell new soaps.

        By the time that the cow is bled dry, someone is stuck holding the bag while some people made out like bandits.

        That is the stock market for you. Create no value, just wealth transfer.

        • grrgyle@slrpnk.net · 1 year ago

          Create no value, just wealth transfer.

          In this case it’s creating a kind of anti-value - harm, I guess.

          Also I bow to your superior and brazen use of mixed metaphors. You got double what I did. “Bleeding” a cow dry? It adds impact over the usual “milking” even!

          • Croquette@sh.itjust.works · 1 year ago

            Milking assumes that you don’t kill the cow, which isn’t the case here.

            Some people are specialized at being hired at startups to prop up the startup to be sold and make a quick buck.

            Then they move on to the next startup; wash, rinse, and repeat. It says a lot about the state of innovation.