• the_post_of_tom_joad@sh.itjust.works · 2 hours ago

    The term “algorithm” in this context is just a convenient label hiding the intentional right-wing radicalization of users to push them towards pro-business policies, so can we please call this out more often?

    I’m quite tired of “algorithm” standing in for the intentions of the owners who write and maintain it.

    It was also an “algorithm” that inflated rent around the country, right?

    An algorithm, yes. Written with the intention of inflating rent.

    It’s not an accident. Algorithm my hair-hole.

  • nehal3m@sh.itjust.works · 3 hours ago

    The old thread I posted this in was deleted, but I wrote this:

    Okay, so hear me out. I have this pet theory that might explain some of the divide between genders, but also between political parties, causing paralysis that might ultimately lead to humanity’s extinction. Forgive me if I’m stating the obvious.

    I’m going to set up two axioms to arrive at an extrapolated conclusion.

    One: Human psychology tends to ascribe more weight to negative things than positive things in the short term. In the long term this generally balances out, but in the short term it’s more prudent in a biological sense to pay attention to the rustling in the bushes than the berries you might pick from them. This is known as the negativity bias.

    Two: The modern gatekeepers of social interaction, Big Tech, employ blind algorithms that attempt to steer your attention towards spending more time on their platforms. These companies are the arbiters of the content we experience daily, and what we do and don’t see is mostly at their discretion. The techniques they employ, in simple terms, are designed to provoke what they call ‘engagement’. They do this because at the end of the day FAANG have not only a financial interest but a fiduciary duty to sell advertisements at the behest of their shareholders. The more they can engage you, the more ads they can sell. They employ live A/B testing, divide people into cohorts, and poke and prod them with psychological techniques to try to glue your eyeballs to their ads.
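
    To make that concrete, here’s a minimal sketch in Python of the kind of loop I mean; every name and number is invented for illustration. It scores posts for predicted engagement and runs a live A/B test by splitting users into cohorts with different weightings.

```python
# Toy sketch of an engagement-optimized feed ranker with live A/B cohorts.
# Every name and number here is invented for illustration.

def predicted_engagement(post: dict, weights: dict) -> float:
    """Score a post by how likely it is to keep the user on the platform."""
    return (weights["outrage"] * post["outrage_score"]
            + weights["relevance"] * post["relevance_score"])

def rank_feed(posts: list[dict], user_id: int, experiments: dict) -> list[dict]:
    # Live A/B testing: bucket each user into a cohort, give each cohort a
    # different weighting, and compare the resulting engagement numbers.
    cohort = user_id % len(experiments)
    weights = experiments[cohort]
    return sorted(posts, key=lambda p: predicted_engagement(p, weights),
                  reverse=True)

experiments = {
    0: {"outrage": 0.2, "relevance": 0.8},  # control cohort
    1: {"outrage": 0.8, "relevance": 0.2},  # variant: outrage weighted up
}

posts = [
    {"id": 1, "outrage_score": 0.9, "relevance_score": 0.1},
    {"id": 2, "outrage_score": 0.1, "relevance_score": 0.9},
]

# User 43 lands in cohort 1, so the outrage-heavy post ranks first.
print([p["id"] for p in rank_feed(posts, user_id=43, experiments=experiments)])
```

    Whichever cohort spends more time on the platform sets the weights everyone gets next.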

    Extrapolated conclusion: These companies have a financial, and legally binding, interest in dividing the population against itself, obstructing politics and social interaction to the point where we might not be able to achieve any of the goals we need to reach to prevent oblivion.

    Thank you for coming to my TED Talk.

    • sp3tr4l@lemmy.zip · 53 minutes ago

      I don’t even think this is controversial in any way; in fact, I used to assume this was just common knowledge after Cambridge Analytica…

      I deleted, as in permanently, totally deleted, my FB presence when that came out… but everyone else I explained this to, basically what you’ve just explained, thought I was insane or overreacting and paranoid.

      It’s simple.

      Engagement, usage, and time on platform are what’s being optimized for.

      What drives these things most effectively?

      Hatred, outrage, extremely offensive and divisive things.

      … And they know that they can, through exposing people to such things, make said people more extreme and hateful and anxious and depressed.

      So… from an ‘optimize for platform usage’ standpoint… perfect! It’s a reinforcing loop!
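
      Here’s a toy simulation of that loop in Python; the constants are invented, but the shape is the point: the platform promotes whatever drove engagement, and exposure makes the user more reactive to it.

```python
# Toy simulation of the reinforcing loop: outrage drives engagement, the
# platform promotes whatever drove engagement, and repeated exposure makes
# the user more reactive. All constants are invented for illustration.

def simulate(steps: int = 5) -> None:
    outrage_share = 0.10  # fraction of the feed that is outrage bait
    reactivity = 0.10     # how strongly this user responds to outrage

    for step in range(steps):
        engagement = outrage_share * reactivity
        # Platform side: promote more of what drove engagement last round.
        outrage_share = min(1.0, outrage_share + engagement)
        # User side: repeated exposure raises reactivity (and anxiety).
        reactivity = min(1.0, reactivity + 0.2 * outrage_share)
        print(f"step {step}: outrage_share={outrage_share:.2f} "
              f"reactivity={reactivity:.2f} engagement={engagement:.3f}")

simulate()  # both quantities only ever ratchet upward
```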

      Zuckerberg stated at one point that his goal with Facebook was to be able to profile (and manipulate, but he didn’t say that part) users so well that he’d be able to predict what they’d post next.

      He really did/does just view all social interaction as a very complex problem that can be ‘solved’, like a physics question can be solved, by building a predictive model.

      They literally know that their business model is to ruin social discourse, ruin people’s mental health and their lives, and polarize society.

      It should not be surprising in any way that, well, society is now extremely polarized and mentally ill.

  • DiogenesOfMiami@lemmy.ml · 25 minutes ago

    I disagree with OP’s editorialized title.

    As an avid video gamer, I find myself constantly encountering subtle and overt bigotry in most online games I play. I will always call them out for it, no matter how much whooping it incites from kids just eating their popcorn and enjoying the fight.

    Ignoring them is how you let the Andrew Tates of the world win, because they’re certainly not taking the high road by remaining silent about their beliefs.

  • pelespirit@sh.itjust.works · 2 hours ago

    They did a study around the 2020 elections and found the following to work with trolls:

    Respond once with the facts (if you must), and then walk away. I’ve found that Lemmy doesn’t need that most of the time; just downvoting seems to work. But if you’re on the place that shall not be named, this works.

    • FundMECFSResearch@lemmy.blahaj.zone (OP) · 2 hours ago

      I wish Lemmy had a feature where you can mute all replies to a comment.

      Also, it would be very much appreciated if you could share the study; it sounds interesting.

          • pelespirit@sh.itjust.works · 1 hour ago

            Thanks, I didn’t realize that worked here. But I meant tagging a user as awesome or a troll or whatever. That way, when I kind of remember seeing the name and they seem like they’re trolling, I can tell right away whether I’ve had previous interactions with them. RES was awesome for that.

            • FundMECFSResearch@lemmy.blahaj.zone (OP) · 1 hour ago

              Ah, that makes sense. That would be nifty, because my block list these days is based on “this person said fucked-up shit, and I vaguely remember them saying more fucked-up shit a while ago, but I can’t fully remember if it’s them.”

        • FundMECFSResearch@lemmy.blahaj.zone (OP) · 2 hours ago

          Sorry, I see you replied before I edited my comment. Don’t feel obliged, as it isn’t that important, but if you have the original study handy I’d appreciate a link, because it sounds interesting.

          • pelespirit@sh.itjust.works · 2 hours ago

            That was years ago. I’m sure I saved it on Reddit, but I haven’t been back there since I switched. Sorry about that. It was a real study, though; they were trying to figure out all of the social media trolling from Cambridge Analytica and all that. It might have been even earlier.

  • iorale@lemmy.dbzer0.com · 4 hours ago

    Old man rant:

    Back in the day, forum mods punished those who engaged with the troll; the golden rule was not to feed the troll, and it was enforced.
    When moderation was taken away from users and small communities, trolls found new protection: now their bait was rewarded, and they could easily get what they craved, a reaction, and spread their hate.

    I’d say we should apply something along those lines: ignore the trolls and the propaganda, and just report them on sight.

  • AVincentInSpace@pawb.social · 3 hours ago

    …as opposed to platforms like Lemmy, where the only political ideologies you’ll find are “leftists” who, when asked what they even believe, respond with “what are you, a cop?”

  • traches@sh.itjust.works · 6 hours ago

    They don’t have to; algorithms do whatever they are designed to do. Long division is an algorithm.
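
    To be concrete, here’s the schoolbook procedure written out in Python: a toy illustration of an algorithm doing exactly what it was designed to do, nothing more.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Schoolbook long division: bring down one digit at a time."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        quotient_digits.append(str(remainder // divisor))
        remainder %= divisor
    return int("".join(quotient_digits)), remainder

print(long_division(1234, 7))  # (176, 2), since 7 * 176 + 2 == 1234
```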

    Profit motives are the issue here.

  • wizardbeard@lemmy.dbzer0.com · 6 hours ago

    Wasn’t this literally the shady research that Facebook got caught doing with Cambridge Analytica? Specifically, tweaking a user’s feed to be more negative resulted in that user posting more negative things themselves, and in more engagement overall.

    • sp3tr4l@lemmy.zip · 1 hour ago

      Yep!

      Facebook figured out how to monetize trolling.

      Over 10 years later, it’s destroyed society, but made them a lot of money.

    • TheReturnOfPEB@reddthat.com · 4 hours ago

      I wonder exactly how much of Hawaii Zuckerberg has to own before people start to question what they are getting from Facebook.

  • RememberTheApollo_@lemmy.world · 4 hours ago
    4 hours ago

    I’ve been participating in Threads (yeah, I know, I should be ashamed), and I’m unfortunately a sucker for some of the ragebait, especially political.

    Guess what Threads pushes at me: a lot of the dumbest ragebait, not people who actually want to have a conversation. My fault for being a sucker, but the algorithms work.

    Doesn’t really matter; I’m shadowbanned. I pissed off too many Republican propagandists by refuting them, so, as usual, the “report” button is their remedy.

  • ristoril_zip@lemmy.zip · 6 hours ago

    What I don’t get about this is why, in this day and age, with all the analytics tools we have, companies continue to just happily pay for simple eyeball exposure.

    The only time they seem to have any pause at all on this model is if people post screenshots of ads for their products next to posts literally praising Nazis.

    These so-called AIs (LLMs) can learn to tell the difference between positive/happy/uplifting posts, neutral posts, and angry/sad/disturbing posts. Advertisers should be asking for their products to be featured next to the first and second groups of posts.
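
    The gating itself would be trivial. A minimal sketch, with a keyword lookup standing in for a real trained classifier (every name here is invented):

```python
# Toy sketch of sentiment-gated ad placement. A real system would use a
# trained classifier; this keyword lookup is only a stand-in for the shape
# of the check. All names are invented for illustration.

NEGATIVE_CUES = {"outrage", "hate", "disgusting", "furious", "nazis"}

def sentiment(post_text: str) -> str:
    words = set(post_text.lower().split())
    return "negative" if words & NEGATIVE_CUES else "positive_or_neutral"

def eligible_for_ads(post_text: str) -> bool:
    # Only place ads next to the first two groups of posts described above.
    return sentiment(post_text) != "negative"

print(eligible_for_ads("what a lovely community garden update"))  # True
print(eligible_for_ads("I am furious and full of hate today"))    # False
```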

    People engage based on anger, sure. They click posts and reply and whatnot. But do they click the ad next to a post that pisses them off and then buy the product?

    Or is this purely a subconscious intrusion effort? Do the advertisers just want their products in front of eyeballs regardless of what’s around the ad? It seems like the answer is “no” when they’re called out. But maybe it’s “yes” if they can get away with it?