• ITGuyLevi@programming.dev · 4 months ago

    The argument could be made (and probably will be) that they promote those activities by allowing their algorithms to promote that content. It’s a dangerous precedent to set, but not unlikely given the recent rulings.

    • FlyingSpaceCow@lemmy.world · 4 months ago

      Any precedent here regardless of outcome will have significant (and dangerous) impact, as the status quo is already causing significant harm.

      For example, Meta/Facebook used to prioritize content that generates an angry-face reaction over content that gets a “like”, as it results in more engagement and revenue.

      However, the problem still exists. If you combat problematic content with a reply of your own (because you want to push back against hatred, misinformation, or disinformation), then they have even more incentive to show similar content. And they justify it by saying, “if you engaged with content, then you’ve clearly indicated that you WANT to engage with content like that”.

      The financial incentives, as they currently exist, run counter to the public good.
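      The ranking dynamic described above can be sketched in a few lines. This is a toy illustration, not any platform's actual algorithm: the reaction weights and post data are hypothetical, loosely inspired by reporting that angry reactions were once weighted above likes. The point is that a ranker scoring raw engagement cannot distinguish agreement from outrage, so replies pushing back on a post still boost it.

      ```python
      # Toy engagement-weighted feed ranker (illustrative only).
      # Weights are hypothetical, not any real platform's values.
      REACTION_WEIGHTS = {
          "like": 1,
          "angry": 5,   # anger-driven engagement weighted higher (hypothetical)
          "reply": 3,   # replies count as engagement, even critical ones
      }

      def engagement_score(reactions):
          """Sum weighted reaction counts; the score is blind to sentiment."""
          return sum(REACTION_WEIGHTS.get(kind, 0) * count
                     for kind, count in reactions.items())

      posts = [
          {"id": "calm-news", "reactions": {"like": 100}},
          {"id": "outrage-bait", "reactions": {"like": 10, "angry": 30, "reply": 20}},
      ]

      # Rank the feed by engagement: the divisive post outranks the popular one.
      feed = sorted(posts, key=lambda p: engagement_score(p["reactions"]), reverse=True)
      print([p["id"] for p in feed])  # → ['outrage-bait', 'calm-news']
      ```

      Here "outrage-bait" scores 10 + 150 + 60 = 220 against 100 for "calm-news", so every angry reply, including pushback, moves it up the feed.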

    • joel_feila@lemmy.world · 4 months ago

      Yeah, I have made that argument before. By pushing content via recommendation lists and autoplay, YouTube becomes a publisher and needs to be held accountable.

      • hybrid havoc@lemmy.world · 4 months ago

        Not how it works. Also, your use of “becomes a publisher” suggests to me that you are misinformed, as so many people are, into thinking there is some sort of publisher-vs-platform distinction in Section 230. There is not.