Not the best news in this report. We need to find ways to do more.

  • etrotta@kbin.social · 1 year ago

    It is not “in the whole fediverse”; it is out of approximately 325,000 posts analyzed over a two-day period.
    And that figure covers only known images that matched a hash (a rough sketch of the detection flow follows the quoted paragraph below).

    Quoting the entire paragraph:

    Out of approximately 325,000 posts analyzed over a two day period, we detected
    112 instances of known CSAM, as well as 554 instances of content identified as
    sexually explicit with highest confidence by Google SafeSearch in posts that also
    matched hashtags or keywords commonly used by child exploitation communities.
    We also found 713 uses of the top 20 CSAM-related hashtags on the Fediverse
    on posts containing media, as well as 1,217 posts containing no media (the text
    content of which primarily related to off-site CSAM trading or grooming of minors).
    From post metadata, we observed the presence of emerging content categories
    including Computer-Generated CSAM (CG-CSAM) as well as Self-Generated CSAM
    (SG-CSAM).
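
    In other words, the methodology amounts to two passes per post: a lookup of the media's hash against a list of known CSAM hashes, and a check combining a highest-confidence explicit-content classifier result with flagged hashtags or keywords. Below is a minimal Python sketch of that flow, not the report's actual code: `known_hashes`, `flagged_terms`, and `explicit_score` are hypothetical stand-ins (the report used PhotoDNA hash matching and Google SafeSearch), and SHA-256 here is only a placeholder for a real perceptual hash.

    ```python
    import hashlib
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Post:
        media: Optional[bytes]  # raw image bytes; None for text-only posts
        text: str               # post body plus hashtags

    def media_hash(data: bytes) -> str:
        # Placeholder: real pipelines use perceptual hashes (e.g. PhotoDNA, PDQ)
        # that survive re-encoding; SHA-256 only matches byte-identical files.
        return hashlib.sha256(data).hexdigest()

    def classify_post(post: Post,
                      known_hashes: set[str],    # hypothetical known-hash list
                      flagged_terms: set[str],   # hypothetical keyword/hashtag list
                      explicit_score: Callable[[bytes], float]) -> str:
        """Sketch of the two detection passes described in the quoted paragraph."""
        words = set(post.text.lower().split())
        if post.media is not None:
            # Pass 1: lookup against the known-hash list.
            if media_hash(post.media) in known_hashes:
                return "known-hash match"
            # Pass 2: highest-confidence explicit classification AND
            # co-occurrence with flagged hashtags/keywords.
            if explicit_score(post.media) >= 0.99 and words & flagged_terms:
                return "classifier + keyword match"
        elif words & flagged_terms:
            # Text-only posts are counted separately in the report.
            return "text-only keyword match"
        return "no match"
    ```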

    • Rivalarrival@lemmy.today · 1 year ago

      How are the authors distinguishing between posts made by actual pedophiles and posts by law enforcement agencies known to be operating honeypots?