The Fediverse, a decentralized social network with interconnected spaces that are each independently managed with unique rules and cultural norms, has seen a surge in popularity. Decentralization…
Not the best news in this report. We need to find ways to do more.
Why would someone downvote this post? We have a problem and it’s in our best interest to fix that.
Because it’s another “WON’T SOMEONE THINK OF THE CHILDREN” hysteria bait post.
They found 112 images of cp in the whole Fediverse. That’s a very small number. We’re doing pretty good.
It is not “in the whole fediverse”; it is out of approximately 325,000 posts analyzed over a two-day period.
And that is just for known images that matched the hash.
Quoting the entire paragraph:

“Out of approximately 325,000 posts analyzed over a two day period, we detected 112 instances of known CSAM, as well as 554 instances of content identified as sexually explicit with highest confidence by Google SafeSearch in posts that also matched hashtags or keywords commonly used by child exploitation communities. We also found 713 uses of the top 20 CSAM-related hashtags on the Fediverse on posts containing media, as well as 1,217 posts containing no media (the text content of which primarily related to off-site CSAM trading or grooming of minors). From post metadata, we observed the presence of emerging content categories including Computer-Generated CSAM (CG-CSAM) as well as Self-Generated CSAM (SG-CSAM).”
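For context on the two detection methods that paragraph mentions: “known CSAM” is counted by hashing each image and checking it against databases of previously identified material (the study reportedly used PhotoDNA, a perceptual hash that tolerates resizing and re-encoding), while other media is scored by Google SafeSearch, whose “highest confidence” corresponds to the VERY_LIKELY value of its likelihood scale. A minimal sketch of that kind of pipeline, with SHA-256 standing in for PhotoDNA and a hypothetical pre-loaded KNOWN_HASHES list:

```python
# Sketch of a two-stage media scan like the one the report describes:
#   1. hash-match against a list of known-material hashes
#   2. classify unmatched images with Google SafeSearch
# PhotoDNA is a perceptual hash that survives resizing/re-encoding;
# SHA-256 below is a simplification that only catches byte-identical copies.
import hashlib

from google.cloud import vision  # pip install google-cloud-vision

# Hypothetical placeholder: in practice this would be loaded from an
# NCMEC-style database of hashes of previously identified material.
KNOWN_HASHES: set[str] = set()


def matches_known_hash(image_bytes: bytes) -> bool:
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES


def is_explicit_highest_confidence(image_bytes: bytes) -> bool:
    # "Highest confidence" maps to the VERY_LIKELY value that the
    # SafeSearch annotation returns for its "adult" category.
    client = vision.ImageAnnotatorClient()
    response = client.safe_search_detection(
        image=vision.Image(content=image_bytes)
    )
    return response.safe_search_annotation.adult == vision.Likelihood.VERY_LIKELY


def scan_post_media(image_bytes: bytes) -> str:
    if matches_known_hash(image_bytes):
        return "known CSAM (hash match)"
    if is_explicit_highest_confidence(image_bytes):
        return "sexually explicit, highest confidence"
    return "not flagged"
```

The distinction matters for reading the numbers: the 112 hash matches can only count material that is already present in a known-hash database, which is why that figure is a floor rather than a total.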
How are the authors distinguishing between posts made by actual pedophiles and posts by law enforcement agencies known to be operating honeypots?
The report (if you can still find a working link) said that the vast majority of material that they found was drawn and animated, and hosted on one Mastodon instance out of Japan, where that shit is still legal.
Every time that little bit of truth comes up, someone reposts the broken link to the study, screaming about how the entire Fediverse is riddled with child porn.
So basically we had a bad apple that was probably already defederated by everyone else.
It’s Pawoo, the instance formerly owned by Pixiv, which is infamous for this kind of content, and those are still “just drawings” (unless some artists are using illegal real-life references).
They’re using generative AI to create photorealistic renditions now, which is causing everyone who finds out about it to have a moral crisis.
Well, that’s a very different and way more concerning thing…
… I mean … idk … If the argument is that the drawn version doesn’t harm kids and gives pedos an outlet, is an AI-generated version any different?
imo, the dicey part of the matter is “what amount of the AI’s dataset is made up of actual images of children”
Shit that is a good point.
The study doesn’t compare its findings to any other platform, so we can’t really tell if those numbers are good or bad. They just state the absolute numbers, without really going into too much detail about their search process. So no, you can’t draw the conclusion that the Fediverse has a CSAM problem, at least not from this study.
Of course that makes you wonder why they bothered to publish such a lackluster and alarmist study.
Pretty sure any quantity of CSAM that isn’t zero is bad…