Europol has supported authorities from 19 countries in a large-scale hit against child sexual exploitation that has led to 25 arrests worldwide. The suspects were part of a criminal group whose members were engaged in the distribution of images of minors fully generated by artificial intelligence (AI).
That’s exactly how they work. According to many articles I’ve seen in the past, one of the most common models used for this purpose is Stable Diffusion. For all we know, this model was never fed any CSAM material, but it seems to be good enough for people to get off, which is exactly what matters.
How can it be trained to produce something without human input?
To verify its outputs are indeed correct, some human has to sit and view them.
Will that be you?
It wasn’t trained to produce every specific image it produces. That would make it pointless. It “learns” concepts and then applies them.
No one trained AI on material of Donald Trump sucking on feet, but it can still generate it.
It was able to produce that because enough images of both feet and Donald Trump exist.
How would it know what young genitals look like?
If you train a model on 1,000,000 images of dogs and 1,000,000 images of cats, your output isn’t going to be a 50/50 split of purely dogs and purely cats; it’s going to be (on average) somewhere between a cat and a dog. At no point did you have to feed in pictures of dog-cat hybrids to end up with that model.
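For the curious, here’s roughly what that blending looks like in practice. This is a minimal sketch using the `diffusers` library and the public `runwayml/stable-diffusion-v1-5` checkpoint (both my assumptions, not anything specified in this thread): it mixes the text embeddings for “cat” and “dog” and conditions generation on the mixture, yielding an image of a hybrid that never appeared in the training data.

```python
# Minimal sketch of concept blending in Stable Diffusion.
# Assumes the `diffusers`/`torch` packages, the public
# "runwayml/stable-diffusion-v1-5" checkpoint, and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt: str) -> torch.Tensor:
    """Encode a prompt with the CLIP text encoder that conditions the U-Net."""
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

cat = embed("a photo of a cat")
dog = embed("a photo of a dog")

# Condition generation on a 50/50 blend of the two concept embeddings.
# The training set contained no cat-dog hybrids, yet the output lands
# somewhere between a cat and a dog.
blended = 0.5 * cat + 0.5 * dog
image = pipe(prompt_embeds=blended).images[0]
image.save("cat_dog_blend.png")
```

The same idea generalizes: because the model represents concepts as points in a shared embedding space, it can be steered to regions between or across concepts it only ever saw separately.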
Yes but you start with the basics of a cat and a dog. So you start with adult genitals and…
Non-pornographic pictures of children and/or human-made pornographic drawings of children.
Okay, and those drawings are my problem.
https://www.icenews.is/2010/07/28/unsavoury-cartoon-ruling-sparks-debate-in-sweden/
It’s not clear cut that those are okay.
“Okay” in what sense? If you mean morally, then I think that’s pretty clear cut. If you mean legally, then that’s just a technicality.
You could probably make some semi-realistic drawings and feed those in, and then re-train the model with those same images over and over until the model is biased to use the child-like properties of the drawings but the realism of the adult pictures. You could also feed the most CP-looking images from a partially trained model as the training data of another model, which over time would make the outputs approach the desired result.
But to know if it’s accurate, someone has to view and compare…
It doesn’t matter if it’s accurate or not as long as pedos can get off to it, so just keep going until they can. According to our definition of what a pedophile is, though, it would likely be accurate.
But if it’s not accurate, will pedos jerk off to it?
Probably not, but that’s irrelevant. The point is that no one needs to harm a child to find out if the output is sufficiently arousing.
As with much of modern AI, it’s able to train without much human intervention.
My point is, even if the results are not perfectly accurate and don’t closely resemble a child’s body, they work. They are widely used, in fact so widely that Europol made a giant issue out of it. People get off to whatever it manages to produce, and that’s what matters.
I do not care about how accurate it is, because it’s not me who consumes this content. I care about how effective it is at curbing worse desires in pedophiles, because I care about the safety of children.