News
The principles have been signed by around 10 companies so far. The companies also promise not to release AI models until they have been evaluated for CSAM imagery.
Major AI companies have signed on to implement a set of principles that aim to prevent the creation and spread of AI-generated child sexual abuse material (AIG-CSAM). Companies like Google, Meta ...
Tech giants have promised to ensure that their AI training datasets are free of any child sexual abuse material. In collaboration with the nonprofits Thorn and All Tech Is Human, the firms: Amazon ...
Deepfake CSAM is costing this man a very real prison sentence. Fifty-seven-year-old James Smelko of Pittsburgh was sentenced to 14 years and seven months in prison for possessing deepfake child ...
A dataset used to train AI image generation tools such as Stable Diffusion has been pulled down after researchers confirmed the presence of CSAM among its 5 billion-plus images.
Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI. The law went into effect on January 1, ...
LAION, the German nonprofit behind the dataset used to train Stable Diffusion and other generative AI models, says it has removed suspected CSAM from its training datasets.
During an exchange of views with the European Parliament's Civil Liberties, Justice and Home Affairs (LIBE) committee this afternoon, Johansson admitted that the EU's executive is investigating the ...