AI and youth media protection

Against the backdrop of rapidly developing AI applications, a number of questions arise about future challenges in child and youth media protection. Where exactly these challenges lie is difficult to assess, as new products such as ChatGPT are constantly entering the market, usually without sufficient advance knowledge of the possible dangers.

Many AI systems are easily accessible; often a simple registration is sufficient, so even children can use them without difficulty. Companies such as Microsoft and Google are integrating AI technologies into their products and platforms. This means that children and young people will almost inevitably come into contact with AI-generated content in the future, even if they do not themselves use dedicated AI tools such as My AI on Snapchat.

Experts in the field of child and youth media protection agree that there are already various risks that are particularly relevant for minors. Problems related to generative AI include, for example:

  1. The age of users is often not taken into account by AI systems, making age-appropriate use difficult.
  2. Some language models are difficult to distinguish from human communication. For example, children may mistakenly assume they are talking to a real person.
  3. AI technology allows users to generate image content without any special prior knowledge. This includes content that is harmful to minors or dangerous, such as depictions of abuse or extremist propaganda.
  4. Missing or incorrect training data can lead to false information. AI can also be used deliberately for disinformation.
  5. Generative AI makes it easier to impersonate a false identity. Cybergrooming attacks in particular could become even harder for children to detect in the future.
  6. It is possible to create pornographic images (deepnudes) or deepfakes with real children's photos.
  7. By manipulating images and videos, especially by faking faces or voices, children and young people can become victims of cyberbullying.
  8. Safeguards in AI text-generation programs can in some cases easily be circumvented by user instructions.

The use of generative AI therefore requires close scrutiny and attention, especially when it comes to protecting minors.

The information on this page is based on this text from jugendschutz.net.