Illegal content

Regulation (EU) 2022/2065 on a Single Market for Digital Services and amending Directive 2000/31/EC (the Digital Services Act, DSA) defines illegal content as any information that is not in compliance with the law, either in itself or in relation to an activity it facilitates or promotes. The DSA does not itself specify which types of content are illegal; this is determined by sectoral legislation at Union and Member State level.

The categories of illegal content are set out, inter alia, in the Implementing Regulation on Transparency Reporting Templates for Online Platform Providers.

A more detailed breakdown of illegal content:

1. Animal welfare
  • Harm to animals
  • Illegal sale of animals
  • Not covered by other subcategories

2. Consumer information infringements
  • Covert advertising or commercial communications (including by influencers)
  • Insufficient information on traders
  • Misleading information about the characteristics of goods and services
  • Misleading information on consumer rights
  • Non-compliance with pricing rules
  • Not covered by other subcategories

3. Cyber violence
  • Cyber bullying and cyber harassment
  • Cyber harassment
  • Cyber incitement to hatred or violence
  • Cyber stalking
  • Sharing (intimate) material without consent, including image-based sexual abuse (excluding content depicting minors)
  • Sharing material containing deepfakes or similar technology without consent, using a third party's features (excluding content depicting minors)
  • Not covered by other subcategories

4. Cyber violence against women
  • Cyber bullying and cyber harassment of girls
  • Cyber harassment of women
  • Cyber stalking of women
  • Gender-based misinformation
  • Illegal incitement to violence and hatred against women
  • Sharing (intimate) material targeting women without consent, including image-based sexual abuse of women (excluding content depicting minors)
  • Sharing material targeting women that contains deepfakes or similar technology, without consent, using a third party's features (excluding content depicting minors)
  • Not covered by other subcategories

5. Data protection and privacy violations
  • Biometric data breach
  • Data falsification
  • Missing legal basis for data processing
  • Right to be forgotten
  • Not covered by other subcategories

6. Illegal or harmful speech
  • Defamation
  • Discrimination
  • Illegal incitement to violence and hatred based on protected characteristics (hate speech)
  • Not covered by other subcategories

7. Intellectual property infringements
  • Copyright infringements
  • Infringements of design rights
  • Infringements of geographical indications
  • Infringements of patent rights
  • Infringements of trade secrets
  • Infringements of trade mark rights
  • Not covered by other subcategories

8. Negative effects on civic discourse or elections
  • Misinformation, disinformation and foreign information manipulation and interference
  • Breach of EU law in the area of civic discourse or elections
  • Breach of national law in the area of civic discourse or elections
  • Not covered by other subcategories

9. Protection of minors
  • Age limits for minors
  • Child sexual abuse material (CSAM)
  • Child sexual abuse material containing deepfakes or similar technology
  • Solicitation of minors for sexual purposes
  • Dangerous challenges
  • Not covered by other subcategories

10. Risk for public security
  • Illicit organisations
  • Risk of environmental damage
  • Risk to public health
  • Terrorist content
  • Not covered by other subcategories

11. Scams and/or fraud
  • Impersonation or account hijacking
  • Inauthentic accounts
  • Fraudulent listings
  • Inauthentic user reviews
  • Phishing
  • Pyramid schemes
  • Not covered by other subcategories

12. Self-harm
  • Content promoting eating disorders
  • More severe forms of self-harm
  • Suicide
  • Not covered by other subcategories

13. Unsafe, non-compliant or prohibited products
  • Prohibited or restricted products
  • Unsafe or non-compliant products
  • Not covered by other subcategories

14. Violence
  • Concerted action with intent to harm
  • General calls for or incitement to violence and/or hatred
  • Exploitation of people
  • Trafficking in human beings
  • Trafficking in women and girls
  • Not covered by other subcategories

15. Other breaches of the provider's terms and conditions
  • Adult sexual material
  • Age restrictions
  • Geographical requirements
  • Goods/services that may not be offered on the platform
  • Language requirements
  • Nudity
  • Not covered by other subcategories

16. Type of illegal content not defined by the public authority
  (no subcategories)

17. Type of alleged illegal content not identified by the notifier
  (no subcategories)
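
For providers that need to handle these categories programmatically, for example to populate a reporting form or to tag statements of reasons for transparency reporting, the breakdown above can be encoded as a simple lookup structure. The sketch below is illustrative only: the codes, category names and subcategory labels are copied from the table (abridged to two categories for brevity), while the dictionary layout and the helper function subcategories_for are assumptions for this example, not anything prescribed by the DSA or the Implementing Regulation.

```python
# Hypothetical, abridged encoding of the category table above,
# e.g. for driving a notice form or tagging reports internally.
# Only the structure is assumed; codes and labels come from the table.

ILLEGAL_CONTENT_CATEGORIES: dict[int, dict] = {
    1: {
        "category": "Animal welfare",
        "subcategories": [
            "Harm to animals",
            "Illegal sale of animals",
            "Not covered by other subcategories",
        ],
    },
    3: {
        "category": "Cyber violence",
        "subcategories": [
            "Cyber bullying and cyber harassment",
            "Cyber incitement to hatred or violence",
            "Cyber stalking",
            # ... remaining subcategories from the table ...
        ],
    },
    # ... categories 2 and 4-17 follow the same pattern ...
}


def subcategories_for(code: int) -> list[str]:
    """Return the subcategory labels for a given category code, or an empty list."""
    entry = ILLEGAL_CONTENT_CATEGORIES.get(code)
    return entry["subcategories"] if entry else []


if __name__ == "__main__":
    # Example: list the subcategories available under code 1 (Animal welfare).
    print(subcategories_for(1))
```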

Separate guidance for users explains how to report illegal content.

The aim is to protect users and to ensure that online platforms and other intermediary services responsibly manage content that could cause harm to individuals or society.

If users believe that information constituting illegal content has been published on an online platform, they should report it to the competent authorities or file a report directly with the provider. Reporting content through a trusted flagger is also an option. The Agency will publish on its website the list of organisations to which it grants trusted flagger status.

 
