Article · September 23, 2025

Italy Takes Bold Steps on AI: New Deepfake Regulations and Enhanced Protections for Children Under 14

By Zakariae Ben Allal

Italy is rapidly advancing its approach to artificial intelligence with a new set of regulations aimed at reducing deepfakes and protecting children under 14 online. Here’s a breakdown of the key changes, the importance of these rules, and how businesses, creators, and families can take action.

Why This Matters Now

As AI-generated content becomes increasingly prevalent—ranging from fun face swaps to highly convincing deepfakes—it poses risks to public trust, reputation, and even consumer safety. Simultaneously, children are interacting with AI-driven apps and social media at ever-younger ages, often without adequate parental consent or age verification.

In response, Italy is taking decisive action focusing on two critical areas: ensuring transparency and accountability in AI-generated media, and implementing stronger protections for minors under 14. These new measures complement the European Union’s landmark AI Act and existing child privacy regulations, but they go even further in practice.

Key Targets of Italy’s New Rules

  • Deepfake Transparency and Accountability: Clear labeling obligations for synthetic media and penalties for malicious or undisclosed uses.
  • Child Protection: Stricter parental consent requirements, improved age verification expectations, and enhanced safety features for AI and social media.
  • Compliance with the EU AI Act: National measures designed to align with EU-wide obligations regarding risk, transparency, and oversight.

Italy’s policy reforms integrate national AI initiatives, the EU AI Act, and heightened enforcement by the Italian data protection authority (the Garante), resulting in a more robust regulatory environment for deepfakes and youth safety online.

Aligning with the EU AI Act

The EU AI Act outlines a comprehensive framework, including bans on specific high-risk AI practices, rigorous requirements for high-risk systems, and transparency obligations for AI capable of generating or manipulating content. Providers must disclose when their content is AI-generated and incorporate mechanisms to detect deepfakes and prevent misuse. Adopted by the European Parliament in 2024, the Act has received formal approval from EU governments and is set for phased implementation.

Italy’s new approach builds on this framework by emphasizing tangible measures to mitigate the risks associated with deepfakes and to ensure greater protection for minors, supported by strengthened local enforcement and tailored regulations for social platforms and AI tools accessed by children.

For more insights on the EU AI Act, check out the European Parliament’s overview and the Council’s updates on timelines and obligations (European Parliament; Council of the EU).

Changes Regarding Deepfakes in Italy

Under the new regulations, AI-generated content—be it images, audio, or video—that could mislead the public must be clearly labeled. Penalties will be enforced against those who use deepfakes to deceive, defame, or manipulate public opinion. Key enforcement areas will include:

  • Transparency: Mandatory visible notices and content provenance indicators for AI-generated media.
  • Accountability: Holding creators, platforms, or deployers liable when undisclosed deepfakes lead to harm.
  • Election Integrity: Enhanced scrutiny during campaigns, including expedited takedowns and clear labeling.

These national steps align with the obligations set out in the EU AI Act, reinforcing the need for transparency in AI-generated content while promoting safe creative expression.

Why Deepfakes Require Urgent Action

In recent years, deepfakes have begun to shape public discourse and exploit vulnerabilities worldwide. Examples include AI-generated media used to sway elections or spread disinformation, as well as voice-cloning scams. Regulators across the EU and beyond are recognizing this as an urgent challenge to information integrity and consumer protection.

  • The EU AI Act mandates transparency for generative AI and synthetic media to help users identify AI-generated content (European Parliament).
  • In the U.S., telecom regulators acted against AI voice-cloning used in political robocalls following notable deepfake incidents during the 2024 election cycle (FCC).

Italy’s new measures reflect this global movement for clearer labeling, quicker responses to harmful content, and consequences for deceptive AI-generated media.

Strengthening Protections for Children Under 14

Italy has consistently recognized 14 as the minimum age for children to consent to data processing by online services. Children under this age require parental consent, in line with EU GDPR, which allows member states to set the consent age between 13 and 16. Italy opted for 14 and has been urging platforms to reinforce age verification processes.

What’s changing is the heightened focus on enforcement. Expect stricter requirements around:

  • Age Verification: Platforms and AI applications likely to be accessed by minors must ensure reliable age checks to prevent under-14 sign-ups without parental approval.
  • Parental Consent: Services need to obtain and verify consent from a parent or guardian before processing data from children under 14.
  • Safety by Design: Default settings should minimize data collection, disable precise location tracking, limit algorithmic profiling, and restrict contact with unknown adults.
  • Content Controls: Stronger filters for mature content and age-appropriate recommendation systems for younger users.
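The consent logic above follows directly from Italy's age threshold. As an illustrative sketch (not any platform's actual implementation), a sign-up flow could gate registration like this, with the function and field names being hypothetical:

```python
from datetime import date

DIGITAL_CONSENT_AGE_IT = 14  # Italy's chosen GDPR Article 8 threshold

def age_from_birthdate(birthdate: date, today: date) -> int:
    """Compute completed years of age as of `today`."""
    years = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1  # birthday has not occurred yet this year
    return years

def requires_parental_consent(birthdate: date, today: date) -> bool:
    """True if the user is under Italy's digital-consent age of 14."""
    return age_from_birthdate(birthdate, today) < DIGITAL_CONSENT_AGE_IT

def can_register(birthdate: date, parental_consent_verified: bool,
                 today: date) -> bool:
    """Allow sign-up only for users 14+ or with verified parental consent."""
    if requires_parental_consent(birthdate, today):
        return parental_consent_verified
    return True
```

A real service would combine a check like this with reliable age assurance (the birthdate itself must be trustworthy) and a verifiable consent mechanism for the parent or guardian.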

The Garante has acted decisively in recent years to protect minors by intervening against platforms with inadequate age checks and even temporarily suspending a popular AI chatbot in 2023 due to privacy concerns. These actions have paved the way for more consistent protections for children under 14.

Practical Implications of the New Rules

For Platforms, AI Providers, and Developers

  • Label Synthetic Media: Clearly indicate when images, audio, and video are AI-generated, incorporating machine-readable provenance signals where feasible (e.g., through C2PA standards).
  • Add Provenance and Watermarking: Implement standards for content authenticity and watermarking whenever possible to support downstream detection and moderation (C2PA).
  • Audit and Log: Maintain internal documentation of labeling, detection, and moderation actions related to deepfakes.
  • Age Gating: Utilize reliable age estimation or verification for high-risk features, particularly generative tools capable of producing realistic depictions.
  • Parental Consent Flows: Ensure parental consent is verified for users under 14 and provide meaningful controls and transparency for parents.
  • Safety by Design: Implement child-appropriate defaults and limit profiling of minors in accordance with GDPR and the Digital Services Act (DSA overview).
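To make the "layered disclosure" idea concrete, here is a minimal sketch of pairing a user-visible label with a machine-readable provenance record. The field names are hypothetical illustrations, not part of the C2PA specification, which defines its own manifest format and signing process:

```python
import hashlib
import json

def label_synthetic_media(media_bytes: bytes, generator: str) -> dict:
    """Build a two-layer disclosure: a visible label plus machine-readable
    provenance metadata (illustrative stand-in for a standard like C2PA)."""
    return {
        "visible_label": "AI-generated content",   # surfaced to end users
        "provenance": {                            # consumed by platforms/tools
            "ai_generated": True,
            "generator": generator,
            "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds record to the file
        },
    }

record = label_synthetic_media(b"<image bytes>", generator="example-model-v1")
print(json.dumps(record, indent=2))
```

In practice the provenance layer should be cryptographically signed and embedded in the media file itself, which is exactly what content-authenticity standards like C2PA provide.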

For Media Companies and Creators

  • Disclose AI usage in visuals and audio, particularly in news, advertising, or political contexts.
  • Maintain a provenance trail for edits and generation processes to validate authenticity if questioned.
  • Avoid realistic portrayals of real individuals without consent, especially during elections or involving minors.

For Schools and Families

  • Activate parental controls and platform-specific safety settings by default for children under 14.
  • Engage in discussions about deepfakes—help children learn how to spot manipulated media and understand the importance of labeling.
  • Choose services that promote child safety audits, provide clear age gating, and facilitate straightforward parental consent.

Enforcement and Coordination in Italy

Italy’s regulatory strategy relies heavily on collaboration among national authorities and EU regulators:

  • Data Protection Enforcement: The Garante oversees adherence to privacy, consent, and profiling rules under GDPR, especially regarding minors.
  • AI Oversight: EU bodies established under the AI Act will coordinate with national authorities on high-risk AI, transparency duties, and systematic risk assessments.
  • Platform Obligations: Major online platforms must address systemic risks, including risks to minors and the spread of disinformation, with oversight from the European Commission.

Anticipate more collaborative investigations where deepfakes intersect with privacy and platform safety, particularly during elections or significant public events.

Challenges Ahead: Technology and Trade-offs

Despite the new regulations, challenges persist in deepfake detection and content authenticity. Watermarks may be removed, or their effectiveness can diminish when content is altered. Detection technologies can struggle to keep pace with advancements in generation techniques. Additionally, stringent age verification processes may create friction and raise privacy concerns if not executed carefully.
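The fragility of watermarks can be seen even in a toy example: an invisible zero-width marker embedded in text survives copy-paste but is wiped out by a trivial re-encoding pass. Real image and audio watermarks are far more sophisticated, but they face the same class of problem when content is transformed:

```python
ZW = "\u200b"  # zero-width space used as a naive invisible marker

def add_naive_watermark(text: str) -> str:
    """Insert a zero-width marker between every character (toy watermark)."""
    return ZW.join(text) + ZW

def has_watermark(text: str) -> bool:
    return ZW in text

def reencode(text: str) -> str:
    """Simulate lossy re-processing: keep only ordinary printable characters."""
    return "".join(ch for ch in text if ch.isprintable() and ch != ZW)

marked = add_naive_watermark("AI-generated caption")
stripped = reencode(marked)
# The watermark is present in `marked` but gone from `stripped`,
# while the visible text is unchanged.
```

This is why the regulatory approach pairs watermarking with provenance metadata, labeling duties, and liability, rather than relying on any single technical signal.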

Italy’s strategy reflects the EU’s broader objective: balancing reasonable transparency requirements with targeted enforcement and industry standards to elevate the costs associated with deception while fostering innovation. The pressing questions are less about the need for action and more about consistent and proportionate enforcement.

Criticism and Ongoing Questions

  • Risk of Over-labeling: Excessive labeling may lead to warning fatigue among users, rendering notifications ineffective.
  • Chilling Effects: Creators may fear liability for experimental content, while platforms might worry about accountability for user uploads that are challenging to screen thoroughly.
  • Age Verification Trade-offs: Rigorous checks may reduce risks but must also be sensitive to privacy, especially for families without formal IDs or credit cards.
  • Cross-Border Enforcement: Digital content posted outside Italy can still be accessed domestically; coordination with EU and global platforms is essential.

Action Checklist

For Companies

  • Map your AI features and user paths that could generate or host synthetic media.
  • Implement layered disclosures: visual labels for users alongside machine-readable provenance for platforms.
  • Adopt a content authenticity framework (e.g., C2PA) for generated media and marketing materials.
  • Establish an age assurance mechanism that is privacy-conscious and accessible, with parental consent processes for under-14 users.
  • Ensure your policies align with the EU AI Act, GDPR, and the Digital Services Act. Document risk analyses and mitigation strategies.
  • Train moderators and support teams to recognize deepfake indicators and appropriate escalation procedures.

For Creators and Newsrooms

  • Disclose AI utilization in visuals and audio when realism might mislead.
  • Store original files and generation metadata as evidence of authenticity.
  • Avoid using individuals’ likenesses without clear consent, particularly minors.

For Parents and Educators

  • Activate parental controls and device-level restrictions for children under 14.
  • Utilize teachable moments to discuss how AI can modify images, voices, and videos.
  • Prioritize apps that clarify their AI features, publish safety reports, and provide verified parental consent processes.

The Bottom Line

Italy is taking proactive and significant measures to address two of the most pressing issues in AI: deepfakes and child protection. The new regulations aim to enhance transparency around synthetic content while ensuring that children under 14 are not ensnared by data-hungry, AI-powered products without authentic parental consent and safety measures. As the EU AI Act unfolds, Italy’s approach will serve as a critical case study for enforcing AI accountability while fostering innovation.

FAQs

Is Italy banning deepfakes?

No. The focus is on transparency and accountability. Deepfakes must be clearly disclosed, and harmful or undisclosed uses may incur penalties. Satire, artistic expression, and experimentation are still permitted, but the new rules increase the stakes for deception.

What qualifies as a deepfake under these rules?

Highly realistic AI-generated images, audio, or video that depict individuals or events and could mislead viewers if not labeled. Basic filters or obvious parodies typically present lower risks, though context is crucial.

Why is the age threshold set at 14 in Italy?

Under GDPR, EU countries can establish the age of digital consent between 13 and 16. Italy has set it at 14, requiring parental consent for online services to process a child’s data below that age.

How is this related to the EU AI Act?

The EU AI Act provides comprehensive EU-wide regulations, including transparency for generative AI. Italy’s measures enhance these, focusing on enforcement and child-specific protections at the national level.

What should companies prioritize first?

Start by implementing clear labeling and content provenance for AI-generated media, ensuring age verification and parental consent for users under 14, and aligning risk assessments with the EU AI Act and the Digital Services Act.

Sources

  1. European Parliament – Artificial Intelligence Act: MEPs Adopt Landmark Law
  2. Council of the EU – AI Act: Final Approval and Overview
  3. GDPR Text – Regulation (EU) 2016/679 (Article 8 on Children’s Data)
  4. European Commission – Digital Services Act Overview
  5. Reuters – Italy Approves Bill to Regulate the Use of AI (Context and National Approach)
  6. BBC – Italy Temporarily Bans ChatGPT Over Data Concerns (Garante Enforcement Context)
  7. BBC – Italy Orders Platforms to Improve Age Checks After Child Safety Concerns
  8. FCC – Enforcement Actions on AI Voice-Cloning in Robocalls
  9. C2PA – Content Authenticity and Provenance Standards

Thank You for Reading this Blog and See You Soon! 🙏 👋
