Introduction
AI is rapidly transforming marketing practice, enabling businesses to generate personalised content, automate customer engagement and produce creative assets at unprecedented scale. Yet the legal framework governing marketing communications – spanning intellectual property, data protection, consumer protection and online safety – applies equally to AI-assisted activity.
A number of high-profile marketing missteps demonstrate the potential reputational and legal consequences of poorly governed AI deployment. Several major consumer brands have faced significant backlash after releasing AI-generated advertising perceived as inauthentic, misleading or inconsistent with brand values. Coca-Cola’s AI-assisted holiday campaign, for example, attracted criticism from both consumers and creative professionals, while a luxury fashion campaign by Valentino featuring surreal AI-generated visuals was widely criticised online as “tacky” and misaligned with the brand’s heritage of craftsmanship.
These incidents underline a broader point: the use of AI in marketing is not merely a technological or creative question, but a legal and governance issue engaging multiple areas of English law.
Set out below are five of the most significant legal risks associated with the use of AI in marketing, together with practical steps organisations may take to mitigate exposure.
1. Copyright and intellectual property
Generative AI systems are commonly trained on vast datasets that may include copyrighted images, text and other creative material. Where AI-generated marketing content reproduces or closely imitates protected works, businesses risk infringement claims.
This risk is not merely hypothetical. For example, recent litigation between Getty Images and Stability AI in the UK centred on allegations that copyrighted images were used without permission to train an image-generation model and that outputs may reproduce protected content. Such disputes demonstrate that liability may arise both at the training stage and in downstream commercial use.
Risk mitigation:
- Conduct due diligence on AI vendors’ training data sources and licensing arrangements.
- Implement contractual indemnities and warranties addressing intellectual property compliance.
- Review AI-generated outputs for recognisable third-party material before publication.
2. Data protection and privacy breaches
AI-driven marketing frequently involves the processing of personal data, including profiling, behavioural targeting and the generation of synthetic or manipulated imagery. These activities engage obligations under the UK GDPR and the Data Protection Act 2018.
Regulatory scrutiny is intensifying. In February 2026, the Information Commissioner’s Office (ICO) fined MediaLab, owner of the image-sharing platform Imgur, after finding that children’s personal information had been processed unlawfully and without appropriate age-verification safeguards over a multi-year period. The regulator emphasised that organisations must implement suitable protections where children’s data is likely to be involved.
Risk mitigation:
- Undertake data-protection impact assessments before deploying AI marketing tools.
- Ensure lawful bases exist for profiling and targeted advertising.
- Apply data-minimisation, transparency and human-oversight measures.
3. Misleading or non-compliant advertising
UK advertising regulation is technology-neutral: the CAP and BCAP Codes apply regardless of whether content is created by humans or AI. Regulators have emphasised that disclosure of AI use will not cure a fundamentally misleading claim – for example, where an AI-generated image exaggerates the real-world effect of a cosmetic product.
Recent Advertising Standards Authority (ASA) rulings further illustrate the breadth of enforcement. An in-game advertisement for an AI photo-editing app was banned for sexualising women and being harmful and irresponsible, while other campaigns have been prohibited for causing serious or widespread offence.
Risk mitigation:
- Apply the same substantiation and truthfulness standards to AI-generated content as to traditional advertising.
- Assess whether omission of AI disclosure could mislead consumers.
- Maintain internal approval processes and legal review for marketing assets.
4. Defamation, deepfakes, and unlawful content
AI enables the rapid creation of realistic synthetic images, audio and video, increasing the risk of defamatory or otherwise unlawful marketing material. UK law is evolving quickly in response.
Government statements in early 2026 confirmed that sharing or threatening to share non-consensual deepfake intimate images constitutes a criminal offence under the Online Safety framework, reflecting the severe harm caused by such content. Legislative measures addressing deepfake image abuse have also been commenced to strengthen enforcement.
Parallel regulatory and law-enforcement action underscores the seriousness of the issue. Authorities in the UK and Europe are investigating AI systems linked to the creation of harmful sexualised imagery, demonstrating cross-border scrutiny of generative technologies.
Risk mitigation:
- Prohibit the use of identifiable individuals in AI-generated marketing without explicit consent.
- Implement content-moderation and escalation procedures.
- Train marketing teams on defamation, harassment and online-safety offences.
5. Consumer protection, fairness, and ethical risk
Beyond strict legality, AI-driven marketing raises broader concerns regarding manipulation, bias and consumer detriment. The scale and realism of deepfakes are increasing rapidly, with millions projected to circulate online annually, intensifying the challenge of detection and of protecting the public.
Regulators are therefore likely to scrutinise not only compliance with specific rules but also overall fairness and transparency in AI-enabled consumer engagement. Failures in this regard may trigger enforcement under consumer-protection legislation or reputational harm even where no clear statutory breach exists.
Risk mitigation:
- Adopt internal AI-governance frameworks aligned with emerging regulatory expectations.
- Conduct fairness and bias testing for AI-driven targeting or personalisation.
- Ensure clear consumer disclosures and accessible complaint mechanisms.
Final thoughts
AI presents transformative opportunities for marketing, yet it operates within an established, and increasingly active, legal landscape in England and Wales. Intellectual-property disputes, data-protection investigations, advertising enforcement and new criminal offences relating to deepfakes collectively demonstrate that regulatory risk is immediate and material.
Organisations deploying AI in marketing should therefore adopt a proactive compliance strategy encompassing contractual safeguards, governance controls, legal review and ethical oversight. Those that do so will be best positioned to realise AI’s commercial benefits while managing the evolving legal risks.