The software industry has seen more than its fair share of disruption in the last few years. Concerns about cybersecurity have led to a wave of new legislation regulating what was once (outside specific domains) a largely unregulated field. The key piece of legislation in the EU is the Cyber Resilience Act (CRA), which requires software vendors to ensure that their products are securely developed, supported, and properly documented.
At the same time, AI-powered coding tools have made it possible for developers to generate large amounts of code faster than ever before. At one end of the spectrum, these tools can be used under a developer’s close oversight to save time, with every suggestion reviewed line by line. At the other, tools like Cursor, Claude Code, Windsurf and Lovable can be used to output whole functions or even applications in response to prompts, without the user necessarily needing to review the generated code or even understand how it works. The latter approach is what has become known as “vibe-coding”.
We consider whether these trends are in direct conflict, and how businesses can make the most of generative AI coding assistants without potentially opening themselves up to compliance risks under the CRA.
What is the Cyber Resilience Act?
The CRA is the first piece of legislation imposing minimum standards on all software offered in the EU. From December 2027, all software must meet a set of “essential cybersecurity requirements”, which means that it will need to (amongst many other things):
- Be designed, developed and produced to ensure an appropriate level of cybersecurity;
- Not have any known exploitable vulnerabilities;
- Include appropriate access control mechanisms;
- Protect the integrity and confidentiality of all processed data (not just personal data); and
- Be designed, developed and produced to limit attack surfaces and to mitigate the impact of exploits.
Software vendors will also need to conduct cybersecurity risk assessments for their products, perform due diligence on any third-party software components, draw up a mandatory set of technical documents including a software bill of materials, and report vulnerabilities through a standardised process. Products must undergo conformity assessment, and those that meet the requirements will need to be CE marked (just like traditional physical products).
Convenience vs. compliance
It’s hard to see how a software vendor will be able to meet these requirements if a significant amount of its codebase is the result of vibe-coding. There is an obvious gap between the CRA’s expectations of careful design, documentation and due diligence, and a “vibes”-based approach to software development. Vibe-coding stereotypically aims to achieve working software as quickly as possible, and may skip key steps like architecture design, security reviews and even code reviews.
For example, if a user’s request is broad enough, generated code is likely to introduce new third-party libraries to a project. If nobody is performing due diligence on those libraries to assess the cybersecurity risks, documenting their use, and reviewing the calls made to them, then the essential cybersecurity requirements are unlikely to be met. The resulting product will therefore not be compliant and cannot be placed on the market in the EU.
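By way of illustration only, the sketch below shows one lightweight way a team might start documenting its third-party components: a short Python script that parses a pinned dependency manifest into a simple inventory a reviewer can sign off. The file names and record fields are our own assumptions for the example, not anything prescribed by the CRA.

```python
# Illustrative sketch: build a simple third-party component inventory from a
# Python dependency manifest, as one input to due diligence records and a
# software bill of materials. File names and fields are assumptions only.
import json
from pathlib import Path

def read_pinned_dependencies(manifest: Path) -> list[dict]:
    """Parse lines of the form 'package==1.2.3' into component records."""
    components = []
    for line in manifest.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "==" in line:
            name, version = line.split("==", 1)
            components.append({
                "name": name.strip(),
                "version": version.strip(),
                "reviewed": False,  # to be set once due diligence is recorded
                "notes": "",
            })
    return components

if __name__ == "__main__":
    inventory = read_pinned_dependencies(Path("requirements.txt"))
    Path("third_party_inventory.json").write_text(json.dumps(inventory, indent=2))
    print(f"Recorded {len(inventory)} third-party components for review.")
```

An inventory of this kind can then feed into a software bill of materials and give the team somewhere to record the outcome of its due diligence on each component, whether the code that pulled it in was written by a human or generated by an AI tool.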
Proponents of vibe-coding might object that this misses the point: the objective is to produce useful prototypes as quickly as possible rather than to create production-ready software to be offered to customers. However, anyone who has spent any time developing software knows that prototype code and temporary solutions have a habit of making their way into production and then staying there for the long term. If software works, even superficially, it can be hard to justify investing time and money in refactoring the code just to achieve the same functionality (albeit more securely and reliably).
It’s true that “creating a whole app based on a prompt” is an extreme use case, and that most developers would not advocate having so little human oversight of code. But many AI-driven application development tools advertise themselves as “no-code” platforms, so it seems likely that at least some purely vibe-coded software will be released onto the market. And in any case, a compliance risk can still arise even where developers intend to use AI tools in a more limited way and review their outputs carefully. There are plenty of threads online suggesting that AI coding tools can exhibit scope creep, making edits to the codebase across multiple files that go beyond what a user is expecting in response to their prompt.
If a business is hoping to rely on vibe-coding to reduce the need for secure design and human review of code, it is likely to be difficult to reconcile this with its obligations under the CRA.
How can AI coding assistants support, or even enable, CRA compliance?
However, if the same tools are used more conservatively for AI-assisted (but not AI-dominated) development, the position is different. Many (if not most) organisations are adopting an “AI pair programming” model, which aims to capture the speed and quality improvements promised by AI without giving up the human oversight and code governance processes of traditional development. Even the best human developers will occasionally produce insecure or poor quality code. If managed properly, introducing AI into the mix and having two pairs of eyes (one human, one machine) on code may well improve its overall quality and security.
Businesses will need to decide how much of their development process they are willing to outsource to AI, and what level of oversight they will expect their developers to continue to exercise. At the people and process level, many of our clients already have in place specific guidelines on the responsible use of AI coding tools and associated governance and code review processes. At the technology level, many of these tools can be configured to prevent inadvertent changes to key files, and instructed to seek permission before adding third party dependencies or expanding the scope of their changes beyond what the user might expect.
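As a purely illustrative sketch (and not the configuration syntax of any particular tool), the script below shows the kind of repository-level guardrail that can sit alongside those controls: it fails a pre-commit or CI check if protected files have been touched, or if the dependency manifest contains packages that have not yet been approved, so that a human reviews the change before it lands. The protected file paths, manifest and approval list are hypothetical.

```python
# Illustrative sketch of a repository-level guardrail, independent of any
# particular AI coding tool: block commits that touch protected files or add
# dependencies not yet on an approved list. All paths and file names below
# are hypothetical examples.
import subprocess
import sys
from pathlib import Path

PROTECTED_FILES = {"Dockerfile", "deploy/production.yaml", "security/policy.md"}
APPROVED_DEPENDENCIES = Path("approved_dependencies.txt")
MANIFEST = Path("requirements.txt")

def changed_files() -> set[str]:
    """List files staged for commit, using git's plumbing output."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def unapproved_dependencies() -> set[str]:
    """Return manifest entries that do not appear on the approved list."""
    approved = set(APPROVED_DEPENDENCIES.read_text().split())
    declared = {
        line.split("==")[0].strip()
        for line in MANIFEST.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    }
    return declared - approved

if __name__ == "__main__":
    problems = []
    touched_protected = changed_files() & PROTECTED_FILES
    if touched_protected:
        problems.append(f"Protected files modified: {sorted(touched_protected)}")
    unapproved = unapproved_dependencies()
    if unapproved:
        problems.append(f"Dependencies awaiting approval: {sorted(unapproved)}")
    if problems:
        print("\n".join(problems))
        sys.exit(1)  # fail the check so a human reviews before the change lands
```

Run as a pre-commit hook or CI step, a check of this kind catches scope creep from an AI assistant in much the same way it would catch an over-eager human contributor, and leaves an audit trail of what was approved and when.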
Used in this way, AI coding assistants are unlikely to create any conflict with the requirements of the CRA.
As ever, good governance makes all the difference
The Cyber Resilience Act doesn’t outlaw vibe-coding (Betteridge’s Law). However, it will make it hard, if not impossible, to bring a software product to market in the EU if it has been developed in an unstructured, undocumented or insecure way. Software that has been truly vibe-coded, without an organisation paying proper attention to secure design and programming practices, is unlikely to meet the essential cybersecurity requirements set out in the CRA.
We think that most mature businesses are unlikely to be using AI coding assistants in such an uncontrolled manner. Provided that a software provider can demonstrate that it uses these tools in a well-governed way that enhances rather than undermines the security of the product, it should be able to (and perhaps better able to) meet its obligations under the CRA.
