The EU Cyber Resilience Act (CRA) is a consumer protection regulation set to take effect in the second half of 2024. By 2027, products sold in the EU market must comply with it to carry the CE marking. The CRA aims to establish an adequate level of trust in digital products. After years of ineffective self-regulation and attempts to adapt existing industry guidelines, the lack of clear rules resulted in billions of euros in damages, an outcome often seen when cybersecurity is neglected. In response, the EU opted to regulate. The guiding principle is straightforward: just as European consumers expect food purchased in the EU to be safe, they should also expect digital products to meet adequate cybersecurity standards. This regulation has a profound impact on what manufacturers, vendors, developers, and integrators currently understand as "the norm".
In this article, we will cover the seven critical aspects of the CRA. If you are involved in releasing, developing, or integrating software or hardware products for the EU market, these Critical 7 will be invaluable in the coming years.
Most of the vendors, integrators, clients, and manufacturers we engage with mistakenly believe that this regulation is aimed solely at typical low-cost IoT products. This assumption could not be further from the truth. The regulation applies to any product in the common EU market that:
Specific categories of products that are already covered by existing regulations, namely medical devices (EU 2017/745), vehicles (EU 2019/2144), and radio equipment (EU 2022/30), are excluded, as is open-source software developed or supplied outside a commercial activity.
To drive the point home and clarify further, all of the following "things" are examples of what is in scope for the CRA:
Additionally, digital products are categorized as either "non-critical" or "critical," with the latter divided into Class I and Class II. Critical-class products face more stringent requirements, including third-party certification and adherence to Common Criteria and/or other standards, laboratories, or certification schemes. Critical systems are beyond the scope of this article and are not covered here.
The regulation mandates that a product’s security posture be "risk based." This means that a risk assessment must be conducted early in the design process, ideally as soon as the general and functional requirements are established. This assessment usually involves two interconnected analyses: (1) business and process risks, and (2) technical risks.
At Crimson7, we recommend approaching this assessment with a recognized, industry-standard threat modeling methodology. We have successfully used Microsoft’s STRIDE in various projects. By applying STRIDE, we consider the product design, its components and their interrelationships, and the internal data flows.
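To make this tangible, the sketch below shows one lightweight way the output of a STRIDE exercise could be recorded so it can later be referenced in the CRA documentation. It is a minimal illustration under our own assumptions: the product, data flows, threats, and decisions are invented, and real projects typically use dedicated threat modeling tooling rather than ad hoc scripts.

```python
# Minimal, hypothetical sketch of recording a STRIDE threat-modeling exercise.
# Product names, data flows, threats, and decisions are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Stride(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

@dataclass
class Threat:
    data_flow: str    # e.g. "device -> cloud API"
    category: Stride
    description: str
    risk: str         # e.g. "high" / "medium" / "low"
    decision: str     # accept / mitigate / transfer
    rationale: str    # the motivation the CRA expects to be documented

@dataclass
class ThreatModel:
    product: str
    threats: List[Threat] = field(default_factory=list)

    def register(self, threat: Threat) -> None:
        self.threats.append(threat)

# Example entry for a hypothetical firmware update flow
model = ThreatModel(product="Acme Sensor Hub")
model.register(Threat(
    data_flow="update server -> sensor firmware",
    category=Stride.TAMPERING,
    description="Update image could be modified in transit",
    risk="high",
    decision="mitigate",
    rationale="Sign update images and verify signatures on-device",
))
```

Kept in version control next to the design documents, a record like this doubles as the documented decision trail discussed below.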
The CRA, at Annex I.1.3, states that the following domains, where relevant to the system, must be covered:
The risk assessment exercise must be documented, from the threat and risk mapping to the decision process (accept, mitigate, or transfer) and the rationale behind each decision.
Unless this is already a familiar process, it is wise to seek assistance with STRIDE modeling from a company experienced in threat modeling for software, devices, and solutions. Alternatively, you can train your teams to use STRIDE, or another suitable framework of your choice, and run the exercise in-house.
During the implementation or integration phases, the manufacturer must ensure that the requirements and features defined during the design phase are implemented correctly. As security professionals, we have countless stories where the interpretation of “correctly” had a significant impact on the outcome.
The need for security testing is clearly outlined in the regulation (Annex I.2.3). However, it spans multiple domains. Here is a more technical suggestion: if possible, implement both positive and negative testing of functions (and their associated mitigating measures) within the development process’s test framework, as in the sketch below. Additionally, the system should undergo at least an end-to-end penetration test in addition to the testing of its individual components. The regulation also requires the final release to be free from known vulnerabilities (Annex I.1.2). The results of these tests and the evidence that findings were mitigated must be documented.
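As a minimal sketch of what "positive and negative testing" can look like in practice, here is a pytest-style example. The function under test and its validation rule are hypothetical; the point is that every security-relevant function gets at least one test proving it works as intended and one proving that its mitigating measure rejects malicious or malformed input.

```python
# Minimal sketch of positive and negative tests for a security-relevant function.
# The function and its validation rule are hypothetical examples.
import re
import pytest

MAX_NAME_LENGTH = 64
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_-]+$")

def set_device_name(name: str) -> str:
    """Validate and normalize a user-supplied device name."""
    if len(name) > MAX_NAME_LENGTH or not NAME_PATTERN.match(name):
        raise ValueError("invalid device name")
    return name.lower()

def test_set_device_name_accepts_valid_input():         # positive test
    assert set_device_name("Living-Room_Sensor1") == "living-room_sensor1"

def test_set_device_name_rejects_injection_attempt():   # negative test
    with pytest.raises(ValueError):
        set_device_name("sensor'; DROP TABLE devices;--")

def test_set_device_name_rejects_oversized_input():     # negative test
    with pytest.raises(ValueError):
        set_device_name("a" * 1000)
```

Negative tests like these also serve as regression proof that a mitigation identified during the risk assessment actually stays in place across releases.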
Worried about this critical issue? You will need to establish a clear plan for secure design and engineering and seek specialized mitigation advice, whether from internal or external experts, to document your processes. A penetration test is essential; make sure to hire a specialist rather than a generalist, as your complex solution deserves competent professionals. Finally, consider delving into detection engineering, which, while not explicitly covered by the CRA, is worth exploring if you are serious about security being a key enabler of your solution (after all, the CRA requires logging and auditing of events, so you might as well put them to good use!).
The CE marking is essentially declarative, but the vendor must be able to provide the regulator with the documentation proving that CRA requirements have been fulfilled.
Good documentation not only covers the results and traces of all security efforts and the activities undertaken during the design process, but must also include something that is usually a challenge for our clients: a maintained, machine-readable software bill of materials (SBOM) covering at least the first level of dependencies.
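For readers unsure what "machine-readable" means in practice, below is a minimal sketch of a CycloneDX-style SBOM fragment written out as JSON from Python. The component names, versions, and file name are invented for illustration; in a real pipeline this file would be generated automatically by your build tooling or package manager rather than written by hand.

```python
# Minimal sketch of a CycloneDX-style SBOM fragment (illustrative content only).
# In practice, generate this automatically from the build system or package manager.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        "component": {"type": "firmware", "name": "acme-sensor-hub", "version": "2.3.1"}
    },
    "components": [
        # First level of dependencies, as expected in the CRA documentation
        {"type": "library", "name": "openssl", "version": "3.0.13",
         "purl": "pkg:generic/openssl@3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1",
         "purl": "pkg:generic/zlib@1.3.1"},
    ],
}

with open("sbom.cdx.json", "w") as f:
    json.dump(sbom, f, indent=2)
```

Whatever format you choose, the key properties are that the SBOM is kept up to date with every release and can be parsed by tools, not just read by humans.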
Security testing reports (such as penetration tests) are an essential part of the security documentation you may have to provide to the regulator. We believe that, instead of being seen as a checkbox test delivered by a more or less reputable brand, the pentest report should clearly outline the methodology and specialized skills employed during the assessment. Remember that the purpose of the assessment is to facilitate remediation. Be sure to document the remediation process, prioritize findings, and identify the roles responsible for addressing critical and high-priority issues. Also, include details about any re-testing that takes place.
Depending on the criticality of your mission, you may want to consult a lawyer or a specialized law firm to ensure that your documentation effectively addresses the regulations and can be readily used to respond to regulatory inquiries. Well-prepared documentation will serve as valuable and credible support for a lawyer if needed.
These test reports will also feed the mandatory security handbook, which should be as thorough and well documented as your solution’s security mission warrants. A link to the SBOM can be included in the user security handbook. Handbooks are covered in the next Critical point.
Along with the usual usage instructions, guides, and quick deployment materials, a specific section must inform users about the product’s security features and their companion processes. The handbook should cover existing security features and design choices of the product, as well as provide guidance on how to use the device securely within the target ecosystem to prevent undermining or invalidating these security measures.
The manufacturer should describe:
Anyone involved in security testing or penetration testing, whether a partner or a supplier, should have an idea of what a Security Handbook entails. Naturally, the product team is the most knowledgeable resource for this. Regardless of how skilled your security tester or expert is, collaborating with product management, the lead architect, and any security architects is crucial during the handbook’s development.
The CRA mandates that products must be free of known vulnerabilities when entering the market. However, the time gap between a product’s manufacturing or release and its actual deployment can pose a significant challenge since vulnerabilities may emerge during that period.
Depending on the product, an automated update process can be ideal and can ensure a satisfactory level of adoption of security updates. However, when selecting an update strategy, careful consideration should be given to sensitive products and/or products that are typically network-isolated.
For physical products, the hardware must be designed to minimize the risks associated with unpatchable vulnerabilities or defects and to allow the update of externally provided modules and firmware, as these can lead to significant risks and costs. Releasing different hardware versions is often undesirable; it may be unavoidable, but the product should be designed to reduce this risk to a minimum.
Under the CRA, vendors are obligated to keep their products secure by providing security updates and maintaining a vulnerability disclosure and management program with regular testing. This is essential, as the threat landscape is constantly evolving. What is secure today could be compromised tomorrow as new vulnerabilities are discovered or attack methods evolve.
Implementing a robust, secure, and automated update process, ideally over-the-air (OTA), is hugely beneficial for maintaining a product’s security posture. However, depending on the business case, automated or forced updates for software or products can introduce significant operational risks. We believe the end user should have the means to control the update process and be fully informed of the potential consequences (on the product’s security posture or on its deployment), thereby shifting responsibility away from the manufacturer. The recent large-scale impact of an insufficiently tested update is a perfect example of the factors that must be considered with an automatic update strategy. Not only must updates be subject to rigorous functional testing, but particular care should be taken not to mix security and functional updates, to avoid impacts on client deployments.
We strongly emphasize the importance of a secure update process. In our experience testing and hacking systems, vulnerabilities in the update process are frequently identified. Exploiting these weaknesses is often one of the quickest ways to gain administrative access (especially with embedded systems), extract internal code, reverse engineer it, and leverage this information to launch attacks on the backend, on not-yet-patched products, or on other connected devices.
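A common root cause is that the update image is fetched and applied without verifying who produced it. As a minimal sketch, assuming an Ed25519 manufacturer signing key and using the pyca/cryptography library (the key value, file names, and apply step are purely illustrative), an update client could verify the image signature before anything is flashed:

```python
# Minimal sketch: verify an update image's signature before applying it.
# Key management, file names, and the apply step are illustrative assumptions.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# Public key embedded in the device firmware at build time (hypothetical value)
MANUFACTURER_PUBLIC_KEY = bytes.fromhex(
    "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"
)

def verify_update(image_path: str, signature_path: str) -> bool:
    """Return True only if the image was signed by the manufacturer's key."""
    public_key = Ed25519PublicKey.from_public_bytes(MANUFACTURER_PUBLIC_KEY)
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    if verify_update("update.img", "update.img.sig"):
        print("Signature valid, update may be applied")
    else:
        print("Signature invalid, update rejected")
```

In a production design the same idea is usually complemented by anti-rollback checks and protection of the signing key in the manufacturer’s release infrastructure.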
The vendors will have to put a vulnerability disclosure and management process in place. This means that not only will they need to create and maintain a process that allows someone who discovers an issue to disclose it securely, but they must also be ready to analyze the disclosure, validate the finding, develop a corrective patch, validate and test it, deploy it, and communicate about the patch’s availability. All of this while maintaining communication with the disclosing party, and potentially within a fixed period. Putting this process in place, staffing it with adequate skills, and maintaining it can be a challenge.
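One small but concrete building block on the disclosure side is publishing contact and policy information where researchers expect to find it, for example a security.txt file as described in RFC 9116. The sketch below, with purely illustrative contact details and URLs, generates such a file; the hard part, of course, is the triage and patching process behind the mailbox.

```python
# Minimal sketch: generate a security.txt (RFC 9116) with illustrative contact details.
from datetime import datetime, timedelta, timezone

# RFC 9116 requires an Expires field; here it is set one year out.
expires = (datetime.now(timezone.utc) + timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")

security_txt = "\n".join([
    "Contact: mailto:security@example.com",              # illustrative address
    "Contact: https://example.com/vulnerability-report",  # illustrative URL
    f"Expires: {expires}",
    "Preferred-Languages: en",
    "Policy: https://example.com/security-policy",
    "Encryption: https://example.com/pgp-key.txt",
])

# Serve this at https://<your-domain>/.well-known/security.txt
with open("security.txt", "w") as f:
    f.write(security_txt + "\n")
```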
The CRA requires the vendor to provide the user with a documented process for removing all of their data (personal or otherwise) from the system.
In 2024, launching any product, whether software or hardware, comes with significant cybersecurity challenges. While security is an essential enabler, adhering to regulations is just as crucial for entering specific markets.
In the EU, the complexity is increasing, and we anticipate that the rest of the world will follow suit. While we are excited to finally see robust regulations aimed at enhancing product cybersecurity, we worry about the challenges posed by navigating multiple regulations.
The CRA is not the only regulation to consider; data regulations are also quite stringent. If you are eager to incorporate AI-driven solutions, and we are sure you, your boss, or your investors are, the AI Act is another critical factor to keep in mind. And let us not forget about privacy regulations like GDPR, which remain relevant.
Additionally, there is a unique intersection between principles-based regulations and the technical hurdles involved in creating appealing connected products and SaaS solutions for today’s market. Cybersecurity is inherently technical, but it is increasingly intertwined with legal requirements. Successfully overcoming these challenges demands a dual perspective, both legal and technical.
The CRA will significantly transform how cybersecurity is approached in digital products. This is not just a European initiative; the regulation mandates a level of cybersecurity maturity with which product managers may be unfamiliar. It represents a major shift in the environment and alters the total addressable market (TAM), affecting how products can be commercialized.
In our extensive careers, we have learned that every time a lawyer gets looped into a cyber deal, it is a signal of trouble. Today, things are different. Success looks like an inevitable marriage in which lawyers need to actively tackle and overcome challenges rather than simply oppose them, while technical experts must deliver solutions that align with legal requirements and satisfy the “principles”.
It may sound complicated, and it is. If you are in product or software management, it is time to recognize that your key allies might be both a lawyer in a suit and a tech expert with a hacking background in a hoodie.