
While computing systems rely on hardware as a "root-of-trust" for security, new vulnerabilities exist that cannot be patched in software alone. Security verification and the development of defense mechanisms must be pursued at the microarchitectural hardware level, utilizing formal methods specifically tailored for security verification.

Microarchitectures as Root-of-Trust in Computing Systems – Research Needs in Formal Security Analysis

by Wolfgang Kunz and Dominik Stoffel

System-on-Chips (SoCs) and embedded systems are ubiquitous in modern society. With their abundance of connectivity features they create a new attack surface for cyberattacks. Our trust in computing systems rests largely on the safety and security features provided by the underlying computing hardware. Although the majority of advanced security features, such as end-to-end encryption, are implemented at the software level, they rely on basic hardware primitives to deliver the intended functionality. In common terminology, such hardware primitives form the “root-of-trust” of the computing system. They constitute a set of trusted functionalities to ensure the security of the system. Any security flaw in the hardware root-of-trust can affect virtually all applications deployed on the system.

Hardware systems are difficult or, in some cases, even impossible to patch, which exacerbates the challenge of dealing with hardware security flaws. Countless reports in recent years on system vulnerabilities at the hardware level, e.g., [1], [2], attest to the fact that hardware security flaws can pose a genuine threat to the overall system security. The Common Weakness Enumeration database (CWE) [3] has acknowledged this problem by including hardware vulnerabilities as a separate category of security weaknesses.

The role of hardware in system security is not limited to providing security-related features to support software functions. Weaknesses in the hardware design itself can introduce severe vulnerabilities to the computing system. At the microarchitectural level, these weaknesses mostly have two sources: the hardware circuit executing a security-critical software application may leak confidential information through side channels, in particular timing of the software execution [4], or (possibly very subtle) design bugs escape conventional verification procedures and cause security risks for the entire system.

This article explores using formal methods for verifying hardware security, a shift from the traditional software-centric approach. Historically, cybersecurity focused on software vulnerabilities, but the 2018 discovery of hardware-based attacks like Spectre and Meltdown, exploiting ISA-invisible side channels such as transient execution side channels (TES), revealed the limits of software-only defenses. Subsequent attacks (e.g., [10], [11]) have further challenged the effectiveness of existing solutions and underscore the need for more comprehensive hardware security strategies.

Today, re-establishing trust in the microarchitectures of computing systems has become one of the main goals in the computer industry and among chip makers. Microarchitectural descriptions at the Register Transfer Level (RTL) are the point of reference for sign-off verification before the tape-out of a chip for manufacturing. They typically serve as the golden model of an SoC and are the basis for all design refinements at lower levels as well as for manufacturing.

Security verification and the development of defense mechanisms at the microarchitectural level have become rapidly growing research fields. There is general conviction that the formidable patch-and-pray cycles can only be overcome if comprehensive security guarantees are already provided during design and before tape-out. Formal verification holds the promise of providing such guarantees. However, currently available techniques are tailored towards general functional design aspects and suffer from severe limitations when targeting microarchitectural security and side channels.

Key insights

  • The immense attack surface exposed by microarchitectural security flaws and the vastness and diversity of SoC deployment domains necessitate extending formal methods from just functional correctness to systematic non-functional security verification. Research is vital to understand and formalize these diverse security risks and threats.

  • Conventional functional verification applied to individual design components cannot provide comprehensive security guarantees for extensive hardware systems interacting with software; vulnerabilities often arise from the integration of multiple components and specific hardware-software interactions.

  • The current solutions for mitigating security vulnerabilities often involve ad hoc processes, which require significant design alterations and necessitate communication with software developers. Moreover, while advanced security measures have been suggested to address timing side channels, they substantially escalate the manual design workload and introduce considerable hardware overhead, highlighting a pressing need for more refined, systematic, and comprehensive strategies.

Key recommendations

  • Invest in research and development of tools for exhaustive security analysis of SoC microarchitectures. Invest in the formalization of microarchitectural threat models as targets for a new generation of non-functional formal verification methods.

  • Devise, implement and adopt new design and verification methodologies based on these new tools, pursuing verification-driven, secure-by-construction design providing system-wide threat coverage.

  • Contribute to open-source and public-domain initiatives like the RISC-V ecosystem to demonstrate use of the new tools and methodologies and to facilitate their widespread adoption and dissemination.

Security Vulnerabilities

Security analysis must address a wide spectrum of potential hardware vulnerabilities. At the microarchitectural hardware level, there are two main categories: security-violating design bugs and microarchitectural timing side channels.

Security-violating design bugs are the subset of all design bugs that, besides violating the functional specification, violate a relevant security target. In principle, bugs can be detected by conventional functional verification. However, this requires the complete and correct specification of the entire design as well as of all its security mechanisms. Substantial effort is demanded from verification engineers to cover all security-relevant functional behaviors by a set of properties specified in a property specification language such as SVA (SystemVerilog Assertions). Even then, there may still be verification gaps remaining: security issues related to the communication between modules or to the interaction between hardware and firmware are easily missed [5].
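To illustrate the style of reasoning involved (in Python rather than SVA, and for a purely hypothetical toy design), the following sketch checks a security-relevant functional property exhaustively over a tiny input space, mimicking what a property checker does symbolically on RTL:

```python
# Toy model (hypothetical names and memory map) of a memory-protection
# rule, and an exhaustive check of the kind of security-relevant
# functional property a verification engineer would otherwise write
# as an SVA assertion.

PROTECTED = range(4, 8)  # addresses 4..7 are privileged-only

def mpu_allows(addr: int, privileged: bool) -> bool:
    """Access-control rule of the toy design: user mode must not
    touch the protected region."""
    return privileged or addr not in PROTECTED

def check_property() -> bool:
    """Enumerate the whole (tiny) input space, standing in for the
    symbolic exhaustiveness of a formal property checker."""
    for addr in range(8):
        for privileged in (False, True):
            granted = mpu_allows(addr, privileged)
            # Property: an unprivileged access to the protected
            # region must never be granted.
            if not privileged and addr in PROTECTED and granted:
                return False
    return True
```

The point of the example is the burden it hints at: every security-relevant behavior of the real design needs such a property, and any behavior without one is a potential verification gap.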

A design that is bug-free may still be vulnerable to side channels. At the microarchitectural level, side channels are based on timing. Although a program may not have access rights to a certain set of data, depending on this data, one and the same program may behave slightly differently in terms of its own computation results, i.e., what data it stores in which registers and at which time points. These differences only affect the detailed timing of the microarchitectural implementation and have no impact at the level of the instruction set architecture (ISA), i.e., they do not affect the correct functioning of the program as observed by the programmer. However, if these subtle alterations of the program’s execution at the microarchitectural level are caused by secret data to which the program must not have access, this may open a “side channel”. An attacker, owning (and creating) such a program, may trigger and observe these alterations to infer secret information. This is called a “microarchitectural side channel attack”.
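A deliberately simplified sketch (a hypothetical toy, not a real microarchitecture) makes the distinction concrete: the ISA-visible result is identical for all secrets, while the cycle count is not.

```python
# Toy illustration of a microarchitectural timing side channel:
# the program's ISA-level result is secret-independent, but a table
# lookup indexed by the secret hits or misses a 'cache', changing
# only the execution time. All latencies and names are hypothetical.

def run(secret: int, cached_lines: set[int]) -> tuple[int, int]:
    """Return (result, cycles) for one execution of the toy program."""
    line = secret % 4                            # cache line touched
    cycles = 1 if line in cached_lines else 100  # hit vs. miss latency
    result = 42                                  # independent of secret
    return result, cycles

r0, t0 = run(secret=0, cached_lines={0})
r1, t1 = run(secret=1, cached_lines={0})
assert r0 == r1   # indistinguishable at the ISA level
assert t0 != t1   # but the timing leaks information about the secret
```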

The following observation is key to classifying microarchitectural side channels and the corresponding verification targets (properties): In the conventional attack scenario the attacker process by itself is not capable of controlling both ends of a side channel. To steal secret information, it must interact with another process initiated by the system, the “victim process”, which manipulates the secret and “makes a noise”. Common defenses at the software level include constant-time encryption [6] and cache access pattern obfuscation [7]. They prohibit the information flow at the “sending end” of the channel, i.e., the one owned by the victim process.
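The constant-time discipline at the sending end can be illustrated with a small sketch (our illustration, not code from the cited defenses): a leaky comparison whose running time depends on where the first mismatch occurs, next to a constant-time variant whose control flow is independent of the data.

```python
# Leaky vs. constant-time byte-string comparison, sketching the idea
# behind software defenses such as constant-time encryption [6].

def leaky_equals(a: bytes, b: bytes) -> bool:
    # Early exit: the number of loop iterations (and thus timing)
    # depends on the position of the first mismatch.
    for x, y in zip(a, b):
        if x != y:
            return False
    return len(a) == len(b)

def ct_equals(a: bytes, b: bytes) -> bool:
    # Constant-time variant: always scans the full input and folds
    # all differences into an accumulator, independent of the data.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Both functions compute the same predicate; only the second closes the timing channel at the victim's end.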

Although securing encryption software against these attacks is challenging because it demands a deep understanding of microarchitectural details, in the past, the threat of microarchitectural side channels was generally perceived to be limited to a small set of software applications. This general intuition, however, was drastically changed by the discovery of transient execution side channel (TES) attacks.

Even though they use similar channels for exfiltrating information, TES attacks are fundamentally different from classical microarchitectural side channels. TES attacks exploit side effects of transient instruction execution, a phenomenon not visible in the sequential execution semantics of the ISA. Transient execution occurs when the processor speculatively executes instructions ahead of time that it needs to roll back if the speculation turns out to be wrong. Such transient instruction execution may leak secret data through timing side channels and is the root cause of TES attacks. Without affecting the ISA-level results of the program, the attacker triggers transient executions of instructions that depend on secret data. In this way, the attacker does not rely on a vulnerability within a victim software to make a noise. In fact, the TES attacker controls both ends of the channel: the part that triggers the side effect and sends out the information as well as the part that observes it. This makes TES attacks more threatening than the previously known timing side channels. In the TES scenario, a single user-level attacker program can establish a microarchitectural side channel leaking parts of the memory which are not accessed by any other program. Such hardware covert channels not only can destroy the usefulness of encryption and secure authentication schemes but can steal data from essentially anywhere in the system. As a result, unlike classical side channel attacks, TES attacks threaten the overall security of the system and its root of trust.
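The transient-execution mechanism can be caricatured in a few lines (a deliberately simplified, hypothetical model in the spirit of Spectre-style bounds-check bypass, not an actual processor):

```python
# Toy model of a transient execution side channel: a load executes
# speculatively before the bounds check resolves; its architectural
# result is squashed, but the microarchitectural footprint (here: a
# 'cache' set) is not rolled back and can be probed by the attacker.
# Memory layout, bound, and all names are hypothetical.

MEMORY = [0] * 16 + [1]   # index 16 holds the 'secret' bit
BOUND = 16                # architecturally accessible: indices 0..15

def victim(index: int, cache: set[int]):
    # Transient phase: the load happens before the check resolves.
    speculative_value = MEMORY[index]
    cache.add(speculative_value)     # microarchitectural side effect
    # Squash: the architectural result is discarded for bad indices...
    if index >= BOUND:
        return None                  # ...but `cache` is NOT rolled back
    return speculative_value

cache: set = set()
assert victim(16, cache) is None     # ISA-level access is denied
assert 1 in cache                    # yet the secret bit leaked
```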

Since the publication of the Spectre [8] and Meltdown [9] attacks in 2018, numerous new TES-based attacks have been discovered (e.g., MDS attacks [10], [11], [12], speculative store bypass [13], speculative interference [14]), with many of them targeting previously patched systems (e.g., the Fallout attack [10]), calling for new attention towards hardware security.

Needed: A Formalization of Threat Models

When considering both the vast scope of possible microarchitectural security threats and the great variety of deployment domains for SoCs, it becomes apparent that the attack surface is huge. Such a challenge can only be met if formal methods are extended beyond traditional functional correctness checking to non-functional security verification. This must be done in a systematic way such that the relevant threats in their large diversity can be described, and many different use scenarios for computing systems can be covered. Research is needed to characterize security risks and to develop and formalize the threats. In other words, we need a taxonomy of threat models. A threat model for hardware captures the security targets for a system in combination with a profile of the attackers.

Security targets for hardware most commonly are requirements of confidentiality or integrity. Confidentiality of hardware is given if all information stored or processed in the system is protected against being retrieved by an unauthorized entity. Integrity means preventing an attacker from changing or influencing a part of the system that is specified as protected.

The attacker profile makes assumptions about how attackers can access the system and what methods they can use to exploit potential vulnerabilities of the system. For example, an attacker may access the system by running an unprivileged user process. Another threat model may consider access to the security-critical system through a third-party IP which is added to the system and which the attacker controls. Specific threat models are the basis for the specification of verifiable properties. The challenge consists in formulating these properties in such a way that a large spectrum of different threat models is covered by a manageable set of properties.

Figure 1 illustrates the space of threat models that must be analyzed for specifying verifiable properties. The points associated with important threat models are marked in green color. Firstly, it is meaningful to distinguish between security-violating functional bugs and non-functional vulnerabilities such as side channels. This is the blue dimension in the shown space.

Secondly, the vertical dimension (red color) of the cube in Figure 1 distinguishes between vulnerabilities that occur only in cores and those requiring a global analysis of the entire System-on-Chip (SoC). For example, TESs, such as Spectre and Meltdown, only require an analysis of the core while the root cause of other types of timing side channels can be distributed over several locations of the SoC.

Thirdly, the horizontal axis of the cube separates threat models related to the security target of confidentiality from those related to integrity.

We give some examples of threat models related to specific points of the cube in Figure 1:

Threat Model 010

  • Security target: Confidentiality of data in protected memory locations

  • Attacker Profile: Attacker can run any program on the core with user-level privileges.

  • Class of vulnerabilities: Transient Execution Side Channel in cores

Note that several threat models can belong to each point in the space of Figure 1, for example:

Threat Model 101a

  • Security target: Integrity of information and integrity of operation in security-critical parts of SoC

  • Attacker Profile: Attacker controls a third-party IP which communicates with the security-critical SoC domain.

  • Class of vulnerabilities: Design bugs (insufficient protection mechanisms)

Threat Model 101b

  • Security target: Integrity of information and integrity of operation in security-critical parts of SoC

  • Attacker Profile: Attacker can inject faults by laser light anywhere in the SoC.

  • Class of vulnerabilities: Design bugs (insufficient protection mechanisms)
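As one possible illustration (our own sketch, not an established notation), such threat models could be captured as structured records that later serve as inputs for deriving formal properties; the field names below are hypothetical.

```python
# Recording threat models as structured data, so that each record can
# be mapped to a point in the space of Figure 1 and, eventually, to a
# set of formal properties. Field names are our own choice.

from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatModel:
    security_target: str      # e.g., confidentiality or integrity
    attacker_profile: str     # how the attacker accesses the system
    vulnerability_class: str  # e.g., "TES in cores", "design bug"
    scope: str                # "core" or "SoC" (vertical axis of Fig. 1)

# Threat Model 010 from the text, as a record:
tm_010 = ThreatModel(
    security_target="confidentiality of data in protected memory",
    attacker_profile="user-level program running on the core",
    vulnerability_class="transient execution side channel",
    scope="core",
)
```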

Hardware security engineers conduct threat analyses to create relevant threat models for specific designs and deployment domains. These models form the basis for verification engineers to define verification targets as formal properties, specified in languages like SVA. It is crucial to formalize threat models globally, aiming to cover a broad spectrum of vulnerabilities and attacker profiles, including "unknown unknowns", i.e., yet undiscovered vulnerabilities. The overall goal is to create generic, re-usable verification IPs.

Needed: New Tools

Late detection of hardware security flaws can incur tremendous costs, calling for new verification techniques developed specifically for hardware security. Rather than being exhaustive with respect to a complete functional design specification, the new tools must be exhaustive with respect to a well-defined threat model and deliver well-defined security guarantees. The new tools must be scalable and, at the same time, amenable to adoption by current industrial hardware design flows.

While conventional functional verification of a design, in principle, also avoids security-critical bug escapes, the manual effort for such an exercise is prohibitively large. The state-of-the-art flow for security verification based on functional property checking is shown in the left part of Figure 2. The targeted security features (Box II) typically result from a high-level, architectural perspective. The design specification is extended by an additional functional specification (Box III) of these security features which guides their integration into the RTL implementation. This is followed by functional verification procedures (Box IV). It turns out, however, that this classic approach is not always sufficient. Not only does conventional functional verification miss side channels, but the abstract security requirements can also be extremely difficult to map to functional specifications, requiring a detailed, microarchitectural understanding of security threats. Therefore, the specification itself (Box II) may fail to cover relevant aspects of the global threat model. Experience shows that, even when choosing security features conservatively, the conventional design process can miss subtle, yet hazardous security gaps and gives rise to the widespread complaint about a never-ending “patch-and-pray” cycle.

Figure 2: Functional vs. Non-functional verification flow for security

Therefore, new approaches to formal hardware security verification are needed that target security properties directly. Such a flow is shown on the right side of Figure 2. Instead of developing a detailed (and error-prone) functional specification, this approach starts from the applicable threat model (Box I) and formalizes security requirements rather than detailed functional behaviors. This leads to specifying non-functional security properties (Box VI) which are orthogonal to conventional functional specifications. Since these properties directly target global security requirements without the need of a functional specification for the intended defense mechanisms, these methods have the potential to cover security breaches (both functional breaches and side channels) which are easily missed by the conventional approach. Both conventional solvers for functional verification as well as specialized solvers, such as [15], can serve as a basis to extend formal hardware verification for such non-functional targets (Box VII).

In addition to functional or explicit information leakages that violate the security requirements, implicit information flows through timing side channels must also be considered. This complicates the verification process significantly because the functional specification, which is untimed, cannot express such a requirement. Therefore, there is a lack of proper specification techniques for security against timing side channels.
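One way such non-functional requirements are commonly phrased is as a two-instance ("self-composition") check, in the spirit of unique program execution checking [40]: run two copies of the design that differ only in the secret; if any attacker-observable signal, including timing, ever differs, there is a side channel. The toy design and function names below are hypothetical.

```python
# Two-instance sketch of a non-functional confidentiality property:
# equal public inputs plus different secrets must yield identical
# observations (result AND cycle count). The toy core is hypothetical.

def toy_core(secret: int, public_input: int) -> tuple[int, int]:
    """Return (observable_result, cycle_count) of one toy operation."""
    result = public_input * 2        # the result ignores the secret
    cycles = 3 + (secret & 1)        # ...but the latency does not!
    return result, cycles

def leaks(design, public_inputs, secrets) -> bool:
    """Compare two instances over all input pairs; any observable
    difference caused only by the secret indicates a side channel."""
    for p in public_inputs:
        for s1 in secrets:
            for s2 in secrets:
                if design(s1, p) != design(s2, p):
                    return True
    return False

assert leaks(toy_core, range(4), range(4))  # timing difference detected
```

A formal tool performs this comparison symbolically and exhaustively over the RTL state space rather than by enumeration, but the property being checked has the same two-instance shape.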

New formal tools are needed which target the threats by timing side channels in a systematic way. This research can build upon initial successes in academia to detect transient execution side channels but must extend the scope to all other side channels relevant at the microarchitectural level. The new methods must be capable of handling different architectures ranging from simple in-order processors without speculation to out-of-order processors with speculation.

Needed: New Flows and Methodologies

Running a single tool on a specific design or design component can never lead to global security guarantees formulated for a large hardware system and its interface with software. Instead, such guarantees must result from a security-driven flow that combines the results of different methods and tools across components and design layers. The needed extensions to today’s flows have a “horizontal” and a “vertical” dimension.

The horizontal dimension concerns the structural composition of hardware designs consisting of multiple interconnected modules. Many security issues are introduced into the system through the integration of these components, and a vulnerability in one component may only be exploited through its communication with other components. Detecting such vulnerabilities requires analyzing information flows across multiple components which is usually a computationally expensive task for formal verification techniques.

This calls for new scalable verification methodologies exploiting specific advantages of different methods to cover system-wide security for a given threat model. The new methodologies are required to combine the results of different tools to compose global guarantees on the entire system. Formal verification in such a setting can be based on sound abstraction techniques over different stages of the design flow.
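The kind of cross-component information-flow reasoning involved can be sketched with a minimal taint-tracking model (our illustration in Python rather than RTL; all component names are hypothetical):

```python
# Minimal taint-propagation sketch across component boundaries:
# values derived from protected information carry a taint bit, and a
# check at the boundary to an untrusted third-party IP flags any
# flow of tainted data out of the security-critical domain.

from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    data: int
    tainted: bool  # True if derived from protected information

def alu(a: Value, b: Value) -> Value:
    # Taint propagates through computation inside a component.
    return Value(a.data + b.data, a.tainted or b.tainted)

def bus_to_untrusted_ip(v: Value) -> bool:
    """Boundary check: the flow is permitted only if untainted."""
    return not v.tainted

secret = Value(7, tainted=True)
public = Value(3, tainted=False)
assert bus_to_untrusted_ip(alu(public, public))      # permitted
assert not bus_to_untrusted_ip(alu(secret, public))  # flagged
```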

The vertical dimension concerns the Hardware/Software Interface. Many hardware security issues are only exposed if they are triggered by a specific interaction between hardware and software. Formal verification of such mechanisms demands proper modeling of hardware/software interaction.

Needed: Verification-Driven/Secure-by-Construction Design

Experience in industry and academia shows that most SoC hardware designs suffer from numerous security flaws based on both microarchitectural timing side channels and functional design bugs. Fixing design bugs is usually an ad hoc process which solves the problem by making design changes and/or communicating possible restrictions for the software layer with the software developers. Fixing timing side channels is a more demanding procedure. Advanced security features have been proposed, such as those based on information flow tracking [16], [17], that promise effective measures against these vulnerabilities. However, this comes with significant costs: the manual RTL design effort increases drastically, and the new architectures come with a significant hardware overhead that so far has only been estimated based on abstract system simulations. Only a few RTL architectures with security guarantees for side channels have been proposed.

Clearly, research is needed to explore new security architectures at the RTL. Formal security analysis can make a major contribution to developing new design methodologies leading to new security architectures. Formal verification precisely and exhaustively determines all attack scenarios that are relevant under the specified threat model. This knowledge can be very valuable to i) determine the root causes of the security weakness and ii) to derive fixes that avoid excessive conservatism. Note that, without the knowledge of formal tools, current measures for security often employ “blanket fixes” that cover a large (but not fully understood) spectrum of weaknesses.

AUTHORS

Wolfgang Kunz and Dominik Stoffel are professors in the Department of Electrical and Computer Engineering at the University of Kaiserslautern, Germany.

Sponsored by the German Agentur für Innovation in der Cybersicherheit GmbH.

REFERENCES

[1]: S. J. Vicarte, M. Flanders, R. Paccagnella, G. Garrett-Grossman, A. Morrison, C. Fletcher and D. Kohlbrenner, "Augury: Using data memory dependent prefetchers to leak data at rest," in IEEE Symposium on Security and Privacy (SP), 2022.
[2]: M. Gross, N. Jacob, A. Zankl and G. Sigl, "Breaking trustzone memory isolation through malicious hardware on a modern FPGA-SoC," in Proceedings of the 3rd ACM Workshop on Attacks and Solutions in Hardware Security Workshop, 2019.
[3]: "Common Weakness Enumeration," [Online]. Available: https://cwe.mitre.org/.
[4]: Y. Yarom and K. Falkner, "FLUSH+RELOAD: A High Resolution, Low Noise, L3 Cache Side-Channel Attack," in USENIX Security Symposium, 2014.
[5]: G. Dessouky, D. Gens, P. Haney, G. Persyn, A. Kanuparthi, H. Khattri, J. M. Fung, A.-R. Sadeghi and J. Rajendran, "Hardfails: Insights into software-exploitable hardware bugs," in USENIX Security Symposium, 2019.
[6]: D. Jayasinghe, R. Ragel and D. Elkaduwe, "Constant time encryption as a countermeasure against remote cache timing attacks," in IEEE 6th International Conference on Information and Automation for Sustainability (ICIAfS), 2012.
[7]: J. Kong, O. Aciicmez, J.-P. Seifert and H. Zhou, "Deconstructing new cache designs for thwarting software cache-based side channel attacks," in Proceedings of the 2nd ACM workshop on Computer security architectures, 2008.
[8]: P. Kocher, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz and Y. Yarom, "Spectre attacks: Exploiting speculative execution," arXiv preprint arXiv:1801.01203, 2018.
[9]: M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, S. Mangard, P. Kocher, D. Genkin, Y. Yarom and M. Hamburg, "Meltdown," arXiv preprint arXiv:1801.01207, 2018.
[10]: C. Canella, D. Genkin, L. Giner, D. Gruss, M. Lipp, M. Minkin, D. Moghimi, F. Piessens, M. Schwarz, B. Sunar, J. Von Bulck and Y. Yarom, "Fallout: Leaking Data on Meltdown-resistant CPUs," in Proc. ACM Conference on Computer and Communications Security (CCS), 2019.
[11]: M. Schwarz, M. Lipp, D. Moghimi, J. Van Bulck, J. Stecklina, T. Prescher and D. Gruss, "ZombieLoad: Cross-privilege-boundary data sampling," arXiv preprint arXiv:1905.05726, 2019.
[12]: S. van Schaik, A. Milburn, S. Österlund, P. Frigo, G. Maisuradze, K. Razavi, H. Bos and C. Giuffrida, "RIDL: Rogue In-Flight Data Load," in IEEE Symposium on Security and Privacy (S&P), 2019.
[13]: V. Kiriansky and C. Waldspurger, "Speculative buffer overflows: Attacks and defenses," arXiv preprint arXiv:1807.03757, 2018.
[14]: M. Behnia, P. Sahu, R. Paccagnella, J. Yu, Z. Zhao, X. Zou, T. Unterluggauer, J. Torrellas, C. Rozas, A. Morrison and others, "Speculative interference attacks: Breaking invisible speculation schemes," arXiv preprint arXiv:2007.11818, 2020.
[15]: K. v. Gleissenthall, R. G. Kıcı, D. Stefan and R. Jhala, "IODINE: Verifying Constant-Time Execution of Hardware," in 28th USENIX Security Symposium (USENIX Security 19), 2019.
[16]: J. Yu, M. Yan, A. Khyzha, A. Morrison, J. Torrellas and C. W. Fletcher, "Speculative taint tracking (STT) a comprehensive protection for speculatively accessed data," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019.
[17]: K. Loughlin, I. Neal, J. Ma, E. Tsai, O. Weisse, S. Narayanasamy and B. Kasikci, "DOLMA: Securing Speculation with the Principle of Transient Non-Observability," in 30th USENIX Security Symposium (USENIX Security 21), 2021.
[18]: Y. Yarom, D. Genkin and N. Heninger, "CacheBleed: a timing attack on OpenSSL constant-time RSA," Journal of Cryptographic Engineering, vol. 7, p. 99–112, 2017.
[19]: C. Percival, "Cache missing for fun and profit," in BSDCan, 2005.
[20]: D. Gullasch, E. Bangerter and S. Krenn, "Cache games–Bringing access-based cache attacks on AES to practice," in IEEE Symposium on Security and Privacy (SP), 2011.
[21]: A. Purnal, F. Turan and I. Verbauwhede, "Prime+ Scope: Overcoming the Observer Effect for High-Precision Cache Contention Attacks," in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021.
[22]: P. Pessl, D. Gruss, C. Maurice, M. Schwarz and S. Mangard, "DRAMA: Exploiting DRAM Addressing for Cross-CPU Attacks," in 25th USENIX Security Symposium (USENIX Security 16), 2016.
[23]: O. Aciicmez and J.-P. Seifert, "Cheap hardware parallelism implies cheap security," in Workshop on Fault Diagnosis and Tolerance in Cryptography (FDTC), 2007.
[24]: P. C. Kocher, J. Jaffe and B. Jun, "Differential Power Analysis," Advances in Cryptology, pp. 388-397, 1999.
[25]: Y. Liu, L. Wei, Z. Zhou, K. Zhang, W. Xu and Q. Xu, "On code execution tracking via power side-channel," Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, p. 1019–1031, 2016.
[26]: O. Reparaz, B. Bilgin, S. Nikova, B. Gierlichs and I. Verbauwhede, "Consolidating Masking," CRYPTO, 2015.
[27]: D. Jayasinghe, A. Ignjatovic, J. A. Ambrose, R. Ragel and S. Parameswaran, "QuadSeal: Quadruple algorithmic symmetrizing countermeasure against power based side-channel attacks," International Conference on Compilers, Architecture and Synthesis for Embedded Systems (CASES), pp. 21-30, 2015.
[28]: V. Arribas, S. Nikova and V. Rijmen, "VerMI: Verification Tool for Masked Implementations," IEEE International Conference on Electronics, Circuits and Systems (ICECS), pp. 381-384, 2018.
[29]: R. Bloem, H. Gross, R. Iusupov, B. Könighofer, S. Mangard and J. Winter, "Formal Verification of Masked Hardware Implementations in the Presence of Glitches," Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 321-353, 2018.
[30]: ARM TrustZone Technology.
[31]: D. Lee, D. Kohlbrenner, S. Shinde, K. Asanović and D. Song, "Keystone: An Open Framework for Architecting Trusted Execution Environments," in EUROSYS, 2020.
[32]: "The seL4® Microkernel," [Online]. Available: https://sel4.systems/.
[33]: A. Ferraiuolo, M. Zhao, A. C. Myers and G. E. Suh, "HyperFlow: A processor architecture for nonmalleable, timing-safe information flow security," in ACM SIGSAC Conf. on Computer & Communications Security, 2018.
[34]: RISC-V Foundation, The RISC-V Instruction Set Manual, Volume II: Privileged Architecture, Version 1.10, A. Waterman and K. Asanović, Eds., 2017.
[35]: R. Baranowski, M. A. Kochte and H.-J. Wunderlich, "Reconfigurable Scan Networks: Modeling, Verification, and Optimal Pattern Generation," ACM Trans. Des. Autom. Electron. Syst., vol. 20, pp. 30:1-30:27, March 2015.
[36]: G. Cabodi, P. Camurati, F. Finocchiaro and D. Vendraminetto, "Model Checking Speculation-Dependent Security Properties: Abstracting and Reducing Processor Models for Sound and Complete Verification," in Intl. Conf. on Codes, Cryptology, & Information Security, 2019.
[37]: G. Cabodi, P. Camurati, S. F. Finocchiaro, F. Savarese and D. Vendraminetto, "Embedded Systems Secure Path Verification at the HW/SW Interface," IEEE Design & Test, vol. 34, p. 38–46, 2017.
[38]: P. Subramanyan and D. Arora, "Formal verification of taint-propagation security properties in a commercial SoC design," in Design and Test In Europe (DATE), 2014.
[39]: M. R. Clarkson and F. B. Schneider, "Hyperproperties," Journal of Computer Security, vol. 18, p. 1157–1210, 2010.
[40]: M. R. Fadiheh, D. Stoffel, C. Barrett, S. Mitra and W. Kunz, "Processor Hardware Security Vulnerabilities and their Detection by Unique Program Execution Checking," in IEEE Design Automation and Test in Europe (DATE), 2019.
[41]: M. R. Fadiheh, A. Wezel, J. Mueller, J. Bormann, S. Ray, J. M. Fung, S. Mitra, D. Stoffel and W. Kunz, "An Exhaustive Approach to Detecting Transient Execution Side Channels in RTL Designs of Processors," IEEE Transactions on Computers, 2022.
[42]: J. Müller, M. R. Fadiheh, A. L. Duque Anton, T. Eisenbarth, D. Stoffel and W. Kunz, "A Formal Approach to Confidentiality Verification in SoCs at the Register Transfer Level," in IEEE/ACM Design Automation Conference (DAC), 2021.
[43]: M. Goli and R. Drechsler, "Early SoCs Information Flow Policies Validation Using SystemC-Based Virtual Prototypes at the ESL," ACM Transactions on Embedded Computing Systems, https://doi.org/10.1145/3544780, 2022.
[44]: R. Guanciale, M. Balliu and M. Dam, "InSpectre: Breaking and fixing microarchitectural vulnerabilities by formal analysis," in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020.
[45]: S. A. Seshia and P. Subramanyan, "UCLID5: Integrating modeling, verification, synthesis and learning," in 2018 16th ACM/IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE), 2018.
[46]: C. Trippel, D. Lustig and M. Martonosi, "CheckMate: Automated synthesis of hardware exploits and security litmus tests," in 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2018.

The HiPEAC project has received funding from the European Union's Horizon Europe research and innovation funding programme under grant agreement number 101069836. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.