
Challenges in EDA: from operational research techniques to Artificial Intelligence strategies for chip design

by Alix Munier Kordon and Lilia Zaourar

Artificial Intelligence (AI) could be pivotal in reducing the complexity of the EDA flow by intelligently narrowing the exploration space, complementing combinatorial optimization approaches. One of its notable advantages is that it does not demand a high level of tool expertise from the user, making it accessible and user-friendly. This democratization of expertise ensures that individuals without specialized knowledge can benefit from AI-driven solutions. Moreover, AI could improve efficiency by reducing turnaround time through effective iterations and parallel runs. This acceleration in the workflow enhances productivity and allows for more thorough exploration and optimization, ultimately leading to more informed decision-making.

Key insights

  • Nowadays, chips are everywhere in our daily lives, from large computing facilities to interconnected objects. This ubiquity poses significant challenges, as chips need to be ever more sophisticated.

  • Managing and optimizing the myriad of possibilities and configurations involved in designing integrated circuits induces a combinatorial explosion in chip design, as complexity grows exponentially with the number of elements and interactions. Addressing this complexity requires sophisticated tools and algorithms to navigate the vast design space, optimize performance, and ensure efficient chip functionality.

  • There is a demand and need for automated and high-performance tools to speed up the design process of chips while ensuring their quality.

  • AI-inspired techniques in chip design could offer more dynamic, data-driven, and efficient decision-making approaches in addition to the usual operational research and combinatorial methods, thus contributing to advancements in performance, power efficiency, and overall design quality.

Key recommendations

  • Shift towards hybridization between AI and combinatorial optimization strategies to preserve and maintain explainability while taking advantage of available data in electronic design automation flow. This hybridization aims to combine the strengths of AI, which excels in handling complex patterns and large datasets, with combinatorial optimization strategies known for their transparency and interpretability. By striking this balance, the goal is to harness the benefits of AI's data-driven capabilities while ensuring that the decision-making process remains understandable and explainable, a crucial factor in chip design.

  • Facilitating shared data exchange between academic researchers and industry would foster collaboration in developing advanced methods tailored to address diverse challenges encountered during the EDA design phases. This cooperative approach allows academic and industrial entities to leverage their expertise and resources, collectively contributing to creating high-performance solutions for various design problems. By sharing common data, the collaborative ecosystem benefits from a rich pool of information and experiences, ultimately enhancing the efficiency and effectiveness of methods applied in the design processes.

  • Tool interoperability in the EDA flow is indispensable for achieving a cohesive, efficient, collaborative chip design process. It enables designers to harness the strengths of diverse tools while ensuring a seamless exchange of information, flexibility and tool diversity, holistic design exploration, and efficient data exchange, ultimately contributing to developing high-quality semiconductor products.

  • The shift towards multi-criteria considerations in chip design entails expanding the range of factors beyond traditional metrics such as surface area, power consumption, and time. This evolution involves incorporating additional criteria, including but not limited to the environmental impact, such as carbon footprint. This broader perspective reflects a growing awareness of the need to assess and optimize semiconductor designs for performance and efficiency, with a focus on sustainability and environmental considerations.

Introduction

Moore's Law[1] has been a guiding principle for the semiconductor industry for several decades. Even if it is an empirical trend based on historical observations, it has dictated the evolution of semiconductors and has been pushing the community to the extreme. This "law" forms the cornerstone of the computer industry, greatly impacting the revolution in chip design. As a result, the complexity of semiconductor chips follows a trend of almost doubling each year, adding substantial intricacy to their design.

Recent high-end processor chips integrate more than 100 billion transistors, and Cerebras even integrates 2.6 trillion transistors on its 850,000-core Wafer-Scale Engine.

Over time, sustaining this rate of transistor doubling has become increasingly challenging due to physical and technological limitations, leading to a considerable combinatorial explosion. As semiconductor technology advances, it becomes more difficult and costly to maintain the same rate of transistor density increase. This has led to innovations such as 3D stacking, new materials, and alternative computing architectures to extend the capabilities of microchips.

How about the complexity battle?

Meanwhile, as technology nodes advance and application demands become more intricate, a concomitant increase in constraints becomes apparent. These constraints encompass a spectrum of factors, including reliability, power efficiency, physical size, ageing characteristics, and yield optimization. Notably, the costs associated with technological advancements follow an exponential growth pattern: the average expense of chip design reaches approximately 300 million dollars at the 7-nanometer node, six times more than at the 28-nanometer node. This highlights the financial investments required to develop chips at increasingly advanced technological nodes.

What are our solutions today?

On the one hand, large and experienced engineering teams, distributed across continents, are required to meet the growing need for high-quality design: Intel, for example, had 116,000 technical employees in 2021, and the US government expects 89,000 additional US-based design workers by 2030, a workforce that will contribute significantly to advancements in semiconductor technology. On the other hand, Electronic Design Automation (EDA) tools play a key role in streamlining the design process, reducing complexity, and saving time.

Furthermore, the industry has witnessed a notable emphasis on the reuse of validated and packaged functions. This trend underscores the growing efficiency and optimization strategies within the semiconductor design landscape [8].

Figure 1 How to deal with complexity?

How do EDA tools deal with complexity?

The evolution of tools faces considerable challenges in keeping pace with the escalating complexity inherent in semiconductor design, thereby contributing to a notable surge in design costs. The limitations of Electronic Design Automation (EDA) tools become more pronounced as design complexity intensifies, introducing a level of unpredictability to the quality of the design.

The solutions generated by these tools are often suboptimal, with unknown margins to the optimum, fostering a landscape where trade-offs between design technology, quality, cost, and time become increasingly intricate. Particularly with large chips, the influence of variability and technological phenomena adds layers of complexity, rendering designs more reliant on the experience and insights of individual designers. The feedback loop on design choices, critical for achieving functional silicon, requires frequent manual interventions in the design flow, further complicating the process.

The tools themselves, though powerful, are complex to use and to tune, requiring the employment of numerous strategies to navigate the intricacies of design complexity.

Consequently, the design time becomes a variable that is difficult to predict, and convergence with hard constraints becomes a challenging task. The sheer volume of data generated in the design process adds an additional layer of complexity, presenting difficulties in storing and parsing the data and selecting the most pertinent information. As a result, larger investments are required, not only in the tools themselves but also in hardware emulators and FPGA prototypes, underscoring the multifaceted challenges posed by the evolving landscape of semiconductor design.

Figure 2 Electronic Design Automation Flow.

In that scope, AI techniques for Electronic Design Automation (EDA) help face these challenges on various aspects such as runtime and computing resources. To alleviate them, Machine Learning methods are incorporated into the design process of EDA tools, complementing traditional strategies [9], as explained in the next sections.

Revisiting the design flow

The Very Large-Scale Integration (VLSI) design flow is made up of a series of steps, each of which constitutes a problem in its own right. Most of the conventional combinatorial optimization challenges emerge at various phases of VLSI design. In this field, methods based on Machine Learning techniques have demonstrated some improvement compared with traditional combinatorial optimization methods. The design flow of digital systems can be roughly divided into six steps, as depicted in Figure 2 (Electronic Design Automation Flow).

  1. Architectural design: this first step transforms the integrated circuit's high-level description into a Register Transfer Level (RTL) description. It includes system-level Design Space Exploration (DSE) and High-Level Synthesis (HLS), two complementary steps. The main outcome is an abstract specification of the IC.

  2. Functional design and logic design: this part transforms the RTL description of a circuit into a gate-level representation in the target technology and ensures its functionality. In this step, a number of transformations are applied to the design for logic optimization and minimization. The outcome is a design represented as a netlist, typically visualized as a graph of components and connections.

  3. Circuit design: this phase transitions the graph-based representation produced by logic synthesis (components and connections) into a geometrical representation characterized by the shapes of materials. Graph Neural Networks (GNNs) are well suited to interpreting and processing such graph-based representations. The geometrical representation is often visualized as images, so leveraging advances in computer vision, particularly in image classification and transformation, becomes integral to the physical design process.

  4. Physical design: this phase (usually called the floorplanning step) plays a crucial role in transforming the design from the graph-based representation formed during logic synthesis (components and connections) into a geometrical representation consisting of material shapes, called the layout.

  5. Fabrication: lithography and mask synthesis: In contemporary VLSI manufacturing, lithography holds a pivotal role, influencing both the printing resolution and the overall robustness of the manufacturing process. This step turns the designed circuit and layout into real objects. It involves two essential stages: mask synthesis and lithography simulation. Mask synthesis takes a layout design as input and generates a mask design with enhanced printability. Subsequently, lithography simulation utilizes the mask design to compute the printed pattern using lithography models.
    Since mask designs can naturally be portrayed as visual images, Machine Learning techniques, particularly Convolutional Neural Networks (CNNs), are well suited to addressing lithography challenges such as mask synthesis, modelling, and lithography hotspot detection. Machine learning can also be applied to diverse manufacturing tasks, including yield estimation, with various speed-ups as presented in [4]. Surrogate models that estimate the yield for given design parameters are also common.

  6. Physical verification and test: verification and testing of a circuit are complicated and expensive due to the high complexity of coverage requirements. Verification is conducted at each stage of the EDA flow to ensure that the designed chip has the correct function, whereas testing is necessary for a fabricated chip. Verification and testing therefore share common ideas and strategies while facing similar challenges from different perspectives. For instance, with the diversity of applications and the complexity of designs, traditional formal/specification verification and testing may only meet some demands. Mostly, verification is performed using simulations: the design is exercised with input stimuli, and its outputs are compared to golden outcomes. The goal is high coverage, i.e., a high fraction of the design's functions exercised by the tests, which can only be achieved by many simulations with various stimuli. Two challenges arise from this: first, the required simulation time is high; second, creating stimuli that achieve high coverage is difficult. ML has been employed for both of these challenges.
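To make the coverage problem above concrete, here is a minimal, hypothetical sketch of coverage-driven random stimulus generation. The function `toy_circuit` and the coverage model (one coverage point per distinct input vector) are illustrative stand-ins for a real simulator and a real coverage metric, not taken from any EDA tool:

```python
import random

def toy_circuit(a: int, b: int, c: int) -> int:
    # Hypothetical 3-input combinational function standing in for the design under test.
    return (a & b) | (a ^ c)

def coverage_driven_simulation(target_coverage, max_runs=10_000, seed=0):
    """Feed random stimuli until the fraction of distinct input vectors exercised
    (a deliberately simple coverage model) reaches the target."""
    rng = random.Random(seed)
    total_points = 2 ** 3          # one coverage point per possible input vector
    covered = set()
    runs = 0
    while len(covered) / total_points < target_coverage and runs < max_runs:
        stim = tuple(rng.randint(0, 1) for _ in range(3))
        toy_circuit(*stim)         # "simulate" the design with this stimulus
        covered.add(stim)          # record the coverage point the stimulus hits
        runs += 1
    return len(covered) / total_points, runs

coverage, runs = coverage_driven_simulation(target_coverage=1.0)
```

Even this toy version shows the two challenges named above: purely random stimuli need many runs to close coverage, which is exactly where ML-guided stimulus generation aims to help.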

Hybridization between combinatorial optimization and machine learning

The community of combinatorial optimization researchers is interested in developing exact or approximate methods to solve discrete problems that can be formulated with integer or binary variables. A cost function must then be optimized in a fixed space described using a set of equations.

The problems considered are usually NP-hard; for these, no efficient exact algorithm is known (and, unless P = NP, none exists). This is why, to handle large instances, heuristics that build approximate solutions are usually considered.

Metaheuristics have been developed since the 1960s and perfected to solve large classes of combinatorial optimization problems concretely and easily: among them, genetic algorithms, simulated annealing, ant colony optimization, tabu search, etc.
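As an illustration, the sketch below applies one of these metaheuristics, simulated annealing, to a toy one-dimensional placement problem: cells are ordered on a line and the cost is the total span of the nets connecting them. The cells, nets, and cost model are hypothetical, not taken from any real EDA tool:

```python
import math
import random

def wirelength(order, nets):
    # Cost model: total span of each net when cells sit at the positions in `order`.
    pos = {cell: i for i, cell in enumerate(order)}
    return sum(max(pos[c] for c in net) - min(pos[c] for c in net) for net in nets)

def simulated_annealing(cells, nets, t0=10.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    order = list(cells)
    cost = wirelength(order, nets)
    best, best_cost = order[:], cost
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # propose swapping two cells
        new_cost = wirelength(order, nets)
        # Always accept improvements; accept worsenings with probability e^(-delta/T).
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]    # reject: undo the swap
        t *= cooling                                   # cool the temperature
    return best, best_cost

cells = ["a", "b", "c", "d", "e"]
nets = [("a", "c"), ("b", "e"), ("c", "d"), ("a", "e")]   # hypothetical 2-pin nets
best_order, best_cost = simulated_annealing(cells, nets)
```

The acceptance of occasional cost-increasing moves at high temperature is what lets the search escape local minima, which a greedy swap heuristic cannot do.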

In 2022, researchers in combinatorial optimization and machine learning met at Dagstuhl [3] to build on the emergence of data-driven combinatorial optimization: the goal was a hybridization of machine learning methods with those of combinatorial optimization, improving both approaches. The idea is to develop a mixed approach that achieves the best of both worlds.

Researchers in combinatorial optimization have identified several problems when transposing their methods to the field of machine learning. Among them is scalability: optimization methods generally do not scale to data sets as large as those used in machine learning. Another problem is that evaluating objective functions using machine learning methods does not guarantee any bound on the distance of the obtained solutions to the optimum; moreover, the results obtained are not explainable.

On the other hand, a set of techniques from machine learning should in the future benefit the solution of combinatorial optimization problems and be integrated directly into Mixed Integer Linear Programming (MILP) solvers.

For example, machine-learning-based processing should make it possible to improve the choice of variables to branch on during the exploration phases when solving a problem expressed as a MILP. Likewise, data preprocessing should make it possible to fix the values of a set of secondary parameters and thus reduce the size of the systems to be solved [4].
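A minimal sketch of this idea, on a toy 0/1 knapsack rather than a full MILP: the `score` callback stands in for a learned branching policy that ranks undecided variables. Everything here is illustrative, not a real solver interface:

```python
def branch_and_bound(values, weights, capacity, score):
    """Toy 0/1 knapsack solved by branch-and-bound. The `score` callback ranks
    undecided variables and stands in for a learned branching policy."""
    n = len(values)
    best = [0]  # best objective value found so far

    def bound(fixed, gained):
        # Optimistic bound: pretend every undecided item still fits.
        return gained + sum(values[i] for i in range(n) if i not in fixed)

    def recurse(fixed, used, gained):
        best[0] = max(best[0], gained)
        undecided = [i for i in range(n) if i not in fixed]
        if not undecided or bound(fixed, gained) <= best[0]:
            return  # nothing left to decide, or the bound proves no improvement
        i = max(undecided, key=score)           # guided variable selection
        if used + weights[i] <= capacity:       # branch x_i = 1
            recurse(fixed | {i}, used + weights[i], gained + values[i])
        recurse(fixed | {i}, used, gained)      # branch x_i = 0

    recurse(frozenset(), 0, 0)
    return best[0]

values, weights, capacity = [6, 10, 12], [1, 2, 3], 5
# A value-density score stands in for a trained model's variable ranking.
opt = branch_and_bound(values, weights, capacity, score=lambda i: values[i] / weights[i])
```

A good `score` orders the branching so that strong incumbent solutions are found early, letting the bound prune more of the tree; this is precisely the role a learned policy would play inside a MILP solver.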

Using generative AI for the design flow

The tools developed by artificial intelligence researchers make it possible to free us entirely from explicit models. Conversational agents based on Large Language Models (LLMs), such as ChatGPT, available free of charge to a large audience, have a huge impact on academia, health, employment, etc.

The impact of this technology on the optimization of embedded systems remains to be studied. LLMs work best when used interactively: MyCrunchGPT [5], for instance, is an overlay to ChatGPT that provides an interactive, natural-language environment to guide the user through the process of designing an optimized solution.

The authors show how this class of tools can solve various engineering problems. The literature also contains examples of circuit creation based on an LLM: Blocklove et al. [2], for example, created a new 8-bit accumulator-based microprocessor architecture under real hardware constraints.

What Changed?

Some studies leverage AI to automate design tasks that rely heavily on human expertise and effort. Such evolution represents the workforce development that is needed to support the semiconductor design and EDA industries. In 2023, there was remarkable progress in ML. Moreover, the availability of much more computational power made it possible to learn ("train") on much more data and to use bigger and "deeper" models that are more capable of solving problems.

Main Obstacles

The main obstacle identified is the lack of collaboration between researchers and engineers from industry and academia. Sharing methods and results is often impossible because the software used is often a black box that protects intellectual property; the data also remain the property of the companies, which refuse to distribute them, even in restricted form, for competitiveness reasons.

Interoperability of models and tools

Sharing CAD tool code is currently impossible. Existing design tools come from various sources: some are open source, others are black boxes. The partners come from industry or are academic researchers in optimization or ML, and the tools they use come from different scientific cultures, complicating their communication. Even simply interfacing these tools with libraries dedicated to Machine Learning is often impossible. In addition, the models used and the results obtained are rarely disseminated by industry, for intellectual-property reasons.

Availability of Training Data

One of the current difficulties is the generation of training data. Indeed, deep neural networks require significant data for their training. Furthermore, this data must come from various designers to obtain good solutions in inference.

The technical difficulty of sharing these data is a direct consequence of the lack of interoperability of the models: the models used differ depending on the customers. Moreover, industrial actors are often reluctant to exchange numerical values for confidentiality reasons. The lack of data is thus an obstacle both for academic research, which needs access to real-life industrial data, and for the development of new methods in the industrial context, where knowledge of optimization methods and ML tools is often lacking.

Conclusion

As argued throughout this article, Artificial Intelligence could be pivotal in taming the complexity of the EDA flow: it can intelligently narrow the exploration space in addition to combinatorial optimization approaches, remains accessible to users without specialized tool expertise, and reduces turnaround time through effective iterations and parallel runs, ultimately leading to more informed decision-making.

As presented here, combining general algorithms from the combinatorial optimization field with a growing collection of new machine learning and AI optimization models, together with more computational power that makes it possible to train bigger and deeper models on more data, contributes on one side to shorter and more automated design cycles and, on the other, to building more customized chips more easily. This represents an important potential for hardware design.

However, many challenges arise. One of them is the environmental footprint of AI, which is not negligible. For example, training GPT-3 (which has 175 billion parameters) consumed 1,287 megawatt-hours of electricity and generated 552 tons of carbon dioxide [7].

Open questions and ongoing challenges

These decision-aid strategies rely on four essential ingredients:

  1. streamlined algorithms, to provide optimal or good solutions.

  2. the accessibility of data, particularly pertinent for AI-driven methodologies. Making data available would be a major step forward for the research community and would be gratefully acknowledged.

  3. interoperability and transparency between EDA software and the software used in ML, both currently lacking. This obstacle could be removed by making available open-source software whose performance is comparable to that used in the industrial environment.

  4. hardware architectures coupled with accelerators fast enough to train on data in reasonable time, thus facilitating the timely execution of models.


AUTHORS

Alix Munier Kordon is a full professor of Computer Science at Sorbonne Université and a member of LIP6, Paris, France.

Lilia Zaourar is an expert in the research and technology department at CEA (the French Atomic Energy Commission).

REFERENCES

[1]: Chowdary, D., & Sudhakar, M. S. (2023). Multi-objective Floorplanning optimization engaging dynamic programming for system on chip. Microelectronics Journal, 140, 105942.
[2]: Blocklove, J., Garg, S., Karri, R., & Pearce, H. (2023). Chip-Chat: Challenges and Opportunities in Conversational Hardware Design. arXiv preprint arXiv:2305.13243.
[3]: Frejinger, E., Lodi, A., Lombardi, M., & Yorke-Smith, N. (2023). Data-Driven Combinatorial Optimisation (Dagstuhl Seminar 22431). In Dagstuhl Reports (Vol. 12, No. 10). Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
[4]: Kalla, D., & Smith, N. (2023). Study and Analysis of Chat GPT and its Impact on Different Fields of Study. International Journal of Innovative Science and Research Technology, 8(3).
[5]: Varun Kumar, Leonard Gleyzer, Adar Kahana, Khemraj Shukla, and George Em Karniadakis. MyCrunchGPT: a LLM assisted framework for scientific machine learning. Journal of Machine Learning for Modelling and Computing, 4(4):41–72, 2023.
[6]: Yibo Lin, Shounak Dhar, Wuxi Li, Haoxing Ren, Brucek Khailany, and David Z Pan. DREAMPlace: Deep learning toolkit-enabled GPU acceleration for modern VLSI placement. In Proceedings of the 56th Annual Design Automation Conference 2019, pages 1–6, 2019.
[7]: Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., & Dean, J. (2021). Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350.
[8]: L. Zaourar & N. Ventroux, "Machine Learning for Design Space Exploration of CPS", Workshop on Machine Learning for CAD, DATE 2019, Florence, Italy.
[9]: Rapp, M., Amrouch, H., Lin, Y., Yu, B., Pan, D. Z., Wolf, M., & Henkel, J. (2021). MLCAD: A survey of research in machine learning for CAD keynote paper. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41(10), 3162–3181.


  1. Gordon Earle Moore (3 January 1929 – 24 March 2023) was an American businessman, engineer, and the co-founder and emeritus chairman of Intel Corporation. ↩︎

The HiPEAC project has received funding from the European Union's Horizon Europe research and innovation funding programme under grant agreement number 101069836. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.