
Artificial intelligence (AI), and especially generative AI, represents a new frontier for innovation and has become a central point of attention since the public release of ChatGPT in November 2022. It is a pivotal point of strategic competition globally. As Europe navigates the challenges and opportunities presented by AI, particularly large language models (LLMs), it is imperative to adopt a coherent and proactive strategy that ensures growth, leadership, and the ethical deployment of these technologies. HiPEAC outlines key recommendations for the EU's approach to AI, with a focus on developing AI models aligned with “European” values that can be executed on edge devices or by a federation of edge devices. The HiPEAC community should continue to develop ever more efficient AI accelerators and software, itself helped by AI-based tools for better productivity in software and hardware development.

The very fast evolution of artificial intelligence

by Marc Duranton, with the help of various AI-based tools, including ChatGPT from OpenAI[1]

It is hard to believe that just a few years ago, the reach and accessibility of artificial intelligence were much more limited. Today, AI based on neural networks has made it easier and cheaper to create various kinds of content, like text, pictures and even code. These neural networks come with features like writing, summarizing, text-to-speech, speech-to-text, picture and video creation and editing, transcription and translation, etc. These features are particularly beneficial for creators, enabling them to produce content quickly. AI technologies can convert scripts into speech, enhance text to engage viewers, and even co-create original content. Users can also create avatars from their photos to narrate texts, with the technology ensuring natural lip-syncing and expressive speech. Hundreds of such tools have emerged in less than a year[2]. Even the domains covered by the HiPEAC community can benefit from the increase in productivity: there are AI tools that help develop programs (from specification to debugging), and some can even help develop hardware. These two aspects will be covered in separate articles of this HiPEAC Vision.

The ease of use of AI-based tools also has drawbacks, as they can be used to create fake news, pictures or videos almost indistinguishable from real ones. They can also be used by hackers to develop new, easy-to-use ways to exploit vulnerabilities (or to find out how to make bombs, etc.). As most AI systems (using large language models, or LLMs) have protections against malevolent use, hackers try to find ways to bypass these protections, “jailbreaking” them with increasingly sophisticated prompt injections. As the nature of AI based on neural networks means that information is not explicit in an identifiable part of the model, it is very difficult to “erase” sensitive information. Of course, models can be trained with curated datasets free of “dangerous” data, but the size of the dataset, and the curation, which is mainly done by humans, make this virtually impossible or too expensive. Fine-tuning and other more explicit approaches to filter the input and output are being developed, but they cannot fully guarantee that an AI is “safe” and will not generate output that can be used by malevolent people.

Figure 1: structure of transformers, from [1]

These recent AI systems are based on the “transformer” model [1], whose goal is to “predict” the next “token” from a series of input “tokens”: they can generate plausible outputs, but not necessarily “real” ones. The outputs of such transformer-based LLMs (or other large models) are called “hallucinations” when they do not reflect reality. This is why it is very important to check the veracity of the results of these AI systems. This is easier (though not always straightforward) in some fields, such as code generation: the code can be compiled and executed to check that its syntax is correct and that the results match expectations. It can also be inspected manually, or with the help of other AI systems that can analyse it.
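As a concrete illustration of this checking step, the sketch below (the candidate snippet, the `validate` helper and its test cases are hypothetical examples, not taken from any particular tool) first compiles an AI-generated Python snippet to catch syntax errors, then executes it against expected results:

```python
# A minimal sketch of validating AI-generated code before trusting it.
# 'candidate' stands in for the output of a code-generating LLM.
candidate = """
def add(a, b):
    return a + b
"""

def validate(source: str, test_cases) -> bool:
    # Step 1: syntax check -- compile() raises SyntaxError on malformed code.
    try:
        compiled = compile(source, "<generated>", "exec")
    except SyntaxError:
        return False
    # Step 2: execute in an isolated namespace, then check expected results.
    namespace = {}
    try:
        exec(compiled, namespace)
        return all(namespace["add"](*args) == expected
                   for args, expected in test_cases)
    except Exception:
        return False

print(validate(candidate, [((1, 2), 3), ((0, 0), 0)]))  # True for correct code
```

In practice such generated code should of course be executed in a proper sandbox, since running untrusted output with `exec` is itself a security risk; the sketch only shows the principle of mechanical verification.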

AI technology has become one of the fastest-growing fields, particularly since the introduction of GPT, which first appeared in 2018 but did not gain significant momentum until November 2022, with the release of its chatbot version, ChatGPT. Offering almost human-like communication, it quickly became the fastest-growing user application in history. Since then, it has transformed the world, for better or worse. Its success can be explained by the fact that it was the first computing system able to do many things without explicit programming or the choice of a particular specialized application, making it easy to access for anybody who can read and write (mainly in English).

This era of generative AI has also had an impact on the perception of computing systems by people:

  • Enhanced Capability Perception: Generative AI showcases the ability of computers to not only process data but also to create new content, be it text, images, or even music. This represents a shift from the traditional view of computers as tools for calculation and data processing. This was a shock for some artists and other people in creative jobs who thought they would be shielded from the impact of computing systems.

  • Blurring the Lines Between Human and Machine: Generative AI challenges our understanding of creativity, traditionally seen as a uniquely human trait. By producing texts, pictures, and other “creative works,” these systems blur the line between human and machine-generated content, leading to new discussions about the nature of creativity and originality. The fact that these AI systems understand human language and write it correctly was also a significant part of the impact of generative AI, as language is likewise seen as a distinctively human trait. The advanced capabilities of generative AI have prompted philosophical and scientific discussions about what constitutes intelligence (and a redesign of IQ tests and other tests supposed to quantify human capabilities).

  • Results Are Not Always Correct: for the first time, people experienced outputs of computers (running generative AI programs) that do not reflect reality – the result of “hallucinations” – making them no more trustworthy than humans. Previously, computers were expected to always give “good” results (apart from bugs or programming errors, since they “execute what they are told” by humans).

  • Interactivity and Personalization: With AI systems capable of generating personalized content in response to user inputs, the perception of computing systems has evolved from static machines to interactive, responsive entities. This personalization makes technology more accessible and appealing to a broader range of users.

  • Raising Ethical and Societal Questions: The capabilities of generative AI have sparked conversations about ethics, privacy, authorship, and the potential for misuse. This has led to a perception of computing systems as more than mere tools. As generative AI takes on tasks that were traditionally performed by humans, there is a growing perception of computing systems as potential substitutes for, or supplements to, human labour. This impacts how people view their career paths and the skills they need to develop. People who may not have had the skills or resources to create certain types of content can now do so with the aid of these AI systems, changing the perception of computing from specialist tools to general-purpose enablers of creativity.

The HiPEAC Vision 2023 stated that “today’s ‘large’ models will be optimized and will be able to run on edge devices in the future due to algorithm improvements, optimization tools (pruning and quantization) and optimized hardware.” At that time, we expected that this would take several years, but it was already realized at the end of 2023, with models of performance similar[3] to ChatGPT 3.5 running on consumer-grade computers[4]; announcements at the end of 2023 from providers of smartphone chipsets indicate that those models will run on (high-end) smartphones in 2024. Thanks to new algorithms and structures of AI models, which achieve performance comparable to very large models (of hundreds of billions of parameters) with only tens of billions of parameters, and thanks also to improvements in AI accelerators, generative AI technology can clearly be run on embedded devices: today smartphones, soon smaller systems. We also see ideas of combining more specialized (or fine-tuned) neural network architectures to get better performance. This is exemplified by the Mixture of Experts[5] approach, but it might be extended to several small AI models working closely together (in federations or swarms of AI, similar to the infrastructure proposed in the next computing paradigm – NCP – in this HiPEAC Vision 2024). It is important to note that the smaller models that compete with the “larger” ones (such as GPT 3.5) are generally fine-tuned versions of “open parameters” models, i.e. foundation models whose parameters are disclosed in accessible and open repositories such as HuggingFace. In January 2023, nobody would have guessed that these open models would become so competitive and so numerous (HuggingFace offered more than 450,000 models[6] at the time of writing). This was mainly triggered by Meta, which released the LLaMA foundation models to researchers in February 2023, and by the pioneering work of Stanford researchers who showed that these foundation models can be fine-tuned at a rather low cost [2].
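To give a flavour of one of the optimization techniques mentioned above, the sketch below (plain Python, with illustrative weight values chosen purely for the example) shows symmetric 8-bit quantization, which maps floating-point weights to small integers sharing one scale factor and thus roughly divides the memory footprint by four compared to 32-bit floats:

```python
# A minimal sketch of symmetric int8 weight quantization (illustrative values).

def quantize_int8(weights):
    """Map floats to integers in [-127, 127] sharing one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.02, -1.5, 0.7, 1.2]   # hypothetical model weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight is within half a quantization step of the original.
max_error = max(abs(w - a) for w, a in zip(weights, approx))
print(q, round(max_error, 4))
```

Production schemes (per-channel scales, 4-bit formats, quantization-aware fine-tuning) are far more elaborate, but this illustrates why quantized LLMs fit into the memory and compute budgets of consumer devices with limited accuracy loss.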

Figure 2: image created by Dall-E 3

Key insights

  • On November 30, 2022, OpenAI released ChatGPT. It is uncommon for a single product to create as much impact on the tech industry as ChatGPT has in just one year.

  • Together with other techniques to generate text or images (Midjourney, Dall-E), generative AI created a buzz in 2023, generating high expectations and raising many ethical and societal questions.

  • These tools are amplifiers of productivity, and can be used in the HiPEAC fields, such as software development and even for helping to create hardware.

  • Open-parameter foundation models, often available on HuggingFace, triggered a wave of research and experimentation by various groups, leading to smaller models with performance competitive with closed-source models such as GPT 3.5.

  • These smaller models could already be executed in 2023 on consumer-grade devices (PCs) with good performance, thanks to open-source developments[7]. LLMs will reach smartphones in 2024, driven by advancements in SoC processors like the Qualcomm Snapdragon 8 Gen 3 [3] and MediaTek Dimensity 9300 [4].

Key recommendations

Invest in AI core technologies and development across application domains to support growth for Europe

Europe must invest in research and infrastructure to support multimodal (by integrating text, image, and sound) AI development, ensuring its applicability across sectors such as healthcare, education, and public services, thereby fostering an environment of innovation and practical AI utility.

Develop and provide access to foundational models that support "European" values

Europe should lead the development of foundational AI models. This involves creating and sharing methodologies and datasets for fine-tuning these models to suit specific regional needs. By doing so, Europe can secure its sovereignty in AI technologies and promote a digital economy that reflects its standards and ethics.

Promote open-source models

Europe should encourage the growth of open-source (open-parameter) models, which serve as the backbone of a collaborative AI ecosystem, ensuring open access to AI resources and facilitating a culture of shared progress.

Develop local solutions and specialized accelerators for the integration of large models in smart devices

Europe should lead developments to make AI based on large models suitable for integration into smart devices at the edge. This empowers real-time AI applications on devices, reduces dependency on centralized data centres, and enhances privacy and efficiency. Europe's strategy should include support both for start-ups and for established companies in developing edge AI capabilities, fostering a decentralized and resilient AI infrastructure.

Use AI to aid software and hardware development

Europe should improve the productivity of current engineers and researchers by upskilling them with AI, leveraging these technologies to address complex computational problems and accelerate the development cycle. The concept of "centaurs", or partnerships between AI and developers, is a promising approach to enhance productivity and code quality. Europe should invest in training, tools and platforms that facilitate this symbiosis, enabling developers to harness AI for more efficient and creative development processes.

Continue to develop policies around accessibility and societal impact

The societal impact of AI cannot be overstated, and its accessibility and ethics are cornerstones of digital inclusivity. AI technologies should be available to all European citizens, ensuring that the benefits of digital transformation are equitably distributed.

Ensure "correctness by construction" in AI models

Making sure that AI models generate sound and validated answers (and do not "hallucinate") is key to ensuring that AI can be effectively used to carry out the previous recommendation on accessibility and societal impact. Europe should lead the way in developing methods that automatically verify the correctness of AI-generated outputs, thus reducing the need for extensive human oversight and increasing the trustworthiness of AI systems.

The rationale for all these recommendations will be detailed in a set of articles. The first will describe the rapid evolution of generative AI in 2023. The second will focus on AI-assisted software engineering. The third will focus more on the hardware side, with the use of AI to help EDA (electronic design automation). The fourth will deal with the position of Europe in this field and with the ongoing development of regulations about AI, including the European AI Act.


The strategic approach to AI outlined in these recommendations presents a roadmap for Europe to navigate the AI landscape. By embracing these initiatives – for example, developing “made in Europe” foundational models, promoting open science, or developing a complete ecosystem of models, hardware accelerators and applications of AI running at the edge – Europe can foster an AI ecosystem that is not only competitive but also reflective of the EU’s commitment to open innovation, ethical standards, and societal wellbeing. By upskilling current engineers and researchers, Europe can amplify their productivity with AI, leveraging these technologies to accelerate the development cycle and improve the position of Europe in the international economic landscape.

To achieve this, and thereby to realize the full potential of AI as a force for good in European society, it will be necessary to coordinate action across all levels of government, industry, and academia.



Marc Duranton is a researcher in the research and technology department at CEA (the French Atomic Energy Commission) and the coordinator of the HiPEAC Vision 2024.


[1]: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, "Attention Is All You Need". [Online]. [Accessed 29 November 2023].
[2]: Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, Tatsunori B. Hashimoto, "Alpaca: A Strong, Replicable Instruction-Following Model", Stanford University. [Online]. [Accessed 29 November 2023].
[3]: "Snapdragon 8 Gen 3 Mobile Platform". [Online]. [Accessed 14 December 2023].
[4]: "MediaTek Dimensity 9300 deep dive: A true Snapdragon rival?", 6 November 2023. [Online]. [Accessed 14 December 2023].

  1. Slightly helped, prompted and organized by Marc Duranton. ↩︎

  2. For example, the web site lists over 2,400 such tools in various domains ↩︎

  3. For example Solar 10.7B – see ↩︎

  4. LLMs such as Solar 10.7B, Llama-2 13B, Mistral 7B or their fine-tuned versions can run on a Mac mini with a power consumption of 20 W while executing the model. ↩︎

  5. Like Mistral AI’s Mixtral 8x7B – see ↩︎

  6. See for the actual numbers ↩︎

  7. Especially thanks to the work of Georgi Gerganov, who developed the initial llama.cpp. It implements Meta’s LLaMA architecture in efficient C/C++, and it is one of the most dynamic open-source communities around LLM inference, with more than 390 contributors – it is available at ↩︎

The HiPEAC project has received funding from the European Union's Horizon Europe research and innovation funding programme under grant agreement number 101069836. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.