State of artificial intelligence in Europe

by Jack Vernon

Positioning Europe in the global race for artificial intelligence matters. This article offers some perspective on the topic, drawing on surveys and analyses conducted largely in 2023. It is by no means an exhaustive view of Europe's position in the field of artificial intelligence, but it offers significant detail and insight into each of the topics covered. The data used in this report comes from three IDC surveys [1], [2], [3] and two IDC forecasts [4], [5].

Market Developments and Dynamics

European AI Adoption

The adoption of AI technologies is already making good progress across Europe, with many organisations reporting the introduction of various forms of AI. IDC considers AI to fall into four categories, which are as follows:

  • Intelligent Process Automation (IPA): A class of software designed to automate or augment manual, repetitive tasks.

  • Predictive AI: Analysis of large data sets to identify long-term patterns in behaviour and detect changes (e.g., digital twins and threat detection).

  • Descriptive AI: Analysis of images or event data streams so people and things can detect, analyse, and act (e.g., machine vision).

  • Generative AI: Creation of new content/code from previously created content/code (e.g., ChatGPT and developer co-pilots).

As seen in Figure 1, Generative AI leads in terms of the number of organisations saying they already use the technology, followed closely by IPA. Descriptive and Predictive AI have similar volumes of respondents stating they are already using them. Given the ease of use and low cost of general-purpose AI-driven conversational systems like ChatGPT and Bard, it is perhaps not surprising to see Generative AI leading in terms of the AI technology most organisations already use. Staff within an organisation can use generative AI assistants free of charge, and so in many cases, businesses might use generative AI informally, yet significantly.

In contrast, deploying forms of Descriptive or Predictive AI requires a degree of configuration specific to a business use case. This requires expertise and, unlike a generative AI assistant, can involve using technology that isn't free. So, although other forms of AI have been available to businesses for a longer period, respondents to the survey report lower levels of adoption.

The drive to expand the use of AI technologies is strong across all categories. Descriptive and Predictive AI are set to grow faster than both IPA and Generative AI, although they are helped by starting from a lower base of existing adoption. In every category, the group of respondents planning to introduce the technology within the next two years is at least 150% larger than the group already using it (i.e., at least 2.5 times its size).

Adoption patterns also differ notably across countries and subregions. Nordic respondents lead in current Generative AI usage but have more limited plans for the future introduction of Intelligent Process Automation. Spain shows consistent interest across all AI technologies, with high current usage and robust future adoption plans. Poland, despite lower current usage in several categories, shows strong ambition for the future, notably in Intelligent Process Automation and Generative AI. Czechia stands out with high current usage of Predictive AI but appears more reserved in its future adoption plans than other nations. Germany and France feature moderate adoption without drastic highs or lows, although France's current usage of Intelligent Process Automation is notably higher. The UK is set to increase its focus on Predictive and Descriptive AI in the coming years. Germany, while having lower current usage of Intelligent Process Automation, has plans suggesting a rise in the upcoming years.

When asked about the transformative impact of emerging technologies, respondents rated Generative AI as the most impactful by a significant margin, followed closely by the other AI-related technologies (see Figure 2). The combined potential impact of AI technologies overshadows many other categories, showcasing the growing emphasis on data-driven decision-making in the modern business landscape. Technologies like Blockchain, Quantum technologies, and Web3 received relatively less attention, with scores in the 9-11% range, possibly due to their niche applications or a lesser understanding of their potential impact. Virtual Reality (VR) and Robotics are rated comparably to AI and IPA. Descriptive AI ranks lowest of the AI categories, potentially due to its technological limitations and the challenges of deploying it in live environments without substantial computational resources.

Influencing Factors

Considering the factors most likely to influence the adoption of emerging technologies within organisations, respondents highlighted a combination of driving and inhibiting factors. Top inhibitors included cybersecurity, economic stability, and the skills shortage. Key drivers included digital innovation, sustainability requirements, and the development of new digital business models. Cybersecurity alone garnered agreement from over 30% of respondents. Six influencing factors received agreement from between 20% and 30% of respondents, and a further six from between 15% and 20%. Responses in the lower tier were relatively tightly clustered, with no single factor standing out. The question addressed emerging technology in general, so the specific influencing factors may vary slightly for AI.

AI Spending

The latest Worldwide Artificial Intelligence Spending Guide (V2 2023) published by IDC shows that artificial intelligence (AI) spending in Europe will reach $34.2 billion in 2023, representing 20.6% of the worldwide AI market (see Figure 4). AI spending in Europe will post a 29.6% compound annual growth rate (CAGR) between 2022 and 2027, slightly higher than the worldwide CAGR of 26.9% for the same period, with spending expected to exceed $96.1 billion in 2027.
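
A rough consistency check, compounding the 2023 base over the four years to 2027:

    $34.2 billion × 1.296^4 ≈ $34.2 billion × 2.82 ≈ $96 billion,

which is in line with the $96.1 billion forecast.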

European AI Strategy - Investment Intentions

In a survey on the use and adoption of Generative AI [1], European organisations indicated their investment expectations for different AI technologies. A smaller proportion of investment is anticipated for Generative AI than for the other categories, although this still represents a significant increase on the previous year's much lower figure. Descriptive AI is expected to receive the largest share of investment. Predictive AI will also see significant investment, accounting for over a third of the anticipated AI investment allocations.

The data provides an interesting contrast with the indications of adoption gathered from the emerging technology survey in the previous section. Although respondents to the emerging tech survey suggested Generative AI adoption would be similar to or greater than that of other AI technologies, spending intentions differ: companies in the Generative AI Arc survey anticipate allocating significantly larger amounts to more mature AI technologies like Predictive and Descriptive AI.

Top Use Cases

The emerging technologies survey [1] also included a question covering the popularity of AI use cases in terms of adoption, and planned adoption in the next 24 months. Considering the top 10 use cases, conversational AI platforms for automated customer service are the most adopted AI technology among the listed use cases. There is a strong emphasis on automation across various domains, such as IT operations, sales processes, and knowledge work. AI use cases that have predictive capabilities, such as asset operations and threat intelligence, indicate a trend towards investing in the anticipation of challenges and opportunities.

Measured Benefits

The measured benefits of AI reflect the business outcomes of popular use cases. The data indicates that organisations are reaping substantial benefits from implementing AI systems, most notably improvements in process quality and efficiency. A significant majority report better time management, increased customer satisfaction, and revenue growth. Furthermore, AI's impact on cost reduction is as significant as its role in reducing environmental footprints, underscoring its influence on both operational efficiency and sustainability. These advancements have also spurred the launch of new products and services, affirming AI's role as a driver of innovation. A minimal number of respondents report not yet seeing benefits, indicating a broadly positive impact across sectors.

Mature Enterprises

Organisations' strategic maturity on AI is distinguished not only by whether they report having introduced the technology, but also by how their AI operations are organised. A crucial indicator of maturity is how AI work is distributed within a business. If most AI activity is centralised in a single team within the IT department, the business is typically less mature than one with AI practitioners and data scientists distributed across distinct business teams and departments. Centralised AI teams often face resistance from business leaders and may have resource constraints. They may need to bid for work internally, which often leads only to proof-of-concept (PoC) projects instead of live deployments. Leaders of business teams should be empowered to hire data science and AI practitioners directly, allowing those practitioners to interact with key decision-makers and stay close to business problems.

European organisations need internal policies governing the development and deployment of AI technology. AI presents business and ethical risks that are increasingly shifting from being reputationally damaging to being legally damaging. Without an informed internal policy on AI, executive team members responsible for managing that policy, and a working group reviewing policy improvements, an organisation may struggle to keep pace with AI's regulatory and ethical developments. Technology systems can support companies in rationalising AI development and implementation, ensuring adherence to best practice. Systems for AI explainability, placing humans in the loop, organising data science activity, and MLOps can all support organisations in meeting AI ethics standards.
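
As a purely illustrative sketch of one such mechanism, the Python fragment below shows a minimal "human in the loop" gate: automated action is taken only above a confidence threshold, and everything else is escalated to a person. The function names and threshold are hypothetical, not drawn from any particular product.

    # Illustrative only: a minimal "human in the loop" gate.
    def request_human_review(prediction: str) -> str:
        # Stub: a real system would enqueue the case for an analyst.
        print(f"Escalating for human review: {prediction!r}")
        return "pending-review"

    def decide(prediction: str, confidence: float, threshold: float = 0.9) -> str:
        """Act automatically only when the model is confident enough."""
        if confidence >= threshold:
            return prediction                    # automated path
        return request_human_review(prediction)  # escalate to a person

    print(decide("flag transaction", confidence=0.55))  # routed to a human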

When the European Emerging Technologies survey was conducted in 2021, those respondents already using AI were asked about their strategies on AI ethics issues. Some organisations had executive oversight in place and potentially a governance framework for developing AI. Very few respondents undertook more involved steps such as conformity assessments or ethical auditing of AI systems.

Conclusion

The landscape of AI adoption and trends in Europe reflects a dynamic and evolving environment, marked by a strong drive to integrate various forms of AI across multiple sectors. European organisations are actively embracing AI technologies, with a notable lean towards Generative AI, as evidenced by widespread use and interest, although, as the following section shows, Generative AI is not as popular in Europe as in other global regions. Generative AI is followed closely by Intelligent Process Automation (IPA), Predictive AI, and Descriptive AI, each with its own distinct applications and growth trajectory.

Investment intentions in Europe show a cautious but strategic approach, favouring more mature AI technologies like Predictive and Descriptive AI over Generative AI. This trend likely stems from a combination of factors, including data privacy concerns and the anticipation of future regulations. Despite this, there's a clear recognition of AI's transformative impact, particularly in Generative AI, which is viewed as the most impactful emerging technology.

Differences in adoption rates and focus areas are evident across various European countries and subregions. The measured benefits of AI in Europe include enhanced process quality and efficiency, improved time management, increased customer satisfaction, and revenue growth. Notably, AI's role in cost reduction is as significant as its contributions to reducing environmental footprints, highlighting its dual impact on operational efficiency and sustainability.

Strategic maturity in AI varies across European organisations, with more mature entities distributing AI roles across different business teams rather than centralising them within IT departments. This maturity is not just about the adoption of technology but also involves internal policies and ethical considerations. European organisations are increasingly focusing on AI governance, ethical standards, and compliance with regulatory developments.

Overall, Europe's approach to AI adoption and its trends demonstrate a balanced blend of enthusiasm and caution, with a strong emphasis on ethical considerations, regulatory compliance, and strategic investment in various AI technologies.

The European position in AI, a comparison between regions

The following section compares the position of the European AI market with that of other global regions. European businesses are following similar trends to those displayed in other regions; however, Europe is more cautious in its AI investments and in its approach to generative AI.

Revenue Comparison

As shown in Figure 9, according to IDC's AI Tracker, EMEA is predicted to be the fastest-growing global region in terms of CAGR during the 2022–2027 period, slightly surpassing the APJ and Americas regions. In terms of EMEA's share of the overall AI market, it will lag behind the Americas but is expected to gain some ground on that region throughout the period. Europe, as the largest revenue contributor in the EMEA region, exhibits growth and market share trends similar to those of EMEA as a whole.

Investment Intentions

When asked about their investment intentions for AI in the Generative AI Arc survey, European respondents expected to invest more in Predictive AI than their counterparts in North America and Asia Pacific, but less in Generative AI. Europe's investment in Descriptive AI is comparable to that of North America but lower than that of META (Middle East, Turkey, and Africa). European respondents' more restrained investment expectations for Generative AI indicate a more cautious approach to the category than in other regions, possibly influenced by data privacy concerns and the anticipation of future regulation.

Business Objectives

Figure 12 comes from a global 2022 survey of AI users in which respondents were asked which business objectives are the highest priority for AI projects. The objectives that resonated most with EMEA respondents were aligned with those of respondents from Asia Pacific (AP). EMEA respondents were less enthusiastic than those in other regions about objectives beyond improving operational efficiency. In contrast with EMEA and AP, North American respondents ranked improving employee productivity as the second-highest priority, a factor EMEA businesses may be more cautious about openly communicating or prioritising in AI projects.

AI project Failures

Among companies that experienced AI failures, EMEA respondents reported higher rates of project failure than other regions. Globally, companies across all regions cited 'AI technology not performing as expected or as promised' as the primary reason for project failure. EMEA respondents also identified 'lack of skilled personnel and staff' as the second main reason, whereas this reason did not rank in the top three in other regions. The survey results suggest that EMEA may be facing a more acute skills shortage than other global regions, potentially leading to higher rates of AI project failure.

Ecosystem

Europe’s position on AI is similar to its stance on other digital technologies. It consistently provides industry talent, thanks to numerous top academic institutions, and benefits from several high-income countries, often powered by mature financial services and industrial sectors. However, Europe has had limited success in retaining local ownership of its most promising AI companies. Google’s acquisition of DeepMind in 2014 set a precedent, and since then, Europe's most promising AI software businesses often ended up under the ownership of US-based hyperscalers.

There are exceptions. The AI platform vendor Dataiku continues to grow an impressive business portfolio, with commitments from the founding team to remain independent. Advanced AI consultancy InstaDeep also bucked the trend by signing an acquisition deal with German pharmaceutical company BioNTech.

Recent developments in the Generative AI space have created a new market category where European businesses can compete. Several European startups have launched in this space, from generative language model providers like Aleph Alpha and Mistral AI to image generation enterprises such as Stability AI. It remains to be seen whether European vendors will take a commanding lead in generative AI technologies, although specialization in certain localized languages could help them become prominent players in specific European markets.

Generative AI Preferences

Europe appears to be taking a somewhat different approach to generative AI technology compared with other regions. This section will explore data points from [1], highlighting how European respondents compare with those from other regions. At the same time, the survey suggests that Europe is neither significantly behind other global regions nor diverging greatly from them.

When asked about their organisation's 'current state of evaluation or use of Generative AI' technology, a larger proportion of European respondents than of those in other regions indicated that their company was currently not engaged with the technology. Europe was also 5% below the global average in terms of respondents who agreed their company was working on a generative AI PoC project. Although the share of respondents who agreed their company had already implemented a Generative AI investment plan was in line with the global average, overall the survey results indicate that European respondents are less advanced in developing Generative AI investment plans than those in other global regions.

Considering Figure 15 and the question "What types of Generative AI models are organisations using or testing?", 81.9% of European respondents are using private versions of models for experimentation. Companies in Europe exhibit a clear inclination to safeguard their intellectual property. As a result, Europe is more likely than other regions to pass over third-party generative AI applications and public versions of generative AI models.

European organisations show a preference for experimenting with GenAI models that are pre-trained on public datasets. Websites such as HuggingFace and PapersWithCode have provided the data science community with well-labelled public datasets that can be conveniently incorporated into a training pipeline. Responses to this question sum to 135%, indicating that many respondents are experimenting with multiple generative AI approaches.
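
As an illustration of how conveniently such public datasets slot into a pipeline, the sketch below assumes the Hugging Face "datasets" Python library; the IMDB dataset is an arbitrary public example, not one cited in the survey.

    # Minimal sketch: loading a well-labelled public dataset from the
    # Hugging Face Hub. "imdb" is an arbitrary example dataset.
    from datasets import load_dataset

    dataset = load_dataset("imdb")  # downloads and caches the dataset
    print(dataset)                  # ready-made train/test splits
    print(dataset["train"][0])      # one labelled example: text plus label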

Conclusion

The European AI market exhibits unique characteristics and trends that differentiate it from other global regions. While the EMEA region is projected to experience the fastest growth in terms of compound annual growth rate (CAGR) during 2022–2027, it still lags behind the Americas in overall AI market share. Investment patterns in Europe reflect a cautious approach, particularly in generative AI, likely influenced by data privacy concerns and anticipated future regulations. This contrasts with more aggressive investments in predictive and descriptive AI technologies.

In terms of business objectives, European organisations prioritise operational efficiency but show less enthusiasm for goals like improving employee productivity, a priority in North America. This reflects a more reserved approach to AI's role in workforce management. Additionally, the EMEA region faces a pronounced skills shortage, which has led to higher rates of AI project failures compared to other regions, indicating a critical area for improvement.

The European AI ecosystem, while rich in talent and resources, struggles to retain local ownership of its most promising AI companies, often losing them to US-based hyperscalers. However, there are notable exceptions, and recent developments in generative AI have opened new opportunities for European businesses. European organisations are more inclined to use private AI models for experimentation, safeguarding their intellectual property, and they show a preference for GenAI models pre-trained on public datasets. This indicates a trend towards a more controlled and proprietary approach to AI development in Europe.

In summary, while Europe aligns with global trends in some respects, its cautious investment strategy, focus on operational efficiency, challenges in talent retention, and distinct approach to generative AI set it apart from other regions. These factors underscore Europe's unique position in the global AI landscape.

AI Regulation in the EU and other world regions

In recent years, the field of AI has grown rapidly, shaping industries and affecting the ways we live and work. IDC expects the European AI market to reach $72 billion by 2026 (IDC Worldwide Artificial Intelligence Spending Guide - Forecast 2023). As AI technologies and systems become progressively more integrated into people's lives, concerns regarding their ethical implications, potential risks, and misuse have begun to emerge.

Key Issues Around AI

AI may provide lower-cost solutions and enhance productivity, but, owing partly to its "black box" nature, it has garnered much attention for its ethical implications:

  • Privacy and Consent: Concerns over whether appropriate consent has been obtained for the data used in training datasets.

  • Biases and Toxicity: AI may produce biases such as racial or gender-based discrimination due to incomplete datasets or reproduction of human biases within the data.

  • Harmful Content: AI may provide explicit content, propaganda, or misinformation.

  • Security: AI may not be safeguarded against manipulation by third parties, may be susceptible to “Shadow AI,” or may be more vulnerable to data breaches.

  • Accountability: AI may not have a “human in the loop” to sufficiently monitor, test, and update the system.

  • Lack of Explainability: An AI model may be a "black box," meaning even its developers do not understand how certain outputs were produced, which raises security concerns.

The Broader AI Risk Landscape

On a larger scale, these questions around ethics, accountability, and transparency will have vast implications for society:

  • Amplification of bias or discrimination

  • Misinformation and disinformation

  • Erosion of privacy

  • Misuse for political or geopolitical aims

IDC predicts that by 2028, taking their cue from the EU's AI policy, 60% of national governments worldwide will adopt a risk management approach in framing their AI and GenAI policies (IDC FutureScape 2024: Worldwide National Government).

European Union: Overview of AI Regulatory Landscape – The EU AI Act

The European Union embarked in 2019 on an ambitious journey to regulate the deployment and development of AI systems, culminating in the EU AI Act and highlighting the need for a thorough regulatory framework.

In April 2021, the European Commission submitted a detailed proposal. The Council subsequently adopted a "general approach" on a set of harmonised rules on artificial intelligence. The latest developments in the technology, particularly generative AI, caused some delays in the final discussion of the legislation as new amendments were deemed necessary, but on May 11, 2023, the European Parliament's committees approved the proposed amendments in what was considered the first milestone vote on the EU AI Act. The plenary vote on June 14, 2023, approved by a large majority of the European Parliament, signalled the beginning of the final phase of the legislative process, the "trilogue". Here, high-level negotiations between the EU Parliament, Council, and Commission are expected to last until the end of 2023 (this article was written in mid-November 2023). If negotiations are successful, the EU AI Act will come into effect in June 2024, with a two-year transition period for AI developers and providers to adjust. Ultimately, it will be the Member States that enforce the regulation.

The AI Act

The AI Act is a comprehensive regulatory framework designed to maintain a balance between innovation and the ethical use of AI technologies. It aims to safeguard EU fundamental rights and values while fostering advancements in AI research and application.

The legislation emphasizes the improvement of data quality, promoting transparency in AI operations, and enforcing human oversight to ensure accountability and ethical use. It also addresses liability concerns, particularly in critical sectors such as healthcare, finance, education, and energy, emphasizing the need for responsible AI integration in these domains.

Scope and targets: The AI Act targets providers placing AI systems on the EU market, irrespective of whether those providers are based in the region or in a third country; users of AI systems physically established within the Union; providers and users of AI systems based in a third country whose systems produce outputs used in the Union; importers and distributors of AI systems; manufacturers placing AI systems on the market with their product and under their trademark; and authorised representatives of providers based in the EU.

Risk-based approach: The regulation identifies four risk categories for AI applications and applies different restrictions and obligations to system providers and users, depending on the category of the application in question (summarised in the sketch after this list):

  • Unacceptable risk. This category targets applications that involve subliminal practices, exploitative activity, or social scoring systems by public authorities. It covers cognitive behavioural manipulation of people or specific groups; classification of people based on behaviour, socioeconomic status, or personal features; and real-time remote biometric identification, such as facial recognition. Such applications will be banned. This risk category allows for some exceptions, creating something of a grey area: for example, "post" remote biometric identification is permitted with court approval if it occurs with a significant delay and is used to prosecute serious crimes.

  • High risk. Applications related to education, healthcare, and employment (such as CV scanning and the ranking of job applicants) will be subject to specific legal requirements (e.g., to ensure the transparency and safety of the systems and to comply with the Commission's mandatory conformity requirements). The category covers AI systems used in products falling under the EU's product safety legislation (toys, aviation, cars, medical devices, and lifts). It also identifies eight specific categories of AI systems to be registered in an EU database. Providers of high-risk systems are obliged to establish quality management systems, keep technical documentation up to date, undergo conformity assessments (and re-assessments), conduct post-market monitoring, and collaborate with surveillance authorities.

  • Limited risk. AI systems such as chatbots will be subject to minimal transparency obligations, for example disclosing that interactions are performed by a machine, so that users can make informed decisions. This category covers AI systems that generate or manipulate image, audio, or video content, such as deepfakes.

  • Minimal risk. Applications that are neither listed as risky nor explicitly banned are left largely unregulated (e.g., AI-enabled video games). Currently, this category covers most AI systems used in the EU.
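
Purely as a reading aid, the sketch below restates the four tiers as a simple lookup table; the examples and obligations are condensed from the descriptions above, and this is in no way a legal classification tool.

    # Illustrative only: the AI Act's four risk tiers as a lookup table,
    # condensed from the summary above.
    EU_AI_ACT_RISK_TIERS = {
        "unacceptable": ("social scoring, real-time remote biometric ID",
                         "banned, with narrow court-approved exceptions"),
        "high":         ("CV scanning, medical devices, education",
                         "conformity assessments, EU database registration"),
        "limited":      ("chatbots, deepfakes",
                         "transparency: disclose machine interaction or AI-generated content"),
        "minimal":      ("AI-enabled video games",
                         "largely unregulated"),
    }

    examples, obligations = EU_AI_ACT_RISK_TIERS["high"]
    print(obligations)  # -> conformity assessments, EU database registration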

GenAI Focus: The surge of Generative AI systems prompted specific amendments to the AI Act. The focus here is on transparency requirements: generative AI systems must disclose that content was generated by AI; systems and models must be designed to prevent the generation of illegal content; and providers must publish summaries of the copyrighted data used for training.

Trajectory of Regulation:

  • The Member States and the Commission will establish a database of high-risk AI systems. The database will hold information on the AI systems being used or sold in European markets.

  • AI providers will be tasked with submitting any high-risk AI system to the European database. Submissions will include information covering training data, traceability, transparency, accuracy, the potential risks presented by the system, details of the intended application, and bias assessments.

  • Member States will be tasked with designating national authorities or existing ministries to enforce the regulation and perform market surveillance.

  • Fines for non-compliance can reach €30m or 6% of global turnover, whichever figure is higher; a worked example follows this list.
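
In other words, the penalty clause is a maximum of two terms:

    fine = max(€30m, 0.06 × annual global turnover)

so a hypothetical company with €1 billion in global turnover would face a maximum fine of max(€30m, €60m) = €60m.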

Technology and Business Impact:

  • The AI Act will support AI innovation and adoption by reducing legal uncertainty, creating a level playing field for businesses, and improving the quality and safety of AI systems.

  • Certain use cases or industries will be more impacted by the AI Act than others, depending on the risk level and the potential benefits of the AI application.

  • The use of biometric identification technologies will fall into the 'unacceptable' or 'high risk' categories and will therefore be subject to stringent limitations. This will limit the use of these technologies for some applications, such as security or fraud prevention, but will also provide greater protection for individual privacy and rights.

  • However, the majority of current AI-related use cases will not fall under the high-risk categories. Consequently, any potential decrease in value for high-risk use cases would not lead to a dramatic drop in the overall market size.

  • Larger companies have fewer concerns as they possess the financial resources to cover necessary audits and compliance costs. They will continue investing in building relationships with regulators, which could lead to long-term changes in the technology landscape.

  • VCs and AI startups express apprehension that the new regulation will limit their ability to innovate and compete. The challenges they anticipate primarily involve technical complexities, compliance costs, and additional obligations associated with high-risk AI systems.

United States: Overview of AI Regulatory Landscape

Comprehensive US legislation governing the use of AI is currently pending discussion. A number of voluntary frameworks have emerged in the meantime, and agency-specific and state-specific legislation also exists.

  • Executive Order on Safe, Secure, and Trustworthy AI (2023, published): Executive order announced in October 2023. The order requires developers of the most powerful AI systems to share safety test results and other critical information with the US government; requires the National Institute of Standards and Technology (NIST) to develop standards, tools, and tests to ensure the safety of AI systems; calls on Congress to enact bipartisan privacy protection legislation, as well as AI-fraud protection; and more.

  • Bipartisan Framework on AI Legislation (2023, draft): Blueprint for a future AI Act proposing: (1) a licensing regime for sophisticated or high-risk AI, (2) an independent oversight body, (3) legal protections to tech firms for third party content, (4) transparency obligations, and (5) other protections (e.g. consumer, child, national safety).

  • Digital Platform Commission Act (2023, draft): Draft bill proposing a new Federal body to oversee and regulate digital platforms, which are understood to include social media platforms and their associated algorithms and databases.

  • Ensuring Safe, Secure and Trustworthy AI (2023, published): A list of 8 voluntary commitments that companies will make to promote safe, secure, and transparent development and use of AI. Recently, Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, and Stability AI have made this voluntary commitment.

  • Blueprint for AI Bill of Rights (2022, published): A voluntary blueprint identifying 5 principles for “automated system” design, use, and deployment: (1) safe and effective systems, (2) algorithmic discrimination protections, (3) data privacy, (4) notice and explanation, and (5) human alternatives, consideration, and fallback.

Current Key Characteristics of the US’s AI Regulation Regime

  • Voluntary Frameworks vs. Full Regulation: Voluntary frameworks help pave the way for standards of conduct until legislation is fully passed, which is a lengthy process in the US. This obviously depends on companies upholding these frameworks; given customer expectations, however, doing so will be crucial and can offer competitive advantages to companies that are more transparent about their AI strategies.

  • State-level Solutions vs. National Law: Legislation already exists, especially at the state level, though this report does not examine it in depth. For example, Connecticut has an Artificial Intelligence Law that establishes a task force to research and make recommendations on AI. This has led to a patchwork landscape, as some states implement AI regulation more strictly than others. State-level regulations also tend to focus on very specific use cases, such as HR programs (e.g., New York and Illinois laws).

  • Innovation vs. Regulation: The US regulatory landscape is overall less developed than those of the EU, China, and other select countries. It is patchwork, and drafts remain suspended in debate (and often embroiled in political strife). Yet the US is home to some of the fastest-paced innovation in the AI field, from Microsoft and Google to AI startups. Walking the fine line between innovation and regulation is critical for the US, especially given how geopolitically conscious regulators are.

United Kingdom: Overview of Regulatory Landscape

Though there is no defined regulation yet, the UK aims to establish a flexible regime that would regulate the sectoral applications of AI rather than the underlying software and systems.

The UK government first published a White Paper in March 2023, updated in August, detailing the country's plan for guidelines on regulating AI. This is still very much a work in progress, and no draft legislation has been published yet.

  • AI applications and development are currently regulated by an existing set of rules and laws. This current regulatory framework raises concerns across industries over the lack of a unified set of rules.

  • The aim of the new regulation is to lead the international landscape on AI governance with a pragmatic and proportional regulatory approach, providing a clear, pro-innovation regulatory environment for foundational AI companies and systems. The regulatory framework will be based on five principles: Safety, security and robustness; Appropriate transparency and explainability; Fairness; Accountability and governance; Contestability and redress.

  • The scope of the regulation will focus on establishing a regulatory framework based on context, outcomes and the use of AI, rather than the technology itself.

  • Legislation will not be put into force immediately, as doing so is believed to hinder innovation in the sector. The regulation will first be issued on a non-statutory basis.

UK AI Safety Summit

The UK hosted the first global AI Safety Summit in November 2023, attended by both global leaders and AI companies; the main topic was how international co-operation in regulating AI systems can help mitigate risk.

  • The summit also has geopolitical implications: the UK can use its position outside the EU to promote its more flexible, less strict approach to AI, particularly given that several EU Member States attended the summit.

  • UK government officials are also currently negotiating with large AI system developers such as DeepMind for permission to examine their LLMs, which would be unprecedented, as they seek a deeper understanding of the technology.

Trajectory of AI Regulation

Current regulation of AI applications in the UK is viewed as too fragmented. The government is structuring a flexible regulatory framework that regulates AI use and development by use case and industry, rather than the underlying software.

Current Status:

  • Current rules on AI use and development are scattered and criticised by industry as hindering innovation.

  • The UK is developing a new, use-case-based regulatory framework, which will not be enforced straight away.

  • The British government is trying to establish the country's leading position as a flexible environment for AI developers and users.

Future Status:

  • Official regulation is still not on the table. The draft White Paper provides an overview of the approach, which industry views positively.

  • The UK is currently not looking to ban the use of biometric recognition technologies, in contrast to the EU, although privacy concerns are rising.

  • The November summit will serve as a starting point for the UK to draft an official regulatory framework, which, however, will not be legally binding at first.

China: Overview of AI Regulatory Landscape

China is implementing some of the strictest AI laws around the globe. In contrast to the EU, however, China is focusing primarily on specific applications and classifying risk based more on national security.

  • Administrative Measures for Generative Artificial Intelligence Services (2023, published): Rules that came into effect on August 15, 2023, regulating generative AI services. They include requirements for licences to operate, regular security assessments, and adherence to Chinese values.

  • Administration of Deep Synthesis of Information Services (2022, published): Governs "deep synthesis technology" (AI/ML and algorithmic processing systems). It stipulates, among other things, that users must consent to their image being used in outputs, that the use of deepfakes must be disclosed via labels, and that content endangering national security and interests is prohibited.

  • Internet Information Service Algorithmic Recommendation Management Provisions (2022, published): This law regulates algorithms in apps and websites used in China. Key provisions include an online database of algorithms with "public opinion properties" or "social mobilization capabilities," regular reporting, and transparency about how an algorithm works, including its database.

  • Data Security Law (2021, published): This law requires the localisation of any data on Chinese citizens, meaning that such data held by domestic and foreign enterprises alike must be housed in mainland China. Export of data is forbidden without a "technology review."

  • Cybersecurity Law (2017, published): In the context of AI software, this law has a number of strict provisions around data. For example, Article 37 requires that all data be stored on servers in mainland China. There are other strict security obligations on "network operators," which include social media platforms, app developers, and tech companies.

Current Key Characteristics of China’s AI Regulation Regime

  • Innovation vs. Regulation: Previously, China cracked down vigorously on AI and algorithms, among other technology-related sectors. Now, however, China finds itself in a balancing act between courting private-sector AI investment and maintaining the established order, particularly in view of its sagging economic growth. This means there may be some limited opportunities for businesses in this field, but they are subject to stringent requirements.

  • Emphasis on AI Categories: Unlike the EU, China does not broadly define AI through a risk-based system. Its current legislation instead defines AI by specific application category or technology, such as generative AI versus algorithms versus deep learning. This division is possible because the structure of the government allows for rapid legislation and implementation. Legislation is therefore fast, responsive, and vertical.

  • Security and Political Concerns: China emphasizes security issues as a key point of regulation of AI and draws upon many previous laws in order to ensure AI systems do not pose any perceived threat to security. Given the lack of foreseeability and occasional “black box” nature of AI, however, this may prove a considerable challenge.

China's regulations target specific types of technology and algorithms and are not necessarily tiered into risk categories. Legislation tends to be vertical; that is, it addresses problems as they arise. However, the rules being legislated are strict, and this is likely a long-term trend.

The Hiroshima Process

On October 30, 2023, members of the G7 issued a statement committing to ensuring both innovation and development in AI and proper human rights protections. They highlighted two documents that provide voluntary frameworks for AI ethics among G7 nations: the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.

  • International Guiding Principles for Organizations Developing Advanced AI Systems: Aims to promote safe, secure, and trustworthy AI worldwide and provides guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems. Sets forth a non-exhaustive list of principles for organizations, including proper risk mitigation, transparency, and more.

  • International Code of Conduct for Organizations Developing Advanced AI Systems: Aims to provide voluntary guidance to organizations by advocating actions such as taking appropriate measures for risk identification and mitigation, public disclosure of AI capabilities, measures for data protection, and more.

  • Significance and Challenges: These two documents are non-binding international guides and thus have no enforceability or accountability mechanisms; they rely on the volition of companies to implement their principles. However, these frameworks have the potential to influence the trajectory of legislation in the G7 countries, most notably in Japan, which currently has none. G7 economies are also highly influential, and these frameworks can set the stage for other economies to follow.


AUTHOR

Jack Vernon is a senior analyst at IDC.

REFERENCES

[1]: IDC’s The Generative AI Arc Survey, August 2023.
[2]: IDC's European Emerging Technologies Survey, September 2023.
[3]: IDC’s AI Strategies View Survey, December 2022.
[4]: IDC’s AI Tracker Forecast 2023H2, October 2023.
[5]: IDC’s AI Spending Guide Forecast 2023H2, October 2023.



The HiPEAC project has received funding from the European Union's Horizon Europe research and innovation funding programme under grant agreement number 101069836. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.