We stand on the brink of a technological revolution where the web as we know it will break out from behind the screen and exist in the physical world along with us. Sometimes referred to as Web 3.0, the spatial web will seamlessly blend our physical and digital worlds together.

The Spatial Web: interconnecting people, places, things and AI for a smarter world

by Philippe Sayegh, Safae Essafi Tremblay, Dan Richardson and Chase Pletts

The Spatial Web is a concept in the evolution of the internet that envisions a multi-dimensional online space intertwined with the physical world. Unlike the traditional two-dimensional web, the Spatial Web integrates digital information with physical locations and objects, creating a seamless blend between the digital realm of ones and zeroes, and the physical realm of places and spaces. This unifying web marks the evolution from a network of pages to one of spaces – cyber-physical and conceptual alike – and will interconnect activities, people, places and things, as well as AI to form a smarter world.

AI will join with existing and new Cyber Physical Systems (CPS) such as sensors and actuators, IoT devices and appliances, and autonomous vehicles to become Autonomous Intelligent Systems (AIS): the fusion of AI, CPS, and the IoT.

This paradigm shift will massively impact the fabric of our professional, social, and personal lives, creating smarter urban environments and advancing personalized healthcare and immersive educational experiences. Embedded computing devices will enable feedback loops in which physical processes affect computational learning and vice versa.

The Spatial Web brings with it the possibility of creating a smarter world with new realms of possibility for individuals, organizations, and civilization as a whole.

Key insights

  • We're entering a new era of the internet: the “Spatial Web” – where the digital and physical worlds merge seamlessly, and computation and knowledge come out of our screens and into the world around us. The Spatial Web is sometimes called Web 3.0, Industry 4.0, the metaverse, or Society 5.0, depending on your vantage point. It integrates AI with Cyber Physical Systems, creating Autonomous Intelligent Systems. It will revolutionize everything from urban living to healthcare and education, making the world smarter and rich with new possibilities.

  • This next web will be a network of distributed intelligent agents (both human and machine) working together. To ensure that these autonomous systems understand, operate and connect with each other within the boundaries of safety, privacy, law, and ethics, new types of orchestration and governance will be needed.

  • The underlying infrastructure will need to be augmented and will require new socio-technical standards that are designed to provide governance capabilities, transparency, and auditability for autonomous agents and ecosystems of agents.

  • Furthermore, a shift in AI methodologies towards an approach based on Active Inference will enable AI that is transparent and explainable. AI agents that leverage the Active Inference framework continuously sense, understand, predict, and act. This ongoing cycle produces AI agents that can adapt and evolve over time, and that are explainable and self-learning by design.

Key recommendations

To propel the Spatial Web towards its maximal potential, we propose the following strategic initiatives:

  • World Model Creation: Develop models based on standards and spatial and multidimensional data to simulate impactful scenarios.

  • Promotion of socio-technical standards: Advocate for standardization in adaptive computing and metadata.

  • Upskilling: Encourage talent to master the implementation and methodology of Spatial Web standards.

  • Join Collaborative Bodies: Participate in groups like the Spatial Web Foundation and the IEEE Spatial Web Working Group (P2874).

  • Awareness and Collaboration: Engage in hackathons and competence networks to drive adoption of Active Inference based AI.

  • Funding: Allocate funds to support startups and applications using these standards and active inference based AI.

  • Industry-wide Implementation: Collaboratively work on broad-scale implementation of socio-technical standards.

Challenges and requirements

With all this power for change comes great responsibility. The convergence of exponential technologies has the potential to test civilization as much as it can help it. This shift necessitates responsible stewardship and serious ethical consideration. As industries are transformed by AI and our quality of life improves, it will be critical that governance keeps pace with innovation.

The emergence of Autonomous Intelligent Systems (AIS) introduces an entirely new set of challenges. Advanced entities, such as autonomous drones, sophisticated manufacturing robots, and interactive companion devices, will operate autonomously, learn from their surroundings, and effect tangible changes in the real world. Ensuring universal interoperability becomes essential, with 8 billion humans and an expected 30 billion devices set to connect to the Spatial Web by 2030.

Promoting this vision raises the following questions:

  • How do we get all these diverse systems to talk to each other?

  • And once they can talk to each other, how do we govern systems that are on the path to becoming self-governing?

The answer lies in the foundation of the Spatial Web: an augmentation of the current internet infrastructure with new standards and a new approach to modeling data and to AI. This new standard infrastructure will become the fabric that we use to connect to AI, and that AI agents will use to connect with each other. These innovations will make it possible to control how knowledge is structured and how information is shared on the network, and to build governance directly into the web itself. For this to happen, we need to upgrade the standards and protocols that form the backbone of the current Web 2.0.

Socio-technical Standards enable shared understanding between humans and machines

The web as we know it today runs on a suite of technical standards, among which HTML and HTTP are predominant. These web technologies were not explicitly designed to handle the demands of contemporary complex systems and the connected smart world for transactions, interoperability, security, and privacy. However, they were open by design, which allowed for sufficient, though laborious, creative adaptation.

As AI turns into an online commodity and becomes networked, the privacy and security challenges inherent in Web 2.0 technologies will grow exponentially. Ultimately, it may become impossible to course-correct as AI grows more and more powerful. We should therefore turn our attention to the fundamental infrastructure of the web: the standards that define it.

Historically, society has deployed technical standards to foster safety and interoperability in the use of technology.

Considering the power of AI to alter virtually every sector of the world economy, technical standards aren’t enough. A new generation of web standards will also need to address social requirements around transparency, explainability, accountability, safety, and other societal or human-centered values.

A hybrid approach of socio-technical standards can bridge the gap between technology and society. Socio-technical standards could enable AI and AIS to be technically sound, socially beneficial, safe, compliant with laws, and able to be aligned with societal norms and values.

In 2020, the IEEE P2874 Spatial Web Working Group was formed to lead the development of socio-technical standards for AI and AIS alignment, interoperability, and governance. These standards are informed by IEEE’s Ethically-Aligned Design P7000 Series of standards that address human rights, well-being, accountability, and transparency for AI and AIS.

The IEEE P2874’s Spatial Web Standards are being developed to address the following:

  1. Shared understanding of meaning and context between humans and AIs.

  2. Explainability of AI systems, enabled by the explicit modeling of their decision-making processes.

  3. Interoperability of data and models that enable universal interaction and collaboration across organizations, networks, and borders.

  4. Compliance features that are built to adapt by design with diverse local, regional, national, and international regulatory demands, cultural norms, and ethics.

  5. Authentication and credentialing, driving compliance and control over critical activities, with privacy, security, and explainability built-in by design as well.

These standards lay the foundations for the efficient integration and adoption of AI technologies while minimizing the risks inherent in AI. In the following sections, we highlight a few essential components of the Spatial Web Standards.

Socio-technical Standards enable comprehensive world modeling

World modeling in AI involves creating internal representations of the external environment, utilizing abstract symbols to understand and interact effectively. However, this process encounters a crucial challenge known as the grounding problem. This challenge emerges when translating symbolic representations into a meaningful reflection of real-world entities, requiring a bridge between the abstract and the concrete. Successful world modeling addresses the grounding problem by integrating sensory input and learning from real-world interactions. The resolution of the grounding problem enhances the accuracy and context-awareness of AI systems, enabling them to navigate diverse environments with a deeper understanding.

The successful implementation of AIS is therefore dependent on their ability to create comprehensive and dynamic world models. AI and AIS systems will need hyperdimensional world modeling to enhance performance and explainability. These systems must be adept at understanding and interpreting complex models of the world from as many perspectives and sensory inputs as required by the problem they are trying to solve or the activity they are trying to predict and optimize. For IoT and cyber-physical systems to stay pertinent and adjust to shifting use cases and scenarios, data must be integrated within a broad world model.

World modeling is multi-dimensional. It encapsulates identities, activities, environments, policies and credentials, which need to be expressed in a coherent and shared manner in different contexts:

  • Semantic (meaning and logic)

  • Spatial (physical and situational)

  • Societal (values and value)

  • Systems (networks and ecosystems)

Comprehensive world modeling needs to:

  • Be stateful

  • Be multi-modal / multi-dimensional

  • Be interpretable and actionable by machines

  • Be shareable between heterogeneous networks, devices and applications, and humans

  • Maintain coherence over time and space for all the actors/edges involved in a use case
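To make these requirements concrete, the sketch below shows one way a stateful, multi-dimensional world model might be represented in code. This is purely illustrative: the class names, field names, and example entities are invented for this sketch and are not part of the HSML or HSTP specifications.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Entity:
    """A modeled thing: an identity plus multi-dimensional context."""
    entity_id: str
    semantic: dict[str, Any] = field(default_factory=dict)   # meaning and logic
    spatial: dict[str, Any] = field(default_factory=dict)    # physical and situational
    societal: dict[str, Any] = field(default_factory=dict)   # values, credentials, policies
    systems: dict[str, Any] = field(default_factory=dict)    # network and ecosystem links

@dataclass
class WorldModel:
    """Stateful, shareable store of entities and the relations between them."""
    entities: dict[str, Entity] = field(default_factory=dict)
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # (subject, predicate, object)

    def upsert(self, entity: Entity) -> None:
        self.entities[entity.entity_id] = entity   # state persists across updates

    def relate(self, subject: str, predicate: str, obj: str) -> None:
        self.relations.append((subject, predicate, obj))

# Usage: a drone operating in a modeled warehouse zone.
wm = WorldModel()
wm.upsert(Entity("drone-1",
                 semantic={"type": "delivery-drone"},
                 spatial={"zone": "warehouse-A"},
                 societal={"credential": "cert-42"}))
wm.upsert(Entity("warehouse-A", semantic={"type": "zone"}))
wm.relate("drone-1", "located-in", "warehouse-A")
```

Because the structure is plain data, it can be serialized and shared between heterogeneous devices and applications, and each dimension can be extended independently.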

The Spatial Web socio-technical standards, Hyper-Spatial Modeling Language (HSML) and Hyper-Spatial Transaction Protocol (HSTP), enable world modeling by structuring spatial information and securing efficient transactions.


Hyper-Spatial Modeling Language is a knowledge modeling language that enables systems to encode properties of physical objects, logical concepts, and contextual activities. HSML facilitates multimodal world modeling and knowledge sharing among machines and humans, encompassing ethical, moral, economic, and societal considerations. HSML models relationships and activities, addressing the Who, What, When, Where, How, and Why.

HSML allows for the detailed description of entities and their relationships with other entities within the Spatial Web. It serves as a modeling language and semantic data ontology schema, crucial for creating complex and accurate models of spatial environments and contracts. By providing a framework for encoding these models in a way that is both human- and machine-readable, HSML facilitates the construction of dynamic, interactive world models.
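As a purely hypothetical illustration of the kind of Who/What/When/Where/How/Why record described above — the actual HSML schema is not reproduced here, and every field name below is invented for this example:

```python
# Hypothetical sketch of an activity record in the spirit of HSML's
# Who/What/When/Where/How/Why modeling; the field names and values are
# invented and do not reflect the actual HSML schema.
activity = {
    "who":   {"agent": "did:example:drone-1", "credential": "operator-cert-7"},
    "what":  {"action": "deliver", "object": "package-123"},
    "when":  {"start": "2024-05-01T09:00:00Z", "end": "2024-05-01T09:12:00Z"},
    "where": {"domain": "warehouse-A", "coordinates": [48.14, 11.58]},
    "how":   {"route": "corridor-3", "speed_limit_mps": 2.0},
    "why":   {"goal": "fulfil-order-998", "policy": "no-fly-after-22h"},
}

# A structure like this is readable by humans and machines alike; a machine
# can, for instance, check the governing policy before authorizing the activity.
def violates_curfew(act: dict, hour: int) -> bool:
    return "no-fly-after-22h" in act["why"]["policy"] and hour >= 22
```

The point of the sketch is that context (the "why" and "how") travels with the activity itself, so any system receiving the record can reason about it.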


Hyper-Spatial Transaction Protocol provides the methods for passing HSML messages in the Spatial Web. It provides a universal, secure, and verifiable protocol for communication between digital or physical systems, ensuring seamless interaction and cooperation between diverse AI systems. It incorporates a zero-trust architecture and strict authentication measures for secure data exchange and control over AI operations.

Figure 3. A simplified view of the HSTP query language.

HSTP manages the transactional aspect of models of spatial environments and contracts. It is designed to support automated contracting, ensuring decentralized, secure, and privacy-respecting interactions within the Spatial Web. By providing APIs for distributed computing platforms, HSTP enables smooth and secure exchange of information and resources within the modeled world, thereby supporting dynamic interactions and operations in world modeling.

HSTP’s zero-trust architecture ensures that data sharing across environments is carried out with security and privacy principles embedded by design, in particular by mandating verifiable credentials for any interaction between systems. This rigorous credential-based authentication process is particularly crucial for AI activities, as it allows them to operate within a secure and compliant framework, protecting against unauthorized access, ensuring the integrity of data and operations, and significantly enhancing the security and trustworthiness of all operations across the Spatial Web.

In contrast to the open structure of the internet and the World Wide Web, the Spatial Web, based on HSTP, is a permissioned network by design. This fundamental shift in architecture not only enhances security but also increases the reliability and predictability of AI operations within this environment.
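The credential-gated behaviour described above can be sketched as follows. This is an illustrative toy, not the actual HSTP API: the credential registry, function name, and response strings are invented for the example.

```python
# Illustrative sketch of zero-trust, credential-gated request handling as the
# text describes it; the real HSTP protocol and APIs are not shown here.
VALID_CREDENTIALS = {"cred-alice": "read", "cred-robot-7": "actuate"}  # hypothetical registry

def handle_request(credential: str, operation: str) -> str:
    # Zero trust: every request must present a verifiable credential;
    # nothing proceeds without both authentication and authorization.
    granted = VALID_CREDENTIALS.get(credential)
    if granted is None:
        return "rejected: unknown credential"
    if granted != operation:
        return "rejected: operation not permitted"
    return f"accepted: {operation}"

# Usage: only the credentialed holder of the right permission gets through.
print(handle_request("cred-robot-7", "actuate"))   # accepted
print(handle_request("cred-alice", "actuate"))     # rejected: wrong permission
print(handle_request("cred-mallory", "read"))      # rejected: unknown credential
```

The design choice worth noting is the default: in a permissioned network, an interaction is refused unless a verifiable credential explicitly allows it, the inverse of the open web's default.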

A new Approach to AI

The field of AI is at a critical juncture where traditional methods, often based on narrow, task-specific algorithms, are reaching their limits in terms of adaptability, generalizability, and understanding complex, real-world environments. This limitation calls for a new approach that can bridge the gap between highly specialized AI and the more versatile, adaptive intelligence seen in natural systems. Active Inference emerges as a promising answer to this challenge.

Active Inference

The framework of Active Inference decodes biological intelligence by analyzing how the human brain creates mental models and makes predictions based on those models. VERSES is now applying this framework to the fields of computer science and AI.

Originally developed by Karl Friston, the most highly cited neuroscientist[1], theoretician at University College London, and chief science officer at VERSES, active inference has the potential to transform the field of artificial intelligence by creating intelligent agents that can model the world, and use those models to think, plan, predict, and act.

Active inference defines in mathematical terms the process by which agents, whether living organisms or digital systems, learn by interacting with their environment. It posits that all intelligent agents are fundamentally engaged in minimizing the uncertainty between expected sensory inputs and actual sensory inputs. Agents make predictions about the world and then act to make those predictions come true. The goal is to reduce the level of surprise an agent experiences. The less uncertainty an agent must navigate, the better its chance of survival.
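In Friston's formulation, this drive to reduce surprise is made precise as the minimization of variational free energy, an upper bound on surprise (negative log evidence):

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

Here $o$ denotes sensory observations, $s$ the hidden states of the world, $q(s)$ the agent's current beliefs about those states, and $p$ its generative model; $-\ln p(o)$ is the surprise. Because the KL divergence is non-negative, driving $F$ down both improves the agent's beliefs (perception) and, via actions that change $o$, realizes its predictions.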

Active inference involves creating an internal representation of the external world. This world model enables active inference-based agents to make predictions about the causes of sensory inputs and the likely outcomes of actions.

As the agent takes action in the world, it may learn something new about its environment and update its world model with this new information. Agents can also draw on a shared world model that is continuously updated by multiple agents, keeping it current for all of them.

For AI and robotics, active inference offers a blueprint for creating systems that can autonomously learn and make informed decisions. These agents use predictions based on an always-up-to-date world model to guide their behavior, constantly adjusting their actions based on new data from the environment. This creates a feedback loop where the AI's actions are both informed by and inform its predictions, leading to a self-correcting system that can become more sophisticated over time.
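This predict-act-update loop can be illustrated with a deliberately simple toy agent. It is a sketch in the spirit of active inference, not the full variational treatment: the agent updates its belief from prediction errors and acts to pull the world toward its preferred observation.

```python
import random
random.seed(0)

# Toy predict-act-update loop: the agent keeps a belief about a hidden
# quantity, measures the prediction error ("surprise") against each noisy
# observation, updates the belief (perception), and acts to bring the world
# toward its preferred state (action).
class ToyAgent:
    def __init__(self, preferred: float, learning_rate: float = 0.5):
        self.belief = 0.0           # internal estimate of the hidden state
        self.preferred = preferred  # the observation the agent "expects"
        self.lr = learning_rate

    def step(self, observation: float) -> float:
        error = observation - self.belief       # prediction error
        self.belief += self.lr * error          # perception: update the model
        return self.preferred - self.belief     # action: push the world toward
                                                # the predicted/preferred state

# Usage: the environment is a single value the agent's action nudges directly.
env = 10.0
agent = ToyAgent(preferred=21.0)
for _ in range(20):
    obs = env + random.gauss(0, 0.1)   # noisy sensory input
    env += 0.3 * agent.step(obs)       # the action only partially takes effect

# The feedback loop settles near the preferred state (21.0).
assert abs(env - 21.0) < 1.0
```

Even in this toy, the self-correcting character of the framework is visible: perception and action both work to shrink the same prediction error.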

The shift towards self-learning and agentive AI necessarily poses crucial regulatory questions – How do we effectively regulate a system inherently designed for self-regulation? The evolving nature of AIS and its potential autonomy raise urgent considerations for regulatory frameworks, prompting a need to strike a balance between fostering innovation and ensuring responsible governance.

Governance and Regulations

Historically, laws have primarily focused on human-to-human interactions, where actors and subjects are humans or human-controlled entities. With the arrival of AI, however, there has been a shift in focus to human-to-AI interactions. This includes issues such as data privacy, algorithmic bias, intellectual property rights in AI-generated content, and questions around liability for decisions made by AI systems. Laws are being updated and new ones created to address these unique challenges, where the lines between the creator (human) and the creation (AI) are often blurred.

Looking ahead, there’s an anticipation that the legal system will need to evolve further to govern AI-to-AI interactions. This emerging field is likely to present unprecedented challenges. Key issues may include the autonomous decision-making by AI entities, interactions between different AI systems without direct human oversight, and the consequences of these interactions. For instance, two AI systems might negotiate contracts, conduct transactions, or even engage in conflict resolution without human intervention.

For AI systems to understand and apply laws, these laws must be converted into a format that machines can understand and process. This involves translating legal texts into structured data that can be easily understood in all contextual dimensions by computer algorithms. This would mean coding laws in a way that captures their essence and directives without ambiguity, which is a significant challenge given the complexity and nuanced nature of legal language. Beyond just being readable, laws need to be interpretable by AI: the AI must be able to understand the intent, context, and application of the law.

Developing a universally accepted socio-technical standard for how laws are encoded is therefore crucial. This ensures consistency in how different AI systems interpret and apply the law. Without standardization, there could be significant discrepancies in legal interpretations, leading to unpredictability and potential injustices.

Moreover, laws evolve over time, responding to societal changes, new understandings, and precedents. AI systems will need to be adaptable to these changes, requiring mechanisms for continuous learning and updating of legal knowledge bases.
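As a toy illustration of what such machine-readable encoding might look like — the rule, thresholds, and field names below are invented for the example and drawn from no real statute or standard:

```python
# Hypothetical sketch of a law encoded as structured, machine-interpretable
# data. All identifiers and values are invented for illustration.
noise_rule = {
    "id": "rule-noise-001",
    "jurisdiction": "city-of-example",
    "applies_to": {"entity_type": "delivery-drone"},
    "condition": {"metric": "noise_dB", "operator": "<=", "threshold": 65},
    "effective": {"from": "2024-01-01", "until": None},   # None = still in force
    "rationale": "residential noise protection",          # supports explainability
}

def complies(rule: dict, reading: dict) -> bool:
    """Evaluate one measurement against one encoded rule."""
    if reading["entity_type"] != rule["applies_to"]["entity_type"]:
        return True   # the rule does not apply to this kind of entity
    value = reading[rule["condition"]["metric"]]
    threshold = rule["condition"]["threshold"]
    op = rule["condition"]["operator"]
    return value <= threshold if op == "<=" else value >= threshold

# Usage: a compliant and a non-compliant noise reading.
print(complies(noise_rule, {"entity_type": "delivery-drone", "noise_dB": 60}))  # True
print(complies(noise_rule, {"entity_type": "delivery-drone", "noise_dB": 70}))  # False
```

Note how the encoded rule carries its own rationale and validity window: the same structure that a machine evaluates can be rendered for a human reader, and the recorded rationale is what an auditor would inspect when a decision is disputed.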

Additionally, machine-readable and interpretable laws must be accessible to those who are subject to them, including humans. This means that while laws need to be encoded for machines, they also need to be understandable by humans in a transparent manner, ensuring that the legal process remains open and fair. These laws also need to be explainable, in order to respond to auditability and liability concerns. It is crucial that the rationale behind automated legal decisions is clearly outlined and can be scrutinized. This ensures that in cases where disputes arise, or errors occur, there is a traceable decision-making process. This not only aids in holding systems and their creators accountable but also fosters trust in the technology by demonstrating that decisions are made based on logical and fair principles, and that there are mechanisms in place to rectify any mistakes or biases.

AIS International Rating System (AIRS)

Intelligent machines may need to operate optimally across a range of governance frameworks, from centralized to federated to distributed. Standards that facilitate interoperability across the spectrum will be essential.

The chart below illustrates how AI systems, as they become increasingly intelligent, gain the potential for greater autonomy. This is reflected in the corresponding governance framework that becomes available to them, along with all of the governance frameworks that came before it.

Figure 4: AIRS Chart sourced from: The Future of Global AI Governance


Philippe Sayegh is chief adoption officer at

Safae Essafi Tremblay is senior grant manager and researcher at

Dan Richardson is director of market analysis at the Spatial Web Foundation.

Chase Pletts is senior editor at

  1. AD Scientific Index 2024 ↩︎

The HiPEAC project has received funding from the European Union's Horizon Europe research and innovation funding programme under grant agreement number 101069836. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.