
Ijlal Loutfi
on 6 March 2026

Sovereign clouds: enhanced data security with confidential computing 


Increasingly, enterprises are interested in improving their level of control over their data, achieving digital sovereignty, and even building their own sovereign cloud. However, this means moving beyond thinking about just where your data is stored to thinking about the entire data lifecycle. 

In this blog, we cover the differences between data residency and data sovereignty, how confidential computing works to enhance the security of your data, and how it can support you in achieving digital sovereignty.

Data residency is not data sovereignty

Many people confuse knowing where your data is stored (data residency) with achieving data sovereignty. However, just because you know where your data is stored, and that it’s protected while stored, doesn’t mean that you have data sovereignty. In other words, it doesn’t mean you have complete control over it.

Storage is only one state of data. Achieving true sovereignty in your systems means considering everywhere that plaintext exists: in memory during computation, in registers during execution, in GPU memory during inference, and in intermediate buffers during training. If those states are visible to the host – the underlying physical machine, or the hypervisor running the workloads – then your data protection depends entirely on operator behavior. Whether that’s a public cloud operator, third party managed service provider, or your own IT department, you’re not enforcing a boundary. You are hoping one holds.

Confidential computing closes that gap. It is a hardware-level capability that encrypts data while it is being processed, not just when it is stored on disk or moving across a network. The processor itself enforces isolation using what are called Trusted Execution Environments (TEEs): protected regions of memory that the hypervisor cannot inspect, the host operating system cannot read, and administrators cannot dump. 

This is what makes it structurally different from conventional cloud security. Disk encryption protects data at rest. TLS protects data in transit. Confidential computing protects data in use, the one state where, until recently, defenders had almost no tools. And for sovereign clouds, that is the state that matters most.

From identity-based trust to state-based trust

Traditional cloud security is anchored to identity. IAM policies, role separation, access logs, and conditional access all answer one question: who is requesting access?

Confidential computing introduces a different question: what is the state of the environment where data is being used?

A workload proves the hardware class it runs on, the firmware version, the boot chain integrity, the measurement of its own code, and whether debug capabilities are active. Secrets are released based on verified state, not organizational assurances. You are no longer asking whether you trust this operator. You are asking whether this execution environment satisfies your cryptographic conditions.
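
To make this concrete, here is a minimal Python sketch of state-based secret release. The `AttestationEvidence` fields and the policy values are illustrative assumptions, not a real attestation API; a production verifier validates a signed hardware report rather than plain fields.

```python
# Illustrative sketch only: field names, policy values, and the measurement
# string are hypothetical, not a real attestation protocol.
from dataclasses import dataclass

@dataclass
class AttestationEvidence:
    hardware_class: str       # e.g. "AMD-SEV-SNP" or "Intel-TDX"
    firmware_version: tuple   # reported firmware/TCB version
    boot_chain_ok: bool       # measured boot chain verified
    code_measurement: str     # hash of the launched workload image
    debug_enabled: bool       # whether debug capabilities are active

POLICY = {
    "allowed_hardware": {"AMD-SEV-SNP", "Intel-TDX"},
    "min_firmware": (1, 55),
    "expected_measurement": "9f2c-example-digest",
    "allow_debug": False,
}

def release_secret(evidence: AttestationEvidence, policy=POLICY) -> bool:
    """Release a key only if the verified environment state satisfies policy:
    the question is not who asks, but what state the environment is in."""
    return (
        evidence.hardware_class in policy["allowed_hardware"]
        and evidence.firmware_version >= policy["min_firmware"]
        and evidence.boot_chain_ok
        and evidence.code_measurement == policy["expected_measurement"]
        and (policy["allow_debug"] or not evidence.debug_enabled)
    )
```

Note that the operator’s identity appears nowhere in the check: every condition is a property of the execution environment.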

Control under privilege

Cloud systems are layered privilege machines. Firmware controls hardware. Hypervisors control guests. Operators control infrastructure. Providers control updates. Supply chains control binaries. To achieve data sovereignty, you need to ask a difficult question that cuts across all of them: what happens when the layer with privilege is not the layer you want to trust?

Without confidential computing, the answer is uncomfortable. You rely on contracts, governance frameworks, and organizational separation; mechanisms that constrain behavior, but cannot constrain capability. With confidential computing, the answer becomes structural: privilege does not automatically grant visibility.

TEEs encrypt memory in use and restrict host introspection. The hypervisor may schedule a workload, but it cannot read its memory. An administrator may control the host, but cannot dump secrets from the guest. Debug pathways are restricted or disabled at the silicon level.
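
As a small guest-side illustration, the sketch below checks for the Linux attestation device nodes (`/dev/sev-guest` and `/dev/tdx_guest`, created by the upstream sev-guest and tdx-guest kernel drivers). This is a heuristic only: the absence of a node means the attestation interface is not exposed in this guest, not that memory is unencrypted.

```python
# Heuristic sketch: infer from inside a Linux guest whether a TEE guest
# attestation driver is present. Device node names match the upstream
# sev-guest and tdx-guest drivers; everything else is illustrative.
import os
from typing import Optional

def detect_tee_guest() -> Optional[str]:
    """Return a label for the detected TEE guest interface, or None."""
    if os.path.exists("/dev/sev-guest"):
        return "AMD SEV-SNP guest"
    if os.path.exists("/dev/tdx_guest"):
        return "Intel TDX guest"
    return None
```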

This does not negate trust. It narrows it. And narrowing trust surfaces is the core engineering discipline behind any sovereign cloud that deserves the name.

Programmable sovereignty

The different levels of sovereignty – data, operational, and software, which we’ve discussed at greater length on our webpage – are typically treated as static requirements. Data must stay here. Operators must be local. Legal control must be bounded by jurisdiction. Software must be accessible without lock-in.

Confidential computing makes both data and operational sovereignty programmable. Attestation policies can define which hardware generations are acceptable, which firmware versions are approved, which configurations are disallowed, and which geographic attestation roots are trusted. Those policies can evolve without re-architecting infrastructure.
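
A hedged sketch of what “policies can evolve without re-architecting infrastructure” can look like in practice: because the attestation baseline is plain data, tightening requirements is a rotation of that data rather than a redesign. All field names and values here are illustrative assumptions.

```python
# Illustrative only: baseline fields and values are hypothetical.
BASELINE_2025 = {
    "version": "2025-Q4",
    "allowed_hardware": ["AMD-SEV-SNP", "Intel-TDX"],
    "min_firmware": (1, 51),
    "trusted_attestation_roots": ["eu-root-ca"],
}

def rotate_baseline(old: dict, **changes) -> dict:
    """Produce a new baseline; workloads simply attest against the
    updated requirements, while running infrastructure is untouched."""
    return {**old, **changes}

# A regulator raises the bar: bump the firmware floor, keep everything else.
BASELINE_2026 = rotate_baseline(
    BASELINE_2025,
    version="2026-Q1",
    min_firmware=(1, 55),
)
```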

This matters most where regulatory frameworks move faster than infrastructure lifecycles. Instead of redesigning systems to meet new requirements, you rotate attestation baselines. That agility is not a convenience; it is a strategic capability that most sovereign deployments do not yet appreciate.

Blast radius, not distrust

There is a persistent misunderstanding that confidential computing implies an adversarial relationship with operators. That framing is shallow. 

The real issue is blast radius. Even trusted operators make mistakes, get phished, inherit compromised dependencies, become subject to legal orders from foreign jurisdictions, and rotate personnel. Infrastructure systems cannot assume that privileged access will always align with national or organizational intent, not because operators are necessarily malicious, but because assuming perpetual alignment is an architectural error.

Confidential computing reduces what privileged access can see. That is risk minimization applied to the one attack surface sovereignty frameworks care most about: administrative and systemic pathways, not internet-facing exploits.

Sovereign AI without runtime protection is strategically exposed

As sovereign cloud strategies absorb AI workloads – national language models, public-sector analytics, and defense applications – the problem intensifies along every axis.

Foundation models represent intellectual capital, and inference pipelines process high-value inputs in real time. Without runtime protection, model weights can be extracted from host memory, training data is visible during processing, debug hooks become exfiltration vectors, and multi-tenant infrastructure becomes a leakage surface.

The more valuable the model, the less acceptable hypervisor visibility becomes. In sovereign AI, confidential computing is not an enhancement, but a necessity. Learn more on our webpage.

The central role of open source in confidential computing

There is a dependency that is easy to overlook. Attestation proves that a workload matches a known measurement. But if you cannot audit the kernel, reproduce the build, verify the signing process, or inspect the virtualization stack, then the measurement proves very little. You have cryptographic evidence of an opaque system. That is not sovereignty.

Confidential computing becomes meaningful only when the measured components are themselves transparent and reproducible. Operating system integrity, supply-chain provenance, and verifiable builds are not peripheral concerns. They are the substrate on which attestation credibility rests. 
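
A minimal sketch of why reproducibility underpins attestation: the attestation service can prove a workload matches a digest, but only a rebuild that yields the same digest tells you what that digest means. Paths and helper names below are assumptions for illustration.

```python
# Sketch: comparing the measurement of a shipped artifact against a
# locally reproduced build. Helper names are illustrative.
import hashlib

def measure(path: str) -> str:
    """SHA-256 of a file, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_reproducible(artifact_path: str, rebuilt_path: str) -> bool:
    """Attestation proves the workload matches a hash; a reproducible
    build proves what that hash actually contains."""
    return measure(artifact_path) == measure(rebuilt_path)
```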

This is why open source is not ideological in sovereign cloud architectures; it is structural. Learn more about why open source is necessary to achieve a truly sovereign cloud in “Sovereign cloud: the essential guide for enterprises.”

Ubuntu powering the world’s public and private confidential clouds

Confidential computing is a strategic investment area that Canonical has been building toward for years. Today, Ubuntu makes confidential computing operational across both public cloud and on-premise enterprise environments. 

On major public cloud providers, including Azure, AWS, and Google Cloud, Ubuntu powers confidential virtual machines with first-class guest support for AMD SEV-SNP and Intel TDX, enabling hardware-isolated workloads without proprietary guest images. 

At the same time, Ubuntu provides the necessary host and hypervisor integration for enterprises deploying confidential computing on-prem, including kernel, QEMU/KVM, and virtualization stack support aligned with upstream TDX and SNP enablement. 

This dual capability is important: sovereign and regulated environments rarely operate in a single domain. They span public cloud and private infrastructure. Ubuntu’s open, upstream implementation across both host and guest layers allows organizations to deploy confidential workloads consistently, without bifurcating their operating system strategy or relying on opaque vendor-specific stacks. Confidential computing is not a feature bolted onto Ubuntu; it is integrated into the core platform that enterprises already standardize on.

What confidential computing actually does

Confidential computing does three things for sovereign cloud architectures: 

  1. It minimizes reliance on privileged infrastructure actors, 
  2. It converts sovereignty requirements into cryptographic conditions, 
  3. It makes assurance demonstrable rather than rhetorical.

It does not replace governance, remove the need for identity controls, solve application-layer vulnerabilities, or resolve geopolitical complexity. It does something narrower and more precise. Confidential computing enforces confidentiality at the only moment that truly matters: while computation is happening.

In cloud systems, computation is where data is most vulnerable. It is also where, until recently, defenders had the fewest tools. Confidential computing changes that. Not through policy, or through contract. Through architecture.

The question for every sovereign cloud strategy is whether you design around the reality that infrastructure privilege always exists, or whether you ignore it and hope governance holds.

Confidential computing is how you design around it.

Further resources:

Sovereign Cloud: the essential guide for enterprises

This guide will help you to understand key concepts, requirements, and options to build cloud sovereignty in your organization. 

Security in depth with Ubuntu: Mapping security primitives to attacker capabilities

Cybersecurity is not about perfection. In fact, it’s more like a game of chess: predicting your opponent’s moves and making the game unwinnable for them. The best defense isn’t a single unbreakable barrier, but instead a layered strategy that forces your adversary into a losing position at every turn. Learn more about Ubuntu’s security strategy in this blog. 
