As AI adoption outpaces public trust, Ahmad Shadid of ORGN makes the case that confidential computing and verifiable execution offer the cryptographic proof that conventional privacy assurances cannot.

Confidential Computing Is How AI Earns Back The Trust It Has Already Lost — And Why It Needs To Become The New Standard

2026/03/20 18:50
Why AI's Trust Problem Will Not Be Solved By Better Privacy Policies — And What Cryptographic Proof Can Do Instead

AI systems are moving fast into sensitive workflows — writing code, handling customer data, and supporting decisions in regulated sectors such as finance and healthcare. The speed of that integration has created a structural problem that the industry has yet to adequately address.

The challenge is trust. A study conducted by the University of Melbourne in collaboration with KPMG, surveying more than 48,000 people across 47 countries, found that while 66% of respondents use AI regularly, fewer than half — just 46% — say they are willing to trust AI systems. Usage and confidence are moving in opposite directions, and the gap between them is widening.

The data privacy dimension of this trust deficit is particularly acute. According to Stanford’s 2025 AI Index, global confidence that AI companies protect personal data fell from 50% in 2023 to 47% in 2024, while fewer people now believe that AI systems are unbiased and free from discrimination compared to the previous year. That decline is taking place precisely as AI becomes more deeply embedded in daily life and professional environments, making the stakes of misplaced trust considerably higher.

Ahmad Shadid, CEO of ORGN, the world’s first confidential development environment, argues that the next phase of AI will not be built on trust — it will be built on proof. Confidential computing and verifiable execution are making it possible to demonstrate exactly how data is processed, rather than simply promise that it is safe. 

In a conversation with MPost, he explained how these technologies address the privacy and trust gaps that conventional security measures leave open in AI workflows, and what it would take for them to become mainstream.

How AI Companies Typically Protect Data Today — And Why It Is Not Enough

Most AI companies currently rely on a combination of encryption, access controls, and governance policies to protect sensitive data. Encryption is applied to data at rest and in transit using established algorithms, while role-based access controls, logging, and anomaly detection govern who can interact with systems and under what conditions. These measures represent the industry baseline, and for many use cases, they are sufficient.

The problem arises at a specific and largely overlooked moment: when data is decrypted inside memory for model training or inference. At that point, a window of exposure opens. Confidential computing addresses this directly by encrypting data while it is actively being processed, within the hardware itself, so that even the infrastructure operator cannot see what is happening inside the machine.

Shadid identifies a structural vulnerability that standard security approaches do not fully close. When data is decrypted on a server that a customer does not directly control — a public cloud environment or a third-party AI platform, for instance — the customer has no technical means of verifying what actually happens to it. They are, in practice, relying on the vendor’s word.

This concern is not limited to end users. In regulated environments, CISOs, compliance auditors, and regulators face the same problem. They typically rely on ISO 27001 certificates, SOC 2 reports, and policy documents — instruments that, as Shadid puts it, prove intent more than they prove what actually happens to data in use. Confidential computing with attestation changes that equation by providing tamper-resistant cryptographic evidence that a specific model version ran inside an approved trusted execution environment with an approved software stack. The assurance shifts from documented intention to verifiable technical fact.

The regulatory momentum behind this shift is already visible. According to the IDC’s July 2025 Confidential Computing Study, the introduction of the EU’s Digital Operational Resilience Act led 77% of organisations to become more likely to consider confidential computing, with 75% already adopting it in some form. The primary benefits reported were improved data integrity, proven confidentiality assurances, and stronger regulatory compliance.

What Verifiable Execution Means In Practice

For a non-technical audience, Shadid describes verifiable execution as receiving a cryptographic receipt after an AI system processes data. That receipt demonstrates, in a mathematically verifiable way, that the AI ran on genuine certified hardware, that it executed the expected version of the software and nothing else alongside it, and that the environment was appropriately secured before any sensitive data was unlocked. The integrity of the process no longer rests on trusting the provider’s assurances — it rests on verifying the evidence.
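The "cryptographic receipt" idea can be illustrated with a short sketch. This is not any vendor's actual API: real attestation receipts are signed with asymmetric keys rooted in hardware, whereas this toy uses a symmetric HMAC purely to show the shape of the check, and the field names are assumptions.

```python
import hashlib
import hmac
import json

# Toy model of a verifiable-execution "receipt": the signature binds the
# output hash to the measured enclave, so tampering with either breaks it.

def make_receipt(output: bytes, enclave_measurement: str, signing_key: bytes) -> dict:
    payload = {
        "output_hash": hashlib.sha256(output).hexdigest(),
        "enclave_measurement": enclave_measurement,  # hash of the code that ran
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
    return {**payload, "signature": signature}

def verify_receipt(receipt: dict, output: bytes, signing_key: bytes) -> bool:
    payload = {k: v for k, v in receipt.items() if k != "signature"}
    body = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(signing_key, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, receipt["signature"])
        and payload["output_hash"] == hashlib.sha256(output).hexdigest()
    )
```

Anyone holding the verification key can confirm the output came from the expected environment without trusting the operator's word, which is the core of the shift Shadid describes.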

At a technical level, this is achieved through three interconnected mechanisms. Trusted execution environments, or TEEs, allow the processor to carve out a sealed enclave — memory and execution isolated at the silicon level — so that neither the operating system, the hypervisor, nor the cloud operator can read what is happening inside. Remote attestation then allows an external party to verify that a genuine TEE is running an approved software stack before any decryption keys or sensitive inputs are released. Finally, verifiable outputs allow some systems to sign their results with an attestation-linked certificate, so that anyone receiving the output can confirm it came from the expected application inside a protected environment and was not altered in transit.

Shadid argues that the advantages of confidential computing extend across the entire AI value chain. AI developers gain the ability to train and run models on sensitive or regulated datasets in shared cloud environments without exposing raw data to the platform operator. For enterprises, the technology reduces legal and reputational exposure by providing demonstrable proof that personal data remains protected during AI processing — supporting GDPR-class privacy requirements and sector-specific regulations. It also opens the door to cross-organisational data collaboration, because each party can verify that its data is only processed inside attested, policy-compliant environments, removing one of the principal barriers to joint AI projects.

For end users, the benefit is stronger and more tangible assurance that their personal data cannot be accessed by operators, insiders, or other cloud tenants while AI systems are running. It also makes higher-value services viable — personalised healthcare guidance or detailed financial advice, for instance — that were previously considered too sensitive to deliver via cloud infrastructure.

Shadid draws on his own experience as a software engineer to illustrate one of the less-discussed risks. Developers routinely paste proprietary code, configuration files, API keys, and tokens into AI coding tools, often with limited visibility into how that data is stored or used. The pace of the industry makes these tools difficult to avoid. It was precisely this tension — needing to move quickly while being acutely aware of the IP exposure — that led him to build ORGN, a confidential development environment constructed on confidential computing principles.

Why Mainstream Adoption Has Not Yet Arrived

Although 75% of organisations have adopted confidential computing in some form, the IDC study found that only 18% have moved it into production environments. Shadid identifies three principal barriers: the complexity of attestation validation, a persistent perception of the technology as niche, and a shortage of engineers with the relevant skills.

Attestation validation, he explains, is considerably more involved in practice than it appears on paper. Attestation evidence arrives as binary structures or JSON objects containing measurements, certificates, and collateral that must be parsed, checked against vendor roots, and validated for freshness and revocation. Developers must then determine what counts as trusted — which firmware versions, image hashes, and application measurements are acceptable — and wire that logic into their own control plane or key management system. Major cloud providers including AWS, Azure, and Oracle already offer confidential compute at costs broadly comparable to standard infrastructure, so the barrier is not access or price. It is the engineering depth required to operationalise attestation correctly.

Shadid’s view is that broader adoption will depend on three converging forces. First, attestation validation needs to become significantly more accessible, either through standardisation or through open-source tooling that abstracts the complexity away from individual development teams. Second, regulatory pressure will continue to drive adoption in the way that DORA already has — if frameworks in other sectors follow a similar trajectory, the business case for confidential computing will become increasingly difficult to set aside. Third, and perhaps most fundamentally, public awareness of what happens to data inside AI systems needs to grow. Most people, Shadid contends, have no clear picture of what occurs when they submit a prompt to a consumer AI tool. Greater awareness of that exposure — among developers and general users alike — would generate the kind of social pressure that accelerates adoption far more effectively than technical arguments alone.

Looking further ahead, he suggests that if confidential computing and verifiable execution become default infrastructure, the way AI services are designed, sold, and governed will change materially. Customers would receive cryptographic evidence of how their data was handled rather than policy assurances, enabling enterprises to demonstrate compliance to regulators and boards in concrete rather than documentary terms. The analogy Shadid draws is to storage and network encryption, which moved from optional security measure to universal baseline over a relatively short period. The direction for confidential execution, he argues, is the same — and once it arrives, every inference, every fine-tuning job, and every data handoff will carry a cryptographic attestation, making the integrity of the pipeline a matter of verifiable fact rather than institutional trust.

The post Confidential Computing Is How AI Earns Back The Trust It Has Already Lost — And Why It Needs To Become The New Standard appeared first on Metaverse Post.
