In partnership with

The AI you deployed is losing customers.

You shipped it. It works. Tickets are resolving. So why are customers leaving?

Gladly's 2026 Customer Expectations Report uncovered a gap that most CIOs don't see until it's too late: 88% of customers get their issues resolved through AI — but only 22% prefer that company afterward. Resolution without loyalty is just churn on a delay.

The difference isn't the model. It's the architecture. How AI is integrated into the customer journey, what it hands off and when, and whether the system is designed to build relationships or just close tickets.

Download the report to see what consumers actually expect from AI-powered service — and what the data says about the platforms getting it right.

If you're responsible for the infrastructure, you're responsible for the outcome.

ResearchAudio.io

NVIDIA Made OpenClaw Enterprise-Safe in One Command

OpenShell enforces YAML policy guardrails. Nemotron runs locally. Here is the architecture.

OpenClaw went from a weekend side project to the fastest-growing open source project in history in under two months. The problem enterprises faced: an autonomous agent that needs broad file and network access to be useful is structurally at odds with a security team that cannot let it roam freely. No software patch can resolve that tension. NVIDIA decided to resolve it at the infrastructure level instead.

At GTC 2026, Jensen Huang announced NemoClaw, an open source stack that installs on top of OpenClaw with a single command. It adds the privacy and security infrastructure that enterprises need before trusting an autonomous agent with production data. Huang compared the moment to the arrival of Linux, Kubernetes, and HTML: "OpenClaw is the operating system for personal AI. This is the beginning of a new renaissance in software."

The Problem OpenClaw Alone Could Not Solve

OpenClaw's earlier iterations had documented vulnerabilities around prompt injection and unconstrained file access. Most were patched. But the underlying tension remained: an agent productive enough for enterprise use needs access to files, databases, and external APIs. Left ungoverned, that same breadth of access is a security liability. Analysts at Futurum Research described OpenClaw as "powerful and fast-moving, but essentially unconstrained."

NVIDIA's answer is OpenShell, an open source runtime that sits beneath OpenClaw and enforces policy-based guardrails at the infrastructure layer rather than at the application layer. NemoClaw installs OpenShell as part of the single-command setup alongside Nemotron models for local inference.

NemoClaw Stack Architecture

- OpenClaw — the coding agent itself
- OpenShell — YAML policy guardrails, network egress control, sandbox isolation
- Inference routing:
  - Local (sensitive data): Nemotron on RTX / DGX
  - Cloud (via privacy router): OpenAI, Anthropic, NVIDIA cloud

Source: NVIDIA NemoClaw Developer Docs & NemoClaw GitHub (github.com/NVIDIA/NemoClaw), March 2026

How OpenShell Enforces Policy

Every network request the agent makes passes through the OpenShell gateway. If the agent tries to reach a host not on the allowlist, OpenShell blocks the request and surfaces it in a terminal UI for operator approval. File access and inference calls follow the same declarative policy system: organizations define what resources agents can reach, which cloud services are permitted, and how different data classifications are handled, all in YAML.
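A policy of that shape might look like the sketch below. This is illustrative only: the field names (`egress`, `filesystem`, `data_classes`, and so on) are assumptions for the sake of the example, not the documented OpenShell schema.

```yaml
# Hypothetical OpenShell policy sketch — field names are illustrative,
# not the documented schema.
version: 1
egress:
  allow:
    - api.openai.com
    - api.anthropic.com
  default: prompt            # unknown hosts surface in the terminal UI for approval
filesystem:
  read:
    - /workspace/contracts
  write:
    - /workspace/output
inference:
  local_model: nemotron
  cloud_providers: [openai, anthropic, nvidia]
data_classes:
  proprietary: local-only    # sensitive data never leaves the machine
  general: cloud-allowed
```

The point of the declarative form is that the agent never sees the policy; the gateway beneath it does, so a compromised or confused agent cannot talk its way past the allowlist.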

The NemoClaw CLI orchestrates four components: the OpenShell gateway, the agent sandbox, the inference provider, and network policy. Deployment follows a four-stage blueprint lifecycle: resolve the artifact, verify its digest, plan resources, and apply through the OpenShell CLI. Parent agents can spin up child agents for specialized subtasks, and OpenShell maintains policy compliance across all of them.
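The "verify its digest" stage of that lifecycle amounts to hashing the resolved artifact and comparing against a value pinned in the blueprint. A minimal sketch of that check in Python — the function name and blueprint handling here are assumptions, not the NemoClaw implementation:

```python
import hashlib

def verify_digest(artifact_path: str, expected_sha256: str) -> bool:
    """Hash the resolved artifact and compare it to the pinned digest."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        # Stream the file in chunks so large artifacts don't load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Example: write a small artifact and verify it against its known digest.
with open("blueprint-artifact.bin", "wb") as f:
    f.write(b"nemoclaw blueprint payload")

pinned = hashlib.sha256(b"nemoclaw blueprint payload").hexdigest()
print(verify_digest("blueprint-artifact.bin", pinned))    # True
print(verify_digest("blueprint-artifact.bin", "0" * 64))  # False
```

Pinning a digest at resolve time and refusing to apply on mismatch is what keeps a tampered artifact from reaching the plan and apply stages.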

# Install NemoClaw on an existing OpenClaw instance
npx nemoclaw install
# OpenShell gateway launches, Nemotron models pulled locally
# YAML policy applied, sandboxed agent environment ready
sandbox@my-assistant:~$ openclaw agent --agent main --local \
  -m "summarize Q2 contracts" --session-id secure-01

Local Models for Sensitive Work, Cloud for Everything Else

NemoClaw installs NVIDIA's Nemotron open models on whatever dedicated hardware is available: GeForce RTX PCs, RTX PRO workstations, DGX Station, and DGX Spark. When a task involves proprietary data, the agent routes inference locally, keeping the data off cloud infrastructure entirely. A privacy router handles tasks that benefit from frontier model capability, routing those calls to OpenAI, Anthropic, or NVIDIA cloud while the guardrails remain in place.
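The routing decision described above can be sketched in a few lines. The class names and routing labels are invented for illustration; the actual privacy router's interface is not public in this level of detail:

```python
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    data_class: str  # "proprietary" or "general"

def route(task: Task) -> str:
    """Keep proprietary data on local Nemotron; send general work to cloud."""
    if task.data_class == "proprietary":
        return "local:nemotron"   # inference stays on RTX / DGX hardware
    return "cloud:frontier"       # router forwards to OpenAI, Anthropic, or NVIDIA cloud

print(route(Task("summarize Q2 contracts", "proprietary")))  # local:nemotron
print(route(Task("draft a blog outline", "general")))        # cloud:frontier
```

The significant design choice is who decides: the classification lives in policy, so neither the agent nor the developer can accidentally (or deliberately) send proprietary data to a cloud endpoint.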

The stack is hardware-agnostic. NemoClaw does not require NVIDIA GPUs to run. It integrates with NeMo, NVIDIA's existing AI agent software suite, but that dependency is optional. The design reflects a deliberate choice: NVIDIA wants adoption among enterprises that have non-NVIDIA hardware, and building in a hard GPU dependency would limit that reach.

1 — command to install
8 — Nemotron Coalition founding members
4 — security partners (Cisco, CrowdStrike, Google, Microsoft)

The Coalition and the Security Stack

Alongside NemoClaw, NVIDIA announced the Nemotron Coalition with eight founding members: Black Forest Labs, Cursor, LangChain, Mistral AI, Perplexity, Reflection AI, Sarvam, and Thinking Machines Lab. The coalition's stated goal is co-developing open frontier models optimized for agentic workloads.

NVIDIA is also working with Cisco, CrowdStrike, Google, and Microsoft Security to bring OpenShell compatibility to their respective security tools. The integration would embed OpenShell's guardrails into the broader enterprise security stack, rather than requiring a parallel system. That partnership structure matters: enterprises already have security tooling, and requiring teams to operate NemoClaw as a separate silo would slow adoption.

Key Insight: OpenShell addresses the agent security problem at the infrastructure layer, not the application layer. This is the same architectural logic that made Kubernetes a default: applications do not need to implement their own scheduling; the platform handles it. NemoClaw applies that principle to agent policy enforcement.

Key Insight: The local-plus-cloud inference model means enterprises can classify workloads by data sensitivity. Proprietary contracts and HR data stay on local Nemotron. General tasks go to cloud frontier models. The privacy router handles the routing decision, not the agent, and not the developer.

Key Insight: NemoClaw is currently an early-stage alpha. NVIDIA stated developers should "expect rough edges" and noted the current focus is environment setup, not production-ready deployment. Teams evaluating it for production workloads should treat it as a signal of architectural direction rather than a deployable solution today.

The open question is whether enterprises will hand their agent infrastructure to NVIDIA as readily as they handed their GPU training jobs to them. OpenShell's YAML policies are a reasonable answer to the security problem. Whether that answer is also a durable competitive position depends on how many of those eight coalition members and four security partners build deep integrations before an alternative emerges.


Sources: NVIDIA NemoClaw · NVIDIA Newsroom · GitHub · TechCrunch
