What This Is

The official NemoClaw installer (nemoclaw onboard) is broken on WSL2 — it forces --gpu when it detects nvidia-smi, but the GPU can’t pass through to k3s out of the box on Docker Desktop. This repo is the workaround that actually works.

We built a complete setup pipeline that handles sandboxed OpenClaw agent deployment across every platform NVIDIA supports — plus the one they don’t support well yet (WSL2 with GPU passthrough).

This is the first confirmed working GPU-enabled NemoClaw sandbox on WSL2 with an RTX 5090.


What’s Included

  • Two deployment paths for WSL2 — cloud inference (stable) and full GPU passthrough (experimental, confirmed RTX 5090)
  • macOS and native Linux support — tested paths for both platforms
  • Five vertical security policy templates — HIPAA, SOC 2, legal, base lockdown, and dev environment YAML policies for OpenShell
  • CDI pipeline patch scripts — automated GPU UUID device entry, libdxcore.so mount, containerd CDI enablement
  • Architecture documentation — two-gateway diagram, credential injection flow, GPU passthrough mechanics

Quickstart

Prerequisites

Platform   Requirements
WSL2       Windows 11, Docker Desktop, NVIDIA drivers 560+, WSL2 kernel 5.15+
macOS      Docker Desktop, Apple Silicon or Intel
Linux      Docker, NVIDIA Container Toolkit (for GPU path)
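
Before running setup on WSL2, you can check which path your driver qualifies for. This is an illustrative sketch based on the 560+ minimum in the table above, not part of the repo's scripts; the helper name is made up:

```shell
# Sketch: decide between --gpu and --cloud from the installed driver version.
# The 560 minimum comes from the prerequisites table; meets_min_driver is an
# illustrative helper, not something setup.sh provides.
meets_min_driver() {
  # $1: NVIDIA driver major version, e.g. "560"
  [ "${1:-0}" -ge 560 ]
}

if command -v nvidia-smi >/dev/null 2>&1; then
  major=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1 | cut -d. -f1)
  if meets_min_driver "$major"; then
    echo "driver $major: GPU path is a candidate (./setup.sh --gpu)"
  else
    echo "driver $major: below 560, use ./setup.sh --cloud"
  fi
else
  echo "no nvidia-smi found: use ./setup.sh --cloud"
fi
```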

Get Running

# Clone the repo
git clone https://github.com/thenewguardai/tng-nemoclaw-quickstart.git
cd tng-nemoclaw-quickstart

# WSL2 — cloud inference (stable path)
./setup.sh --cloud

# WSL2 — GPU passthrough (experimental)
./setup.sh --gpu

# macOS / Linux
./setup.sh

The setup script handles NemoClaw installation, sandbox creation, OpenShell policy deployment, and agent configuration. Cloud path takes ~5 minutes. GPU path takes ~15 minutes including the CDI pipeline patch.


Architecture

The stack runs as two layers:

NVIDIA OpenShell — sandbox runtime governing network, filesystem, and inference access via declarative YAML policy. Every request the agent makes passes through policy enforcement before execution.

NemoClaw orchestration — wraps OpenClaw inside the OpenShell sandbox, configures inference routing (local Nemotron models or cloud frontier models via privacy router), and manages the agent lifecycle.

┌─────────────────────────────────────┐
│  OpenClaw Agent                     │
│  ├── Skills / Tools                 │
│  └── LLM Inference                  │
├─────────────────────────────────────┤
│  OpenShell Sandbox                  │
│  ├── Network Policy (egress rules)  │
│  ├── Filesystem Policy (/sandbox)   │
│  └── Inference Policy (routing)     │
├─────────────────────────────────────┤
│  NemoClaw Runtime                   │
│  ├── Privacy Router                 │
│  ├── CDI / GPU Passthrough          │
│  └── Container Orchestration (k3s)  │
└─────────────────────────────────────┘

Security Policy Templates

The repo ships five ready-to-deploy OpenShell YAML policies for different verticals:

Policy              Use Case                Key Restrictions
base-lockdown.yaml  Default starting point  No external network, /sandbox only, local inference
hipaa.yaml          Healthcare / PHI        Audit logging, no cloud inference, encrypted storage
soc2.yaml           SaaS / enterprise       Allowlisted egress, credential rotation, access logging
legal.yaml          Legal / privilege       Document isolation, no external sharing, privilege markers
dev.yaml            Development / testing   Relaxed network, broader filesystem, cloud inference allowed

Policies are hot-swappable at runtime — change guardrails without restarting the agent.


Platform Notes

WSL2 GPU Passthrough (Experimental)

The GPU path patches the CDI pipeline to pass the GPU through Docker Desktop → k3s → NemoClaw sandbox:

  1. Adds GPU UUID device entry to the CDI spec
  2. Mounts libdxcore.so so the CUDA stack can reach the Windows GPU driver through WSL's DirectX paravirtualization layer
  3. Enables CDI in the containerd config
  4. Restarts the container runtime

Confirmed working on RTX 5090. Should work on any RTX 30/40/50 series with driver 560+. File an issue if it doesn’t — we want to expand the compatibility matrix.
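
Step 3 above can be sketched as follows. The config keys reflect how containerd 1.7+ enables CDI under its CRI plugin; the temp-file target, spec directories, and the trailing nvidia-smi note are assumptions for illustration, not a copy of the repo's patch script:

```shell
# Sketch of the CDI enablement step (step 3): containerd 1.7+ loads CDI
# device specs when enable_cdi is set under the CRI plugin. This writes to
# a temp file for illustration; the real patch would edit the k3s/containerd
# config in place, then restart the runtime (step 4).
CONFIG=$(mktemp)

cat >> "$CONFIG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  enable_cdi = true
  cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
EOF

grep -q 'enable_cdi = true' "$CONFIG" && echo "CDI enabled in $CONFIG"

# Step 1, for reference: the GPU UUID that goes into the CDI device entry
# can be read with:  nvidia-smi --query-gpu=uuid --format=csv,noheader
```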

macOS

Works on both Apple Silicon and Intel. No GPU passthrough (MPS support planned). Uses cloud inference by default.

Native Linux

Cleanest path. NVIDIA Container Toolkit handles GPU passthrough natively. Both cloud and GPU inference work out of the box.