
roadmap: Provider Enhancements -- Declarative Profiles, Auto-Injected Policy, Multi-Provider Inference #896


Sourced from GitHub Discussion #865 by @johntmyers

We're proposing a significant enhancement to OpenShell's provider system. Today, providers only handle credentials -- network access, inference routing, and policy configuration are all separate manual steps. This proposal unifies them under declarative provider profiles.

We'd love community feedback on the UX, the policy composition model, and anything we might be missing.

Related issues: #565 (deny rules, shipped), #768 (incremental policy updates), #825 (policy update CLI)


Today, configuring a provider and configuring network access are completely disconnected. Creating a github provider does nothing for network policy -- the user must separately author YAML that allows api.github.com:443 and github.com:443, define binaries, set access presets, and get enforcement right. Testing showed this requires multiple policy update loops even with agent assistance.

Providers already know what endpoints they need. A claude provider needs api.anthropic.com, statsig.anthropic.com, sentry.io. A github provider needs api.github.com and github.com. This information should come from the provider, not from the user.

Additionally, providers that offer inference endpoints currently require those endpoints to be set explicitly in network policy, or users must select exactly one inference provider and model to use with the local inference.local endpoint exposed by the proxy. In this proposal, providers register endpoints with inference.local based on their provider type profile definitions.

Core changes

  • Introduce Provider Type Profiles: declarative YAML definitions that register a provider type. A registered type is then used to create a provider via openshell provider create … Each profile contains:
    • Expected credential types (what providers support today) and the injection point for each credential.
    • Known endpoints, expressed in the existing network policy language.
    • Binaries, expressed in the existing policy language.
    • Verification: an optional endpoint used to verify connectivity to a provider at creation time.
    • Inference: the base URL for inference, the inference protocol, and optional default headers.
  • Auto-inject provider endpoints into sandbox network policy
  • Allow attaching and detaching of providers to/from running sandboxes
  • Inference automation: auto-configure inference when inference-capable providers are attached
    • Support multi-provider inference with path-based routing (inference.local/anthropic/v1/messages)
  • Optional credential verification on provider creation (i.e. probe an endpoint)
  • Allow registration of arbitrary Provider Type Profiles
  • Credential refresh lifecycle: declarative refresh strategies (OAuth2 token rotation, static per-request re-read), configurable max lifetime, fail-closed expiry

Roadmap Items

These are future enhancements that build on top of the proposed provider enhancements:

  • Automatically run the policy prover on sandbox startup and optionally halt sandbox creation if any risks are identified
  • On verification API calls, extract credential scope automatically when supported by the upstream provider. These scopes will be auto-injected into prover analysis.
  • Support additional refresh strategies (OIDC, AWS STS, GCP service account key rotation)
  • Credential refresh telemetry (rotation count, failure rate, time-to-expiry metrics via OCSF events)

Supporting Work

These are isolated features that facilitate the UX around Provider Profiles:

Provider Type Profiles

We show two examples below for GitHub and Claude Code. OpenShell will ship with default provider type profiles; these can be used as-is or as templates when registering new profiles.

GitHub Example

id: github
display_name: GitHub
description: GitHub API and Git operations
category: source-control

credentials:
  - name: api_token
    description: GitHub personal access token or fine-grained token
    env_vars: [GITHUB_TOKEN, GH_TOKEN]
    required: true
    auth:
      style: bearer

endpoints:
  - host: api.github.com
    port: 443
    access: read-write
    protocol: rest
    enforcement: enforce
    deny_rules:
      - method: PUT
        path: "/repos/*/branches/*/protection"
      - method: PUT
        path: "/repos/*/branches/*/protection/**"
      - method: "*"
        path: "/repos/*/rulesets"
      - method: "*"
        path: "/repos/*/rulesets/*"
      - method: POST
        path: "/repos/*/pulls/*/reviews"
      - method: POST
        path: "/repos/*/actions/runs/*/approve"
  - host: github.com
    port: 443
    access: read-only
    protocol: rest
    enforcement: enforce

binaries:
  - /usr/bin/gh
  - /usr/local/bin/gh
  - /usr/bin/git
  - /usr/local/bin/git

verification:
  endpoint: api.github.com
  method: GET
  path: /user
  expected_status: 200

Claude Code Example

id: claude
display_name: Claude Code
description: Claude Code (Anthropic's coding CLI)
category: inference

credentials:
  - name: api_key
    description: Anthropic API key
    env_vars: [ANTHROPIC_API_KEY, CLAUDE_API_KEY]
    required: true
    auth:
      style: header
      header_name: x-api-key

endpoints:
  - host: api.anthropic.com
    port: 443
    access: read-write
    protocol: rest
    enforcement: enforce
  - host: statsig.anthropic.com
    port: 443
    access: read-write
    protocol: rest
    enforcement: enforce
  - host: sentry.io
    port: 443
    access: read-write
    protocol: rest
    enforcement: enforce

binaries:
  - /usr/local/bin/claude
  - /usr/bin/claude

inference:
  base_url: https://api.anthropic.com/v1
  protocols: [anthropic_messages, model_discovery]
  default_headers:
    anthropic-version: "2023-06-01"

verification:
  type: inference_probe

Inference-capable providers can set type: inference_probe to reuse the existing inference verification (minimal POST /v1/messages request) instead of defining a custom endpoint.

CLI Changes

Browse available Provider Types

Providers are grouped based on the category key in the provider type profile.

$ openshell provider types

Available Provider Types:
  INFERENCE
    anthropic    Anthropic API (Claude models)           endpoints: 1
    claude       Claude Code (coding CLI)                endpoints: 3
    nvidia       NVIDIA AI endpoints                     endpoints: 2
    openai       OpenAI API (GPT models)                 endpoints: 1

  SOURCE CONTROL
    github       GitHub API and Git operations           endpoints: 2   deny: 6
    gitlab       GitLab API and Git operations           endpoints: 2

  MESSAGING
    slack        Slack messaging                         endpoints: 2
    telegram     Telegram Bot API                        endpoints: 1
    discord      Discord Bot API                         endpoints: 3

Provider Creation

Provider creation largely stays the same, except for additional output showing which policy data is included. This policy data is automatically merged into a sandbox's running policy.

$ openshell provider create --type github --name my-github --from-existing

  Verifying credentials against api.github.com...
  Credential verification passed (200 OK)

  Created provider 'my-github' (type: github)

  Credentials:  GITHUB_TOKEN ............a1b2  (verified)
  Endpoints:    api.github.com:443 (read-write), github.com:443 (read-only)
  Binaries:     /usr/bin/gh, /usr/local/bin/gh, /usr/bin/git, /usr/local/bin/git
  Deny rules:   6 safety rules (branch protection, PR approval, ...)

  Sandboxes using this provider will automatically have network access
  to these endpoints. No additional policy configuration required.

Or providing credentials directly:

$ openshell provider create --type github --name ci-github \
    --credential "GITHUB_TOKEN=ghp_abc123def456"

  Verifying credentials against api.github.com...
  Credential verification passed (200 OK)
  Scopes: repo, workflow

  Created provider 'ci-github' (type: github)

  Credentials:  GITHUB_TOKEN ............f456  (verified)
  Endpoints:    api.github.com:443 (read-write), github.com:443 (read-only)
  Binaries:     /usr/bin/gh, /usr/local/bin/gh, /usr/bin/git, /usr/local/bin/git
  Deny rules:   6 safety rules (branch protection, PR approval, ...)

Verification failure:

$ openshell provider create --type anthropic --name bad-key \
    --credential "ANTHROPIC_API_KEY=sk-ant-INVALID"

  Verifying credentials against api.anthropic.com...
  Credential verification failed: 401 Unauthorized

  Provider not created. Next steps:
    - Verify the API key is correct and active
    - Retry with --no-verify to skip verification

Multi-provider inference

When a provider profile contains inference information and a provider of that type is attached to a sandbox, the provider's inference endpoint becomes available through the inference.local proxy endpoint:

$ openshell sandbox create --provider my-anthropic,my-openai -- claude

  Inference auto-configured
  inference.local/anthropic -> api.anthropic.com (anthropic_messages)
  inference.local/openai    -> api.openai.com (openai_chat_completions)

  Default route: inference.local -> anthropic (first inference provider)
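The path-based routing above can be sketched in a few lines (illustrative only, not the proxy's actual implementation; the provider map and function name are hypothetical):

```python
def route(path, providers, default):
    """Map an inference.local request path to an upstream host.

    A leading path segment naming an attached provider selects that
    provider's upstream; otherwise the default route applies and the
    path is forwarded unchanged.
    """
    head, _, rest = path.lstrip("/").partition("/")
    if head in providers:
        return providers[head], "/" + rest
    return providers[default], path
```

For example, `/anthropic/v1/messages` resolves to the Anthropic upstream with the `/v1/messages` suffix, while a bare `/v1/messages` falls through to the default route.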

Customizing Provider Profiles

Need GitHub with extra endpoints or different deny rules? Fork and register:

$ openshell provider export-profile github > my-github.yaml
# Edit my-github.yaml: add endpoints, remove deny rules, etc.

$ openshell provider register-profile my-github.yaml

  Registered provider profile 'my-github'

$ openshell provider create --type my-github --name work-github --from-existing

  Verifying credentials against api.github.com...
  Credential verification passed (200 OK)
  Scopes: repo, read:org, write:packages

  Created provider 'work-github' (type: my-github)

Provider Attach / Detach

Currently, providers may only be attached to sandboxes at sandbox creation time. At creation, the userland process is given placeholder environment variables that are used in API calls; the sandbox proxy replaces those placeholders with concrete credentials.

It is not possible to attach providers after a sandbox has launched as there is no safe way to modify the live environment variables of the child process managed by the supervisor. This is a hard kernel limitation.

However, with Provider Profiles, the credential location (headers, query params, HTTP basic auth) is defined within the provider profile. This enables the sandbox proxy to inject credentials directly, without scanning for placeholder variables derived from the userspace environment.

Current Credential Injection Support

Today, the proxy intercepts outbound HTTP requests and resolves placeholder strings (openshell:resolve:env:<KEY>) to real secret values. The proxy supports six injection points:

Injection Point      Example                                                                                  How It Works
Exact header value   x-api-key: openshell:resolve:env:API_KEY → x-api-key: sk-real                            Entire header value is a placeholder
Bearer token         Authorization: Bearer openshell:resolve:env:TOKEN → Authorization: Bearer sk-real        Bearer <placeholder> pattern
Basic auth           Authorization: Basic base64("user:openshell:resolve:env:PASS") → resolved + re-encoded   Base64-decoded, placeholder resolved, re-encoded
Query parameter      ?key=openshell:resolve:env:API_KEY → ?key=sk-real                                        Per-value in query string, percent-encoded
URL path segment     /api/openshell:resolve:env:ORG_TOKEN/resources → /api/org-secret/resources               Standalone path segment
URL path substring   /botopenshell:resolve:env:TELEGRAM_TOKEN/sendMessage → /bot123:ABC/sendMessage           Concatenated within a path segment (Telegram-style)

All injection paths are fail-closed: if any unresolved openshell:resolve:env:* placeholder remains in the outbound request after rewriting, the request is rejected. Secrets are validated against HTTP header injection (CWE-113) and path traversal (CWE-22).
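The fail-closed behavior can be sketched as follows (an illustrative sketch only; the placeholder grammar and function names are assumptions, and the real proxy additionally applies the CWE-113/CWE-22 validation mentioned above):

```python
import re

# Hypothetical placeholder grammar: openshell:resolve:env:<KEY>
PLACEHOLDER = re.compile(r"openshell:resolve:env:([A-Z0-9_]+)")

def resolve_placeholders(text, secrets):
    """Replace each placeholder whose key is known; leave unknown ones intact."""
    return PLACEHOLDER.sub(lambda m: secrets.get(m.group(1), m.group(0)), text)

def rewrite_or_reject(outbound, secrets):
    """Fail closed: reject the request if any placeholder survives rewriting."""
    rewritten = resolve_placeholders(outbound, secrets)
    if PLACEHOLDER.search(rewritten):
        raise PermissionError("unresolved credential placeholder; request rejected")
    return rewritten
```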

Provider Profile Credential Declarations

Provider profiles declare how credentials should be injected via the auth block on each credential:

credentials:
  - name: api_token
    env_vars: [GITHUB_TOKEN, GH_TOKEN]
    required: true
    auth:
      style: bearer            # Proxy injects as: Authorization: Bearer <value>

  - name: api_key
    env_vars: [ANTHROPIC_API_KEY]
    required: true
    auth:
      style: header            # Proxy injects as: <header_name>: <value>
      header_name: x-api-key

  - name: api_key
    env_vars: [YOUTUBE_API_KEY]
    required: true
    auth:
      style: query             # Proxy injects as: ?<query_param>=<value>
      query_param: key

  - name: bot_token
    env_vars: [TELEGRAM_BOT_TOKEN]
    required: true
    auth:
      style: path              # Proxy injects into URL path segment
      path_template: "/bot{credential}/..."

By declaring the injection style in the profile, the proxy no longer needs to scan placeholder strings from the child process environment. Instead, when a request targets a provider endpoint, the proxy knows exactly where and how to inject the credential based on the profile's auth declaration. This opens the door to attaching providers after sandbox creation, since credential injection is proxy-side and does not depend on the child process environment.
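A rough sketch of profile-driven injection for the first three styles (the function is hypothetical; the path style's templating and the real proxy's request model are omitted):

```python
def inject_credential(headers, query, auth, secret):
    """Inject a credential according to a profile's auth block (sketch).

    `auth` mirrors the auth declarations shown above: style plus
    style-specific fields such as header_name or query_param.
    """
    style = auth["style"]
    if style == "bearer":
        headers["Authorization"] = "Bearer " + secret
    elif style == "header":
        headers[auth["header_name"]] = auth.get("prefix", "") + secret
    elif style == "query":
        query[auth["query_param"]] = secret
    else:
        # "path" (path_template) injection is omitted from this sketch
        raise NotImplementedError(style)
    return headers, query
```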

Credential Scoping

Provider profiles also enable credential scoping -- binding credentials to specific endpoints and binaries. Today, credential injection is endpoint-blind: the proxy resolves placeholders in any outbound request regardless of destination. With profiles, the credential, endpoints, and binaries are declared as a single unit, and the proxy enforces this binding at runtime.

This means a GITHUB_TOKEN credential is only injected for requests targeting api.github.com:443 or github.com:443, from binaries /usr/bin/gh or /usr/bin/git. A request from an unlisted binary to an unlisted endpoint carrying a credential placeholder would be rejected -- the credential is not scoped to that (endpoint, binary) pair.

                       Today                                              With Provider Profiles
Credential injection   Any request, any destination, any binary           Only requests matching the profile's endpoints + binaries
Exfiltration risk      Process could embed placeholder in request         Proxy rejects injection for unscoped destinations
                       to attacker-controlled host
Binding                None -- credentials float freely                   (credential, endpoint, binary) triple declared in profile, enforced by proxy
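The runtime scoping check reduces to something like the following (an illustrative sketch; field names mirror the profile examples above):

```python
def may_inject(profile, host, port, binary):
    """Inject only when both the destination endpoint and the requesting
    binary are declared in the provider profile."""
    endpoint_ok = any(e["host"] == host and e["port"] == port
                      for e in profile["endpoints"])
    return endpoint_ok and binary in profile["binaries"]
```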

Credential Refresh

Today, providers that use short-lived tokens (OAuth2, SSO) require users to build their own host-side refresh daemons and push scripts. This is the #1 workaround pattern in community demos (see brevdev/nemoclaw-demos#20, google-workspace-demo). Provider Profiles should make credential refresh a first-class, declarative feature.

The Problem

The existing proxy injection model assumes static credentials -- an API key that doesn't change for the lifetime of the sandbox. But many providers use OAuth2 tokens that expire (Anthropic SSO: ~8h, Google: ~1h). Without platform support, users must:

  1. Write a host-side daemon that reads the refresh token, calls the token endpoint, and pushes the short-lived access token into the sandbox
  2. Handle rotation timing, retry logic, sandbox replacement detection, and PID lifecycle
  3. Ensure the refresh token never enters the sandbox

This is ~300 lines of boilerplate per provider. Provider Profiles should eliminate it.

Refresh Strategies

Profiles declare a refresh block on each credential. Three strategies are supported:

Strategy           Mechanism                                            What enters sandbox             Rotation
static (default)   Proxy re-reads credential from store per-request     Nothing (proxy-injected)        Instant on host-side update
oauth2             Platform-managed refresh loop on the gateway         Short-lived access token only   Automatic, bounded by max_lifetime + lead_time
external           User-managed daemon/script pushes tokens via         Whatever the script pushes      User-controlled
                   openshell sandbox upload

Profile Schema: refresh Block

Static (default) -- No refresh block needed. The proxy re-reads the credential from the provider's credential store on every request. If the user rotates the key on the host, it takes effect immediately on the next proxied request.

credentials:
  - name: api_key
    env_vars: [ANTHROPIC_API_KEY]
    required: true
    auth:
      style: header
      header_name: x-api-key
    # refresh.strategy defaults to "static" -- no block needed

OAuth2 -- The gateway runs a refresh loop. The refresh token stays on the host; only the short-lived access token is used for proxy injection.

credentials:
  - name: oauth_token
    description: Anthropic OAuth access token (auto-refreshed from SSO)
    env_vars: [ANTHROPIC_ACCESS_TOKEN]
    required: true
    auth:
      style: header
      header_name: Authorization
      prefix: "Bearer "
    refresh:
      strategy: oauth2
      token_url: https://platform.claude.com/v1/oauth/token
      client_id: "9d1c250a-e61b-44d9-88ed-5944d1962f5e"
      grant_type: refresh_token
      scopes: [user:inference]
      max_lifetime: 7200        # Force rotation every 2h regardless of server expiry
      lead_time: 600            # Begin refresh 10 min before expiry
      source: ~/.claude/.credentials.json
      source_path: claudeAiOauth  # Path to the credential object in the source file

Google OAuth2 example:

credentials:
  - name: google_access_token
    description: Google OAuth access token (auto-refreshed)
    env_vars: [GOOGLE_ACCESS_TOKEN]
    required: true
    auth:
      style: header
      header_name: Authorization
      prefix: "Bearer "
    refresh:
      strategy: oauth2
      token_url: https://oauth2.googleapis.com/token
      client_id: "<installed-app-client-id>"
      client_secret_env: GOOGLE_CLIENT_SECRET   # Read from host env/credential store
      grant_type: refresh_token
      scopes: [https://www.googleapis.com/auth/gmail.modify, https://www.googleapis.com/auth/calendar]
      max_lifetime: 3600        # Google tokens expire in ~1h
      lead_time: 600
      source: ~/.nemoclaw/credentials.json
      source_path: google

External -- The platform does not manage refresh. Instead, an external daemon pushes tokens and the profile declares where the gateway should read them from. This is an escape hatch for providers with non-standard auth flows.

credentials:
  - name: custom_token
    env_vars: [CUSTOM_TOKEN]
    required: true
    auth:
      style: header
      header_name: Authorization
      prefix: "Bearer "
    refresh:
      strategy: external
      token_file: /sandbox/.openclaw-data/custom/access_token
      expiry_file: /sandbox/.openclaw-data/custom/token_expiry   # epoch seconds

Security Invariants

These hold across all refresh strategies:

  • Refresh tokens never enter the sandbox. The oauth2 strategy runs the refresh loop on the gateway. The sandbox proxy injects only the short-lived access token into outbound requests.
  • Static credentials never enter the sandbox. The static strategy re-reads from the host credential store per-request. The sandbox process holds no secret material.
  • Fail-closed expiry. If the refresh loop fails (token endpoint unreachable, refresh token revoked), the access token expires naturally and subsequent proxy injections fail. The sandbox cannot fall back to a long-lived credential.
  • Sandbox replacement detection. The refresh loop tracks the sandbox UUID. If the sandbox is recreated (new UUID), the loop exits cleanly -- it will not push tokens into a different sandbox instance.
  • Credential scoping still applies. Refreshed tokens are bound to the same (credential, endpoint, binary) triple as static credentials. A refreshed OAuth token for Anthropic is only injected into requests targeting api.anthropic.com from allowed binaries.

Refresh Lifecycle

┌──────────────┐     ┌───────────────────┐     ┌──────────────────┐
│ Host         │     │ Gateway           │     │ Sandbox          │
│              │     │                   │     │                  │
│ credentials  │────▶│ Refresh loop      │     │ Process makes    │
│ .json        │     │ (oauth2 strategy) │     │ API request      │
│              │     │                   │     │       │          │
│ refresh_token│     │ access_token ─────│────▶│  Proxy injects   │
│ (never moves)│     │ (short-lived)     │     │  Bearer token    │
│              │     │                   │     │       │          │
│              │     │ Rotation timer    │     │  Request sent    │
│              │     │ min(server_expiry │     │  to upstream     │
│              │     │   - lead_time,    │     │                  │
│              │     │   max_lifetime)   │     │                  │
└──────────────┘     └───────────────────┘     └──────────────────┘

The refresh loop runs on the gateway, not as a separate host-side daemon. When the gateway manages multiple sandboxes using the same provider, a single refresh loop serves all of them -- the access token is shared across sandboxes since it's injected by the proxy, not pushed into sandbox filesystems.

Retry and Failure Handling

When a token refresh fails:

  1. Exponential backoff: retry at [5, 15, 30, 60, 120] second intervals (5 attempts)
  2. After exhaustion: the refresh loop marks the credential as degraded. The existing access token continues working until it expires.
  3. On expiry: proxy injection returns a 502 to the sandbox process. The error includes a diagnostic message: "credential refresh failed for provider 'my-claude' -- token expired"
  4. Recovery: if a subsequent refresh succeeds, the credential is marked healthy and injection resumes immediately.
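The retry schedule and degraded state above can be sketched as follows (illustrative only; `refresh` and `sleep` are injected for testability and are not the real interfaces):

```python
BACKOFF = [5, 15, 30, 60, 120]  # retry intervals in seconds, 5 attempts

def run_refresh_with_backoff(refresh, sleep):
    """Retry a failed refresh on the schedule above; mark the credential
    degraded after exhaustion."""
    for delay in BACKOFF:
        sleep(delay)
        if refresh():
            return "healthy"   # recovery: injection resumes immediately
    return "degraded"          # existing access token keeps working until expiry
```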

Rotation Timing

Two bounds control when the next rotation fires:

  1. Server expiry bound: refresh lead_time seconds before the token's expires_at (default: 600s / 10 min)
  2. Max lifetime bound: force rotation after max_lifetime seconds regardless of server expiry (default: 7200s / 2h)

The effective interval is min(server_expiry - lead_time, max_lifetime), floored at 30 seconds.
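As a worked sketch of that computation (the function name is hypothetical; parameter defaults mirror the stated defaults, and max_lifetime = 0 is treated as disabling that bound per the table below):

```python
def next_rotation_interval(server_expiry, lead_time=600, max_lifetime=7200):
    """min(server_expiry - lead_time, max_lifetime), floored at 30 seconds.

    max_lifetime = 0 disables that bound (rotate near server expiry only).
    """
    bounds = [server_expiry - lead_time]
    if max_lifetime:
        bounds.append(max_lifetime)
    return max(min(bounds), 30)
```

For an 8h server expiry with the defaults, the max-lifetime bound wins: min(28800 - 600, 7200) = 7200 seconds.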

Setting                    max_lifetime   Worst-case compromise window
Every 2h (default)         7200           ≤ 2h
Hourly                     3600           ≤ 1h
Near server expiry (~8h)   0 (disabled)   ≤ server expiry
Custom                     1800           user-configured

CLI: Refresh Management

View refresh status for a provider's credentials:

$ openshell provider refresh-status my-claude

PROVIDER    CREDENTIAL      STRATEGY  STATUS   LAST REFRESH         NEXT REFRESH         TTL
my-claude   oauth_token     oauth2    healthy  2025-04-21 10:15:02  2025-04-21 12:15:02  1h 43m

Force an immediate credential rotation:

$ openshell provider rotate my-claude

  Rotating credentials for provider 'my-claude'...
  OAuth2 refresh against platform.claude.com... OK
  New access token issued (expires in 8h, next rotation in 2h)

  Active sandboxes using this provider: 3
  All sandbox proxies will use the new token on next request.

Update refresh configuration for an existing provider:

$ openshell provider refresh-config my-claude --max-lifetime 3600

  Updated refresh config for 'my-claude':
    max_lifetime: 7200 -> 3600
    Next rotation adjusted: 2025-04-21 11:15:02

  Active sandboxes using this provider: 3

Provider Creation with OAuth2

When a provider profile uses the oauth2 strategy, provider create initiates the OAuth flow or reads existing tokens:

$ openshell provider create --type claude-sso --name my-claude --from-existing

  Reading OAuth credentials from ~/.claude/.credentials.json...
  Found refresh token (scopes: user:inference)

  Verifying token refresh against platform.claude.com...
  Access token obtained (expires in 8h)

  Created provider 'my-claude' (type: claude-sso)

  Credentials:  oauth_token (auto-refresh, max_lifetime: 2h)
  Endpoints:    api.anthropic.com:443, statsig.anthropic.com:443, sentry.io:443
  Refresh:      oauth2 via platform.claude.com (every 2h or 10m before expiry)

  The refresh token stays on this host. Sandboxes will only receive
  short-lived access tokens via proxy injection.

CLI: Attach and Detach

Attach a provider to a running sandbox:

$ openshell provider attach my-sandbox --provider my-github

  Attached provider 'my-github' to sandbox 'my-sandbox'

  Endpoints injected:
    api.github.com:443 (read-write, rest, enforce, 6 deny rules)
    github.com:443 (read-only, rest, enforce)
  Binaries: /usr/bin/gh, /usr/local/bin/gh, /usr/bin/git, /usr/local/bin/git

  Credential injection: proxy-side (GITHUB_TOKEN via Authorization: Bearer)

  Sandbox will receive updated policy on next refresh cycle.

Attach at sandbox creation time (existing flow, enhanced output):

$ openshell sandbox create --provider my-claude,my-github -- claude

  Providers attached:
    my-claude:  3 endpoints, inference enabled
    my-github:  2 endpoints, 6 deny rules

  Policy auto-configured
  5 endpoints injected from providers

  Inference auto-configured
  inference.local -> api.anthropic.com (anthropic_messages)

  Created sandbox 'my-sandbox-abc'

Detach a provider from a running sandbox:

$ openshell provider detach my-sandbox --provider my-github

  Detached provider 'my-github' from sandbox 'my-sandbox'

  Removed endpoints:
    api.github.com:443
    github.com:443

  Credential injection for GITHUB_TOKEN disabled.

  Sandbox will receive updated policy on next refresh cycle.

Detach removes the provider's _provider_* entry from the effective policy. User-authored rules for the same endpoints (Layer 3) are unaffected.

List providers attached to a sandbox:

$ openshell provider list --sandbox my-sandbox

NAME         TYPE       ENDPOINTS  INFERENCE  DENY RULES
my-claude    claude     3          yes        0
my-github    github     2          -          6

Policy Layer Model

Policy gains layers with tracked provenance. Layers are stored independently and composed JIT for sandbox use.

The 3-layer stack is:

+---------------------------------------------+
|              Effective Policy               |
|  (what the sandbox enforces -- single doc)  |
+---------------------------------------------+
|  Layer 3: User Policy                       |  openshell policy set
|  (explicit user-authored network rules)     |
+---------------------------------------------+
|  Layer 2: Provider Policy (per provider)    |  auto-generated from providers
|  [provider:github] endpoints, deny rules    |
|  [provider:claude] endpoints                |
+---------------------------------------------+
|  Layer 1: Base Policy                       |  filesystem, process, landlock
|  (static sandbox config)                    |
+---------------------------------------------+

Composition Semantics

Layers are concatenated, not merged. Each layer contributes separate entries to the network_policies map. They are never combined into a single entry.

network_policies:
  # Layer 2 entries (from providers) -- one per attached provider
  _provider_work_github:    { endpoints: [...], binaries: [...] }
  _provider_my_claude:      { endpoints: [...], binaries: [...] }

  # Layer 3 entries (from user) -- user-authored or via `policy update`
  custom_pypi:              { endpoints: [...], binaries: [...] }
  allow_uploads_github_443: { endpoints: [...], binaries: [...] }

There is no merging between layers. If Layer 2 and Layer 3 both reference api.github.com:443, they exist as separate rules. OPA evaluates all rules independently.

What Happens When the Same Endpoint Appears in Multiple Rules

This is the key question. The Rego evaluation handles it:

Decision   Evaluation                   Semantics
L4 allow   network_policy_for_request   ANY rule matching (host, port, binary) grants L4 access. Most permissive wins.
L7 allow   allow_request                ANY matching rule whose L7 rules permit the request grants L7 access. Most permissive wins.
L7 deny    deny_request                 ANY matching rule whose deny rules match the request blocks it globally. Most restrictive wins.

This means:

  • Provider deny rules can't be bypassed by user rules. If _provider_github denies POST /repos/*/pulls/*/reviews, adding a user rule for the same endpoint with access: full won't help -- deny_request scans across ALL matching rules globally.
  • User rules can add access beyond what the provider grants. If _provider_github only allows read-only, a user rule for api.github.com:443 with access: read-write effectively grants write access (most permissive allow wins). But deny rules still apply.
  • The combination is: union of allows, union of denies, deny wins over allow.
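The combination rule can be captured in a few lines (a deliberately simplified model: exact-match deny tuples stand in for the glob-based deny rules, and L4/L7 are collapsed into one allow check):

```python
def decide(rules, host, method, path):
    """Union of allows, union of denies, deny wins over allow."""
    matching = [r for r in rules if r["host"] == host]
    allowed = any(method in r.get("allow_methods", []) for r in matching)
    denied = any((method, path) in r.get("deny", []) for r in matching)
    return allowed and not denied
```

Here a permissive user rule grants extra methods, but a provider deny rule for the same endpoint still blocks the request.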

Composition Triggers

The effective policy is computed JIT from independently stored layers. Any change to a layer triggers recomposition -- the sandbox detects the change on its next poll cycle and rebuilds the effective policy locally.

What Triggers Recomposition

Trigger                                      What Changes                             Layer Affected   Scope
openshell policy set --file policy.yaml      Full user policy replacement             Layer 3          Single sandbox
openshell policy update --add-endpoint ...   Incremental user policy update (#825)    Layer 3          Single sandbox
Chunk approval (mechanistic mapper)          Incremental user policy update           Layer 3          Single sandbox
openshell provider attach                    Provider added to sandbox                Layer 2          Single sandbox
openshell provider detach                    Provider removed from sandbox            Layer 2          Single sandbox
Provider profile re-registered               Profile definition updated in registry   Layer 2          All sandboxes using that profile
openshell policy set --global                Global policy override                   All layers       All sandboxes

Full Policy Replacement

When a user runs openshell policy set --file policy.yaml, this replaces the Layer 3 user policy entirely. However, provider rules (Layer 2) are still composed in on top. The user-authored YAML only controls Layer 3 -- it cannot remove or override _provider_* entries.

Before:
  Layer 2: _provider_github (from attached provider)
  Layer 3: custom_pypi, allow_uploads_github_443 (user-authored)

User runs: openshell policy set --file new-policy.yaml
  new-policy.yaml contains: custom_npm (new rule)

After:
  Layer 2: _provider_github (unchanged -- still attached)
  Layer 3: custom_npm (replaced)

The _provider_github entry persists because it comes from the attached provider, not the user policy. To remove it, the user must detach the provider.

Incremental Policy Updates

Incremental updates via openshell policy update (#825) and chunk approvals both modify Layer 3 using the unified merge_policy() function. These changes are additive -- they merge into the existing Layer 3 rather than replacing it.

Before:
  Layer 2: _provider_github
  Layer 3: custom_pypi

User runs: openshell policy update my-sandbox --add-endpoint "npm.pkg.github.com:443:read-only"

After:
  Layer 2: _provider_github (unchanged)
  Layer 3: custom_pypi, allow_npm_pkg_github_com_443 (added)

The same merge function handles chunk approvals from the mechanistic mapper. When a user approves a draft rule in the TUI or CLI, the proposed rule is merged into Layer 3 using the same semantics.
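A minimal model of the additive behavior (the real merge_policy() semantics are richer; this sketch only shows rule addition and endpoint/binary union):

```python
import copy

def merge_policy(layer3, update):
    """Additive merge into Layer 3: new rules are added, existing rules
    gain any new endpoints/binaries, and nothing is removed."""
    merged = copy.deepcopy(layer3)
    for name, rule in update.items():
        if name not in merged:
            merged[name] = copy.deepcopy(rule)
            continue
        for key in ("endpoints", "binaries"):
            existing = merged[name].setdefault(key, [])
            existing.extend(x for x in rule.get(key, []) if x not in existing)
    return merged
```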

Provider Attach / Detach

Attaching a provider adds a _provider_* entry to Layer 2. Detaching removes it. Neither operation touches Layer 3.

Before:
  Layer 2: _provider_claude
  Layer 3: custom_pypi

User runs: openshell provider attach my-sandbox --provider my-github

After:
  Layer 2: _provider_claude, _provider_github (added)
  Layer 3: custom_pypi (unchanged)

User runs: openshell provider detach my-sandbox --provider my-github

After:
  Layer 2: _provider_claude (github removed)
  Layer 3: custom_pypi (unchanged)

If the user had also added a Layer 3 rule for api.github.com:443 via policy update, that rule survives the detach -- it belongs to Layer 3, not to the provider.

Global Policy Override

When a global policy is active (openshell policy set --global), incremental updates, chunk approvals, and provider attach/detach are all blocked. The global policy takes full control. This is existing behavior and is unchanged.
