Functional AI agents on your hardware.

Open-source software that runs AI agents on your own servers. No cloud, no telemetry, no lock-in. Deploy on bare metal, VMs, or Kubernetes.

physiclaw — quick-start
$ curl -fsSL https://get.physiclaw.dev | sh -s -- \
  --cluster-name my-agents \
  --enable-gpu \
  --license oss

How it works

1. Deploy on your infrastructure

Run Physiclaw on bare metal, VMs, or Kubernetes. Everything stays inside your perimeter.
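On Kubernetes, for example, a minimal deployment could look like the sketch below. The image name, port, and GPU request are illustrative assumptions, not a published manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: physiclaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: physiclaw
  template:
    metadata:
      labels:
        app: physiclaw
    spec:
      containers:
        - name: physiclaw
          image: registry.internal/physiclaw:latest  # hypothetical internal registry
          ports:
            - containerPort: 8080                    # assumed API port
          resources:
            limits:
              nvidia.com/gpu: 1                      # requires the NVIDIA device plugin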

2. Connect to your tools

Agents plug into Prometheus, K8s, Vault, Slack, and other on-prem services you already use.
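A hypothetical sketch of what that wiring could look like in physiclaw.yaml; the keys mirror the style of the config shown under Extend, but they are assumptions rather than documented schema:

integrations:
  prometheus:
    url: "http://prometheus.monitoring.svc:9090"  # in-cluster metrics endpoint
  vault:
    addr: "https://vault.internal:8200"
    auth: "kubernetes"                            # workload identity, no static tokens
  slack:
    webhook: "vault:kv/slack/webhook"             # secret resolved from Vault at runtime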

3. Chat or command; agents run

Assign tasks from the terminal or the API. Agents execute on your stack, and no data leaves your network.
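As an illustration, a declarative task spec might look like the following; the schema is hypothetical:

task:
  role: "sre"
  goal: "Investigate elevated p99 latency on the checkout service"
  tools: ["prometheus", "k8s"]
  approval: "manual"   # require human sign-off before mutating actions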

Agent Roles

Specialized agents, your infrastructure

Pre-built roles for SRE, security, data, and code tasks. Each loads its own toolchain.

SRE: Prometheus, K8s, Terraform, Grafana, Alerting, Log Analysis
Security: CVE Scan, IAM, SIEM, Compliance
Data: SQL, ETL, Snowflake, dbt, Quality
Code: Refactor, Tests, Linting, CI/CD, Docs

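Enabling a role and trimming its toolchain could be a config entry like the hypothetical snippet below; the role and tool names are taken from the list above, but the keys are assumptions:

agents:
  - role: "sre"
    toolchain: ["prometheus", "k8s", "grafana", "alerting"]
  - role: "security"
    toolchain: ["cve-scan", "iam", "siem", "compliance"]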

Integrations

Common enterprise on-prem services

Connect agents to the tools you already run inside your perimeter.

Prometheus: metrics and alerting
Grafana: dashboards and visualization
Kubernetes: orchestration and workloads
Vault: secrets and identity
LDAP / Active Directory: identity and access
PostgreSQL: data and vector store
GitLab: source and CI
Jenkins: pipelines and automation
SIEM: security events and audit
Slack: chat and notifications
Microsoft Teams: chat and collaboration
OpenTelemetry: traces and observability
Splunk: log aggregation and search
Elastic: search and analytics

Security

Nothing leaves your network.

Every layer runs inside your perimeter. No telemetry, no phone-home, no external trust boundaries.
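Independent of Physiclaw itself, you can enforce the no-egress guarantee at the cluster boundary. A standard Kubernetes NetworkPolicy that restricts pods to internal address space looks like this (the namespace and CIDR are examples):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external-egress
  namespace: physiclaw        # example namespace
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8  # allow internal ranges only; add DNS rules as needed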


Extend

Everything is a config change.

Swap runtimes, vector stores, and audit backends in YAML. No vendor calls, no lock-in.

# physiclaw.yaml (config v0.9)
---
runtime:
  backend: "vllm"            # hot-swappable inference engine
  model: "llama-3-70b"       # any GGUF / safetensors weights
  gpu_layers: "auto"         # offload control
  max_concurrent: 32         # per-node parallelism

knowledge:
  store: "pgvector"          # your vectors, your network
  embedder: "bge-large"      # on-prem embedding model
  chunker: "semantic"        # document splitting strategy
  reranker: "cross-encoder"  # optional re-ranking pass

audit:
  backend: "merkle-log"      # tamper-evident storage
  signing: "cosign"          # cryptographic verification
  export: "siem-sink"        # compliance export target
  retention: "forever"       # WORM retention policy
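Swapping a component is then a one-line edit. For instance, moving inference off vLLM would mean pointing runtime.backend at another engine; the value below is hypothetical and assumes the target engine is among those Physiclaw supports:

runtime:
  backend: "tgi"             # hypothetical alternative inference engine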