PREVIEW · FORTROS.DEV
FORTROS · CLUSTERING HYPERVISOR + SOVEREIGN · ZERO-TRUST · SPLIT-TOLERANT + BRING YOUR OWN OS · WE MANAGE THE IRON + NO VENDORS IN YOUR TRUST CHAIN + HARDWARE-ROOTED IDENTITY · K-OF-N MULTISIG + PREVIEW BUILD · 2026.04
SYS·INDEX / PREVIEW 001.04.2026
FORTROS · CLUSTERING HYPERVISOR

YOUR HARDWARE.
ONE SELF-HEALING CLUSTER.

FortrOS is a self-organizing operating system that runs at the hypervisor layer. Bring your own guest OS — Windows, Linux, whatever your users need — and FortrOS handles gossip-coordinated clustering, zero-knowledge storage, hardware-rooted identity, and split-tolerant state across every machine you own.

STATUS OPERATOR-VERIFIED · INIT S6+S6-RC · CONSENSUS GOSSIP+CRDT · TRUST ED25519 · K-OF-N · BUILD 2026-04-21T19:50:49Z · SHA 425b43a-dirty

Scope / 02

Your OS stack stays.
We run the iron underneath.

FortrOS runs at the hypervisor layer — same architectural tier as Proxmox, ESXi, or AWS EC2. Your existing OS deployments (Windows, Ubuntu, your managed Linux) run as guests and keep their own update cadences, EDR agents, and compliance controls.

That layer split is also the regulatory posture. Consumer-OS duties — age assurance, on-device verification, platform-content obligations — live with the guest OS vendor, where they always have. You pick whichever guest your compliance team already clears; FortrOS is infrastructure, not a consumer OS, so those duties never attach to it.

STACK / 02.01
CLASSIFICATION · INFRASTRUCTURE
LAYER

Hypervisor tier

Same procurement bucket as Proxmox, ESXi, EC2. Bring-your-own-hardware compute fabric, nothing above the VM boundary.

GUEST OS

Your existing stack

Windows, Ubuntu, RHEL, managed distros — whatever your users already run, with their existing tooling and agents intact.

CONTENT DUTIES

Stay with the guest

Age-verification, platform-duty frameworks, and consumer-OS content regs apply where they always have — at the guest OS vendor, not the infrastructure.


Stack collapse / 03

Five procurement lines.
One operating system.

The typical stack for zero-trust, self-healing, multi-site VM orchestration involves five to seven separate products from as many vendors — each with its own auth, billing, SLA, and compliance posture. FortrOS collapses them into one Rust workspace with no external dependencies in the hot path.

Capability | FortrOS | Proxmox VE | vSphere / Nutanix | Tailscale + Vault + Jamf
Zero-trust per connection | Native (conn_auth) | No | Bolt-on (NSX) | Tailscale layer
Split-tolerant operations | Gossip+CRDT, no quorum | Corosync quorum halts | vCenter single point | Not in scope
Immutable base, rebuild-not-patch | image.sig, rolling LUKS rotate | Debian mutable | ESXi mutable | OS out of scope
Zero-knowledge encrypted storage | Erasure-coded slivers | ZFS w/ host access | vSAN + add-on | N/A
Hardware-rooted identity (TPM + YK/CAC) | TPM NV + HKDF | No | Enterprise add-on | At the edge only
K-of-N multisig for destructive ops | Ed25519 + topology spread | No | No | Vault policy only
Self-hosted control plane | The org is the control plane | Proxmox VE | vCenter (required) | Self-host Headscale
External dependencies in hot path | None | None | Broadcom licensing | 4-5 SaaS vendors
Remote wipe via org-signed envelopes | Shipped | No | Jamf at guest layer | Jamf layer

Compliance alignment / 04

Built for regulated environments.

FortrOS's primitives map cleanly onto the controls compliance reviewers actually ask about. Hardware-rooted identity at the boot layer. Dual-control at the destructive-operation boundary. An immutable audit chain rooted in the same key material that authorises reads. Air-gap capable by construction, because the control plane is the org.

04.01
Hardware-rooted auth

TPM NV + YubiKey / CAC / PIV

Permanent hardware identity unlocks the LUKS keyslot before any network touches the box. Smart-card auth for FIPS-shape environments; YubiKey for the rest.

Read trust chapter →
04.02
Partition-resistant ops

K-of-N node verification, topology-spread

Destructive operations are verified by K of N org nodes, with signatures required from K distinct branches of the topology tree. A compromised node and its partitioned provisioning children can't muster enough cross-branch signatures to push changes through.

Read trust chapter →
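The cross-branch counting logic described above can be sketched in a few lines. This is an illustrative model, not the FortrOS API: the `Signature` struct, its `branch_id` field, and `meets_topology_spread` are hypothetical names, and real verification would first check each Ed25519 signature cryptographically.

```rust
use std::collections::HashSet;

/// A verified signature from an org node, tagged with the top-level
/// branch of the topology tree that node belongs to (illustrative).
struct Signature {
    node_id: u64,
    branch_id: u32,
}

/// K-of-N with topology spread: require at least `k` signatures AND
/// at least `k` distinct branches among the signers, so a compromised
/// node plus the children it provisioned (all in one branch) cannot
/// satisfy the policy on its own.
fn meets_topology_spread(sigs: &[Signature], k: usize) -> bool {
    if sigs.len() < k {
        return false;
    }
    let branches: HashSet<u32> = sigs.iter().map(|s| s.branch_id).collect();
    branches.len() >= k
}

fn main() {
    // Three signatures, but only two distinct branches: rejected for k = 3.
    let same_branch = [
        Signature { node_id: 1, branch_id: 0 },
        Signature { node_id: 2, branch_id: 0 },
        Signature { node_id: 3, branch_id: 1 },
    ];
    assert!(!meets_topology_spread(&same_branch, 3));

    // Three signatures across three branches: accepted.
    let spread = [
        Signature { node_id: 1, branch_id: 0 },
        Signature { node_id: 4, branch_id: 1 },
        Signature { node_id: 7, branch_id: 2 },
    ];
    assert!(meets_topology_spread(&spread, 3));
}
```

The branch requirement is what makes the policy partition-aware: raw signature count alone would be satisfiable by one subtree.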
04.03
Air-gap + field deploy

No external dependency

No phone-home, no SaaS in the hot path. The org is the control plane. Transport profiles for hostile networks: LAN, CDN-fronted, direct-origin, Tor.

Read transport profile →
04.04
Permanent audit chain

Provisioning chain, recursive revoke

Every enrollment records which admin via which invite. Compromised node revocation walks the chain: any nodes that node provisioned inherit the revocation.

Read trust chapter →
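The chain walk above amounts to a tree traversal: revoking a node also revokes everything it provisioned, transitively. A minimal sketch, assuming the provisioning chain is available as a parent-to-children map (`revoke_recursive` and the map shape are illustrative, not FortrOS types):

```rust
use std::collections::HashMap;

/// Walk the provisioning chain from `root`: every node that `root`
/// provisioned — and everything those nodes provisioned in turn —
/// inherits the revocation (iterative DFS, sketch only).
fn revoke_recursive(children: &HashMap<String, Vec<String>>, root: &str) -> Vec<String> {
    let mut revoked = vec![root.to_string()];
    let mut stack = vec![root.to_string()];
    while let Some(node) = stack.pop() {
        if let Some(kids) = children.get(&node) {
            for kid in kids {
                revoked.push(kid.clone());
                stack.push(kid.clone());
            }
        }
    }
    revoked
}

fn main() {
    // node-1 provisioned node-2 and node-3; node-2 provisioned node-4.
    let mut chain: HashMap<String, Vec<String>> = HashMap::new();
    chain.insert("node-1".into(), vec!["node-2".into(), "node-3".into()]);
    chain.insert("node-2".into(), vec!["node-4".into()]);

    // Revoking node-1 takes its whole provisioning subtree with it.
    let revoked = revoke_recursive(&chain, "node-1");
    assert_eq!(revoked.len(), 4);
}
```

Because every enrollment recorded its provisioning parent, the revocation needs no global scan — only the subtree rooted at the compromised node.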
04.05
Split-tolerant

Gossip + CRDTs, no quorum

A WAN cut between facilities doesn't halt either side. Both partitions keep operating; state merges on reconnect. No consensus-halt failure mode to write into the runbook.

Read CRDTs concept →
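The merge-on-reconnect behaviour is the defining property of a CRDT. A minimal sketch using a grow-only counter — one of the simplest CRDTs, shown here purely to illustrate the merge semantics, not as FortrOS's actual state types:

```rust
use std::collections::HashMap;

/// Grow-only counter CRDT: each node increments only its own slot,
/// and merge takes the per-node maximum. Merge is commutative,
/// associative, and idempotent, so partitioned replicas converge
/// on reconnect without any quorum.
#[derive(Clone, Debug, PartialEq)]
struct GCounter {
    counts: HashMap<String, u64>,
}

impl GCounter {
    fn new() -> Self {
        GCounter { counts: HashMap::new() }
    }
    fn incr(&mut self, node: &str) {
        *self.counts.entry(node.to_string()).or_insert(0) += 1;
    }
    fn value(&self) -> u64 {
        self.counts.values().sum()
    }
    /// Per-node max: safe to apply in any order, any number of times.
    fn merge(&mut self, other: &GCounter) {
        for (node, &n) in &other.counts {
            let e = self.counts.entry(node.clone()).or_insert(0);
            if n > *e {
                *e = n;
            }
        }
    }
}

fn main() {
    // WAN cut: site A and site B keep operating independently.
    let mut a = GCounter::new();
    let mut b = GCounter::new();
    a.incr("site-a");
    a.incr("site-a");
    b.incr("site-b");

    // Reconnect: merging in either order yields the same state.
    let mut merged_ab = a.clone();
    merged_ab.merge(&b);
    let mut merged_ba = b.clone();
    merged_ba.merge(&a);
    assert_eq!(merged_ab, merged_ba);
    assert_eq!(merged_ab.value(), 3);
}
```

The trade-off is that CRDTs constrain state to types with a well-defined merge; in exchange, there is no quorum to lose and no consensus-halt failure mode.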
04.06
Auditable trust surface

Rust, a few thousand lines

No systemd, no containerd, no Python in the hot path. Each critical primitive is a few hundred lines of Rust in one workspace. A CISO audit is a tractable ask, not a multi-week OSS-pedigree review.

Read more →
Full guide — 60+ pages of design rationale
Interface preview / 05

Three surfaces.
One design language.

From PXE first boot to admin reach-in, the operator sees a single, consistent visual system: IBM Plex set against deep navy, brass accents on active state, granite surfaces, corner-bracketed panels with catalog identifiers. Built like a piece of equipment, not a web app.