Governance & Policy
Governance and policy covers the standards, regulatory frameworks, and institutional guidance that constrain how Physical AI systems are designed, evaluated, and deployed. It spans cross-cutting AI risk frameworks (NIST, OECD), regional regulation (EU AI Act, US executive orders), and domain-specific safety standards for industrial and collaborative robotics (ISO 10218, ISO/TS 15066).
From an engineering standpoint, governance is not a separate workstream from the technical stack — it shapes data provenance requirements, evaluation evidence, incident reporting, and the boundary between human oversight and autonomous action. Teams that treat it as a late-stage checklist routinely discover that earlier architectural choices (data pipelines, logging, model documentation) have already made compliance disproportionately expensive to retrofit.
When choosing what to track, scope by deployment region (EU AI Act vs. US frameworks), risk tier (consumer, industrial, safety-critical), and domain standards that already apply to your robot class. A small core — NIST AI RMF for risk practice, an applicable ISO robotics standard, and the regional regulation for your market — is usually enough to set the structure of an internal governance baseline.
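The scoping rule above — filter by region and risk tier, keep a small core — can be sketched as a simple in-code catalog. This is a minimal illustration, not a compliance tool: the `Framework` fields, tier names, and artifact lists are assumptions made up for the example; only the framework names come from the list below.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Framework:
    name: str
    regions: frozenset   # where it applies; "global" means region-independent
    risk_tiers: frozenset
    artifacts: tuple     # engineering evidence it tends to drive (illustrative)

# Three-entry core, per the scoping guidance; entries are illustrative.
CATALOG = [
    Framework("NIST AI RMF", frozenset({"US", "global"}),
              frozenset({"consumer", "industrial", "safety-critical"}),
              ("risk register", "model documentation")),
    Framework("EU AI Act", frozenset({"EU"}),
              frozenset({"consumer", "industrial", "safety-critical"}),
              ("conformity assessment", "logging", "human-oversight design")),
    Framework("ISO 10218 / ISO/TS 15066", frozenset({"global"}),
              frozenset({"industrial"}),
              ("risk assessment", "safeguarding design")),
]

def baseline(region: str, tier: str) -> list[str]:
    """Return the frameworks in scope for a deployment region and risk tier."""
    return [f.name for f in CATALOG
            if (region in f.regions or "global" in f.regions)
            and tier in f.risk_tiers]

# An EU industrial deployment pulls in all three core entries;
# a US consumer product narrows to the risk-practice framework alone.
print(baseline("EU", "industrial"))
print(baseline("US", "consumer"))
```

The point of the sketch is the shape, not the data: keeping the region/tier applicability next to the evidence each framework demands makes the governance baseline reviewable in the same way as any other configuration.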
The NIST AI Risk Management Framework is the most useful single artefact to read first: voluntary, technically grounded, and structured in a way that maps cleanly onto engineering processes.
- NIST AI Risk Management Framework — Voluntary framework for managing risks across the AI lifecycle, applicable to robotics.
- EU AI Act — Regulation establishing risk-tiered obligations for AI systems sold or operated in the EU.
- ISO 10218 / ISO/TS 15066 — Industrial robot and collaborative-robot safety standards underpinning workplace deployment.
- IEEE 7000 Series — Standards on ethical and value-based design for autonomous and intelligent systems.
- OECD AI Principles — Intergovernmental principles guiding trustworthy AI deployment, including embodied systems.
- UK AI Safety Institute — Government body publishing evaluations and guidance on frontier AI risks.
- White House Executive Order on AI (14110) — US federal directive on safe and trustworthy AI development relevant to robotics deployers.
- ISO/IEC 42001 — AI management-system standard for governance, controls, and continuous improvement.
- NIST AI RMF Generative AI Profile — Practical profile extending AI RMF controls to generative-model deployments.
- EU Machinery Regulation (EU 2023/1230) — Core legal framework governing safety requirements for machinery and many robotic systems in the EU.
- UNECE R155 — Cybersecurity requirements for connected and automated road vehicles.
- UNECE R156 — Software update and update-management requirements for vehicles.
- ISO 13482 — Safety standard for personal care robots operating near people.
- UL 4600 — Safety case standard for autonomous products and systems.
- ISO 26262 — Functional safety standard for electrical and software systems in road vehicles.