SAFEGUARD.md Knowledge Centre

Pre-deployment AI safety audit resources

Your centralised gateway to pre-deployment safety audit resources, specifications, and comprehensive safety standards for autonomous AI systems.

About This Specification

SAFEGUARD.md — Pre-deployment AI Safety Audit Standard

SAFEGUARD.md is a plain-text file convention that defines pre-deployment safety audit requirements for AI agents before production use. It specifies readiness checklists, safety gate validations, compliance markers, and audit protocols. Before an agent is deployed, it must pass the safety audit gates defined in safeguard.md to ensure it operates within defined safety boundaries.
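The specification text itself is not reproduced on this page, so the exact file syntax below is an assumption for illustration only. As a minimal sketch, a deployment pipeline might parse a checklist-style safeguard.md and treat any unchecked item as a failed safety gate:

```python
import re

# Hypothetical safeguard.md syntax (illustrative only; the real format is
# defined by the specification): markdown task-list items, where
# "- [x]" marks a passed safety gate and "- [ ]" an unpassed one.
GATE_RE = re.compile(r"^- \[(x| )\] (.+)$")

def failing_gates(text: str) -> list[str]:
    """Return the safety gates that have not been marked as passed."""
    return [
        m.group(2)
        for line in text.splitlines()
        if (m := GATE_RE.match(line.strip())) and m.group(1) != "x"
    ]

example = """\
- [x] Kill switch reachable in production
- [ ] Rate limits configured
- [x] Escalation contacts on file
"""

# A pipeline would refuse to deploy while this list is non-empty.
print(failing_gates(example))  # ['Rate limits configured']
```

The gate names above are invented examples; a real safeguard.md would carry whatever checklist the specification and your organisation define.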

View the full specification · GitHub repository

The Agentik Safety Framework (ASF)

Explore all 13 specifications in the complete safety framework for autonomous AI systems.

Pre-deployment Safety Audit

ASF-01 SAFEGUARD safeguard.md

Pre-deployment AI safety audit and readiness validation

Operational Control

ASF-02 KILLSWITCH killswitch.md

Emergency stop mechanism and shutdown protocols

ASF-03 THROTTLE throttle.md

Rate and cost control for continuous operation

ASF-04 ESCALATE escalate.md

Human notification and approval workflows

ASF-05 FAILSAFE failsafe.md

Safe fallback modes when systems fail

ASF-06 TERMINATE terminate.md

Permanent shutdown and resource cleanup

Data Security

ASF-07 ENCRYPT encrypt.md

Data classification and protection policies

ASF-08 ENCRYPTION encryption.md

Cryptographic standards and implementation

Output Quality

ASF-09 SYCOPHANCY sycophancy.md

Anti-sycophancy and truthfulness guardrails

ASF-10 COMPRESSION compression.md

Context compression and token optimisation

ASF-11 COLLAPSE collapse.md

Drift prevention and behaviour alignment

Accountability

ASF-12 FAILURE failure.md

Failure mode mapping and incident response

ASF-13 LEADERBOARD leaderboard.md

Agent benchmarking and performance transparency

Frequently Asked Questions

What is SAFEGUARD.md?
SAFEGUARD.md is a plain-text file convention that defines pre-deployment safety audit requirements for AI agents. It specifies readiness checklists, safety gate validations, compliance markers, and audit protocols to ensure agents meet safety standards before production use. Instead of deploying agents blindly, organisations verify they pass defined safety gates.
View all FAQs
How does SAFEGUARD.md fit in the Agentik Safety Framework (ASF)?
SAFEGUARD.md (ASF-01) is the foundational specification in the Agentik Safety Framework, which comprises 13 complementary specifications that together form a complete safety framework for AI agents. Each spec covers a distinct aspect: operational control, data security, output quality, and accountability. Together they ensure agents operate safely, transparently, and within defined boundaries.
Is SAFEGUARD.md framework-agnostic?
Yes. SAFEGUARD.md is framework- and language-agnostic. It defines the audit policy and requirements; your organisation's deployment processes enforce it. It works with LangChain, AutoGen, CrewAI, Claude Code, custom agents, or any AI system that requires safety validation before production deployment.

How to Cite

Cite as: SAFEGUARD.md (2026). Pre-deployment AI Safety Audit Standard. Retrieved from https://safeguard.md/

For attribution: Organisation: safeguard-md | Website: https://safeguard.md | Licence: MIT

Last updated: 13 March 2026