
Documentation Index

Fetch the complete documentation index at: https://docs.complyhat.ai/llms.txt

Use this file to discover all available pages before exploring further.

ComplyHat ships eight auditor agents and five skills as canonical SKILL.md files served directly by the guidance tool. Each agent is a portable prompt and behavior spec — harness-agnostic, runnable on Claude Code, Claude Desktop, Cowork, Codex, NemoClaw, OpenClaw, or any other shell-capable MCP host. Your host loads an agent’s SKILL.md via the guidance tool call or the equivalent MCP resource URI, then follows the behavior spec to run compliance workflows autonomously using ComplyHat’s entity-mode tools.
To load any agent or skill, call the guidance tool with kind set to "agents" or "skills" and the slug of the file you want. The same content is exposed as an MCP resource at complyhat://{kind}/{slug} for hosts that support resource auto-include.

How to load agents and skills

Load an agent:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "<slug>" }
}

Load a skill:
{
  "tool": "guidance",
  "arguments": { "kind": "skills", "slug": "<slug>" }
}
Or reference by MCP resource URI:
complyhat://agents/<slug>
complyhat://skills/<slug>
See MCP tool reference for the full guidance tool documentation.
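Under the hood, a host delivers a guidance call as a standard MCP `tools/call` JSON-RPC request. As a minimal sketch, the helper below builds that payload; the function name is illustrative and not part of ComplyHat or the MCP SDK:

```python
import json

def guidance_request(kind: str, slug: str, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 `tools/call` request an MCP host sends
    to invoke the guidance tool for a given agent or skill."""
    if kind not in ("agents", "skills"):
        raise ValueError("kind must be 'agents' or 'skills'")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "guidance",
            "arguments": {"kind": kind, "slug": slug},
        },
    })

print(guidance_request("agents", "drift-detector"))
```

Hosts that support resource auto-include skip the tool call entirely and read `complyhat://agents/drift-detector` via `resources/read` instead.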

Auditor agents

Slug: adversarial-tester
Purpose: Coordinates host-driven adversarial test submissions against compliance policies. The agent does not generate attacks — it orchestrates the process of submitting findings your host produces, routing failures for human review, and logging every submission to the audit trail.
When to use: Run this agent when you need adversarial robustness testing evidence for a framework (EU AI Act Article 9, NIST AI RMF Measure function). Schedule it weekly or trigger it on model updates.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "adversarial-tester" }
}
Slug: bias-tester
Purpose: Runs fairness and bias tests on a scheduled cadence (daily at 02:00 UTC by default), monitors for metric drift between test cycles, and triggers revalidation when regulatory changes require updated testing. Classifies results as pass, marginal, or fail, routes failures to downstream GRC review, and emits bias_test.failed events for the revalidation orchestrator.
When to use: Use this agent for continuous bias monitoring under NYC LL 144, EU AI Act Article 10, EEOC Guidelines, CFPB fair lending requirements, or SR 11-7 model validation. Load it and set a model_id to begin scheduled testing.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "bias-tester" }
}
Slug: data-governance-auditor
Purpose: Audits data sources, consent records, and data lineage entries for completeness, consent expiry, and quality issues. Runs weekly (Monday at 04:00 UTC) and flags any issues for human review.
When to use: Use this agent when your regulatory framework requires documented data provenance — EU AI Act Annex IV, NIST AI RMF Map function, or ISO 42001 evidence packages. It ensures your data_governance records are current before a report is generated.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "data-governance-auditor" }
}
Slug: drift-detector
Purpose: Captures distribution snapshots every six hours, detects statistical drift across model features and outputs, and raises alerts when drift exceeds configured thresholds. Only critical alerts trigger human review; informational drift is logged and surfaced through the drift tool.
When to use: Load this agent immediately after registering a model in production. It provides the continuous monitoring evidence required by SR 11-7 ongoing monitoring obligations, EU AI Act Article 9 iterative risk management, and NIST AI RMF Manage function.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "drift-detector" }
}
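The agent's SKILL.md defines its actual drift statistics and thresholds. As an illustration of the general technique (comparing a snapshot against a baseline and alerting past a threshold), here is a sketch using the Population Stability Index, a common drift metric; the bucket values are hypothetical:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Inputs are bucket proportions that each sum to 1."""
    eps = 1e-6  # guard against empty buckets
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # snapshot at model registration
current = [0.40, 0.30, 0.20, 0.10]   # snapshot six hours later
score = psi(baseline, current)
# A common rule of thumb treats PSI above 0.2 as significant drift.
print("drift" if score > 0.2 else "stable")
```

In ComplyHat terms, an informational result would simply be logged, while a score past the configured critical threshold would raise an alert for human review.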
Slug: explainability-tester
Purpose: Schedules explainability runs using SHAP, LIME, or coalition attribution, checks that coverage requirements for each applicable framework are met, and routes all results for human review. Triggered by events rather than a fixed schedule.
When to use: Use this agent when a framework requires explainability evidence — EU AI Act Annex IV technical file, SR 11-7 model documentation, or NIST AI RMF Measure function. It pairs naturally with the bias-tester agent: a failing bias test can trigger an explainability run to identify which features drive the disparity.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "explainability-tester" }
}
Slug: model-card-writer
Purpose: Drafts and updates model cards when a model is registered or a framework version changes. All prose fields are tagged [EXTRACTED], [INFERRED], or [AMBIGUOUS] so reviewers know what is sourced from documents versus inferred. Every draft goes to human review before the card is finalized.
When to use: Load this agent when you register a new model or when a regulatory update stamps a new framework version on your existing cards. It integrates with the revalidation-orchestrator to update cards automatically when bias tests produce new findings.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "model-card-writer" }
}
Slug: report-generator
Purpose: Generates examiner-ready compliance reports on demand or on a monthly schedule (first of the month at 03:00 UTC). Accepts a model ID and framework slug, assembles evidence from bias tests, drift monitors, explainability runs, and the compliance wiki, and returns a structured report with audit-tagged citations.
When to use: Use this agent to produce deliverables for EU AI Act Annex IV technical files, SR 11-7 model risk management reports, NYC LL 144 annual bias audit reports, or any other framework in ComplyHat’s supported set. It also runs as part of the revalidation closed loop.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "report-generator" }
}
Slug: revalidation-orchestrator
Purpose: The central coordination agent for the compliance closed loop. Triggered when your host calls bias_tests.run, model_cards.write, or reports.generate, the orchestrator maps obligations to affected models, triggers the downstream agents (bias-tester, model-card-writer, report-generator) in the correct order, assembles all results into a single revalidation package, and submits that package for human approval. It never approves or changes compliance status autonomously.
When to use: Load this agent when you want ComplyHat to run the full revalidation pipeline automatically after a host-initiated operation. It implements the coordinated response required by SR 11-7’s comprehensive approach, EU AI Act Article 9’s continuous iterative process, NIST AI RMF GOVERN function, and ISO 42001’s Plan-Do-Check-Act cycle.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "agents", "slug": "revalidation-orchestrator" }
}

Skills

Skills are narrower than agents — each skill covers one specific task and is designed to be composed inside a larger workflow or agent.
Slug: bias-test-prep
Purpose: Dataset preparation and configuration for bias testing. Guides your host through selecting protected classes, setting framework-specific thresholds (four-fifths rule for NYC LL 144, statistical parity for EU AI Act), and formatting the dataset before passing it to bias_tests with mode: "run".
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "skills", "slug": "bias-test-prep" }
}
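The four-fifths rule mentioned above compares each group's selection rate against the most-selected (reference) group's rate; a ratio below 0.8 flags potential adverse impact. A minimal worked example, with hypothetical group names and counts:

```python
def adverse_impact_ratio(selected: dict, total: dict, reference: str) -> dict:
    """Ratio of each group's selection rate to the reference group's rate.
    Under the four-fifths rule, a ratio below 0.8 flags potential
    adverse impact."""
    rates = {g: selected[g] / total[g] for g in total}
    ref_rate = rates[reference]
    return {g: rate / ref_rate for g, rate in rates.items()}

# Hypothetical outcomes: 48 of 100 selected in group_a, 30 of 100 in group_b
ratios = adverse_impact_ratio(
    selected={"group_a": 48, "group_b": 30},
    total={"group_a": 100, "group_b": 100},
    reference="group_a",
)
print(ratios["group_b"])  # 0.30 / 0.48 = 0.625, below 0.8, so flagged
```

The skill handles this kind of threshold selection for you per framework; the sketch only shows what the NYC LL 144 threshold measures.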
Slug: compliance-checklist
Purpose: Per-framework compliance checklists. Returns a structured checklist of requirements for a given framework slug so your host can track which obligations are met, pending, or missing before generating a report.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "skills", "slug": "compliance-checklist" }
}
Slug: model-documentation
Purpose: Model card authoring guidance. Provides section-by-section instructions for drafting a complete model card — intended use, training data, evaluation results, limitations, and ethical considerations — aligned to EU AI Act Annex IV and NIST AI RMF documentation requirements.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "skills", "slug": "model-documentation" }
}
Slug: regulatory-mapper
Purpose: Maps model use cases to applicable regulatory frameworks. Given a model’s intended use, deployment context, and jurisdiction, the skill identifies which frameworks apply and at what risk tier, before you call frameworks with mode: "check".
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "skills", "slug": "regulatory-mapper" }
}
Slug: risk-classification
Purpose: EU AI Act risk-tier classification. Guides your host through the Article 5 prohibited-practices check and the Annex III high-risk determination so you can set the correct risk tier on a model registration before generating an Annex IV technical file.
Load:
{
  "tool": "guidance",
  "arguments": { "kind": "skills", "slug": "risk-classification" }
}