ComplyHat’s bias engine runs four fairness metrics deterministically against tabular data you supply. Each test returns a
pass or fail ruling against a configurable threshold, per protected class, with data-quality assessments that tell you whether the result is statistically meaningful. When any test fails, the model’s compliance_status is automatically updated to non_compliant; on a clean run it moves to needs_review.
The four test types
All four tests operate on a tabular dataset with an outcome column and one or more protected-class columns. For the statistical details and academic sources behind each metric, see methodology.

| Test type | What it measures | Threshold | Ground truth required? |
|---|---|---|---|
| disparate_impact | Favorable rate ratio between subgroups (Four-Fifths Rule) | Fail if any ratio < 0.80 | No |
| statistical_parity | Absolute difference in favorable rates across subgroups | Fail if difference > 0.10 | No |
| equal_opportunity | True positive rate ratio across subgroups | Fail if min/max TPR < 0.80 | Yes |
| predictive_parity | Positive predictive value difference across subgroups | Fail if max − min PPV > 0.10 | Yes |
For example, under the Four-Fifths Rule, if one subgroup’s favorable rate is 45% and another’s is 60%, the ratio 0.45 / 0.60 = 0.75 falls below 0.80 and disparate_impact fails.

If you include equal_opportunity or predictive_parity in test_types, you must also supply ground_truth_column. The engine returns a 422 error if you omit it.

Run a bias test
Call bias_tests with mode: "run". Supply the model_id, your dataset inline in data.rows, the column names, and the test types you want. The data object requires source: "inline".
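A minimal run request might look like the sketch below. The model_id, rows, and column values are placeholders, and the outcome_column and protected_class_columns field names are assumptions rather than confirmed schema; see the tool reference for the authoritative shape.

```json
{
  "mode": "run",
  "model_id": "mdl_9f2c",
  "test_types": ["disparate_impact", "statistical_parity"],
  "outcome_column": "approved",
  "protected_class_columns": ["gender", "age_band"],
  "data": {
    "source": "inline",
    "rows": [
      { "approved": 1, "gender": "F", "age_band": "40+" },
      { "approved": 0, "gender": "M", "age_band": "under_40" }
    ]
  }
}
```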
If your test_types include equal_opportunity or predictive_parity, also pass the ground_truth_column field:
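A sketch of the same request extended for the ground-truth tests, under the same field-name assumptions (repaid is a hypothetical ground-truth column):

```json
{
  "mode": "run",
  "model_id": "mdl_9f2c",
  "test_types": ["equal_opportunity", "predictive_parity"],
  "outcome_column": "approved",
  "ground_truth_column": "repaid",
  "protected_class_columns": ["gender"],
  "data": {
    "source": "inline",
    "rows": [
      { "approved": 1, "repaid": 1, "gender": "F" },
      { "approved": 0, "repaid": 1, "gender": "M" }
    ]
  }
}
```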
Each run returns a test_id, an overall_result (pass or fail), per-(test_type, protected_class) results with details, and a data_quality assessment for each protected class. Check data_quality[*].adequate and data_quality[*].warnings before treating a pass as conclusive.
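An illustrative response shape: only test_id, overall_result, and the data_quality fields named above are documented here, so the results array, its result field, and the contents of details are assumptions about how the per-test results are keyed, and every value is invented:

```json
{
  "test_id": "bt_7d41",
  "overall_result": "fail",
  "results": [
    {
      "test_type": "disparate_impact",
      "protected_class": "gender",
      "result": "fail",
      "details": { "ratio": 0.74, "threshold": 0.80 }
    }
  ],
  "data_quality": [
    { "protected_class": "gender", "adequate": true, "warnings": [] }
  ]
}
```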
List and retrieve test results
List all bias tests for a model with mode: "list":
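For example (the model_id value is a placeholder):

```json
{
  "mode": "list",
  "model_id": "mdl_9f2c"
}
```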
Retrieve a single test by its test_id with mode: "get":
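For example, using the test_id returned from a run (the value is a placeholder):

```json
{
  "mode": "get",
  "test_id": "bt_7d41"
}
```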
Schedule recurring tests
Regulators specify both the test types and the cadence they expect. You can encode both into a recurring schedule so your host agent runs tests automatically without manual intervention. Create a schedule with mode: "create_schedule". Provide the dataset_id to run against, a test_config object describing the test parameters, the cadence (monthly, quarterly, or annually), and the next_run_at timestamp for the first run.
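A sketch of a create_schedule call. The dataset_id and timestamp are placeholders, and the internal shape of test_config is an assumption modeled on the run-mode parameters above:

```json
{
  "mode": "create_schedule",
  "dataset_id": "ds_3b8a",
  "test_config": {
    "model_id": "mdl_9f2c",
    "test_types": ["disparate_impact", "statistical_parity"],
    "outcome_column": "approved",
    "protected_class_columns": ["gender", "age_band"]
  },
  "cadence": "quarterly",
  "next_run_at": "2026-01-01T00:00:00Z"
}
```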
mode: "list_schedules":
Framework-specific requirements
Different frameworks require different test types and cadences. Configure your schedules accordingly:

| Framework | Required tests | Cadence |
|---|---|---|
| sr-11-7 | disparate_impact, statistical_parity | Quarterly |
| eu-ai-act | disparate_impact, statistical_parity, equal_opportunity, predictive_parity | Quarterly |
| nyc-ll144 | disparate_impact, statistical_parity | Annual (per AEDT use case) |
| naic-model-bulletin | disparate_impact | Annual |
| cms-0057-f | disparate_impact, equal_opportunity | Quarterly |
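For instance, an eu-ai-act schedule runs all four test types quarterly, and because equal_opportunity and predictive_parity need ground truth, the test_config would also carry ground_truth_column. A sketch reusing the assumed field names from above:

```json
{
  "mode": "create_schedule",
  "dataset_id": "ds_3b8a",
  "test_config": {
    "model_id": "mdl_9f2c",
    "test_types": [
      "disparate_impact",
      "statistical_parity",
      "equal_opportunity",
      "predictive_parity"
    ],
    "outcome_column": "approved",
    "ground_truth_column": "repaid",
    "protected_class_columns": ["gender", "age_band"]
  },
  "cadence": "quarterly",
  "next_run_at": "2026-01-01T00:00:00Z"
}
```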
Next steps: Review the statistical methods behind each test in methodology, or see all bias_tests modes in the tool reference.