EthicAI Audit

Fairness Audit

Explore an example audit (demo) or audit your own model outputs using a CSV workflow.

How does this tool work? (for non-technical users)

This tool helps you check whether a classification model treats different groups (such as gender or race) differently in its predictions.
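
For readers who think in code: one common check of this kind is demographic parity, the gap in positive prediction rates between groups. The sketch below is illustrative only; the file name "audit_data.csv" and its columns are assumptions (they match the columns described in the next section), and the tool itself may report different or additional metrics.

    import pandas as pd

    # Illustrative fairness check: difference in positive prediction
    # rates between groups (demographic parity).
    df = pd.read_csv("audit_data.csv")            # assumed columns: group, y_true, y_pred
    rates = df.groupby("group")["y_pred"].mean()  # share of positive predictions per group
    print(rates)
    print("Largest gap between groups:", rates.max() - rates.min())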

What data do I need?

  • y_true: the outcome that actually happened (for example: whether someone repaid a loan, or whether income was above 50K).
  • y_pred or score: what the model predicted.
    • y_pred = predicted class (0 or 1)
    • score = probability (between 0 and 1)
  • group: the group you want to audit fairness for (for example: Male/Female, race categories, age groups); a sample file is shown after this list.
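
A minimal input file with these columns might look like this (values are made up; the id column becomes relevant in the upload steps further down):

    id,group,y_true,y_pred,score
    1,Female,1,1,0.83
    2,Male,0,1,0.61
    3,Female,0,0,0.22
    4,Male,1,1,0.78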

Where does this data usually come from?

  • y_true comes from historical outcomes in your dataset.
  • y_pred / score comes from running your model and exporting its predictions (a short export sketch follows this list).
  • group comes from demographic or categorical fields you already track.
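
As an illustration, if your model follows the scikit-learn interface, producing a predictions file could look like the sketch below. The file names, feature columns, and id column are assumptions for the example, not requirements of this tool.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Toy stand-in for "your model": fit a classifier on the historical data.
    df = pd.read_csv("main_dataset.csv")  # assumed columns: id, group, y_true, feature_1, feature_2
    X = df[["feature_1", "feature_2"]]    # whatever features your model uses
    model = LogisticRegression().fit(X, df["y_true"])

    # Export one row per record: the ID plus the model's outputs.
    preds = pd.DataFrame({
        "id": df["id"],                         # key used to merge later
        "y_pred": model.predict(X),             # predicted class (0 or 1)
        "score": model.predict_proba(X)[:, 1],  # probability of class 1
    })
    preds.to_csv("predictions.csv", index=False)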

What if I don’t have a model?

You can use the Demo Mode to see how fairness metrics behave, or upload example data using the provided sample CSV template.

🔒 Privacy note: All files are processed locally in your browser. No data is uploaded to any server.

Example Model (Demo)

Run a demo audit on built-in sample data.

Audit Your Model

Upload a main dataset and a predictions file, then merge them on a shared ID column.

Step 1 β€” Main Dataset CSV

Contains demographics (group) and actual outcomes (y_true), for example a historical loan dataset.

Step 2 β€” Predictions CSV

Contains model outputs (y_pred or score) and an ID column to match rows.
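
Conceptually, the merge behaves like a join on the ID column. A rough pandas analogy is shown below (file and column names are assumed, and whether unmatched rows are kept or dropped depends on the tool):

    import pandas as pd

    main = pd.read_csv("main_dataset.csv")   # group, y_true, id
    preds = pd.read_csv("predictions.csv")   # y_pred or score, id
    # Inner join: rows without a matching ID appear in neither output.
    merged = main.merge(preds, on="id", how="inner")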