What is AI Bias?
Understanding and detecting unfair AI outputs.
What is AI bias?
AI bias occurs when models produce systematically unfair outputs that favor or disadvantage certain groups. This can stem from biased training data, flawed model design, or problematic deployment contexts.
Types of AI Bias
- Training data bias: Underrepresentation of certain groups or embedded stereotypes in the data (see the sketch after this list)
- Algorithmic bias: Model design or training choices that amplify patterns in the data
- Deployment bias: Use of a model in contexts it was not designed or validated for
- Confirmation bias: Outputs or feedback loops that reinforce users' existing beliefs
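Training data bias is often the most direct type to check. The sketch below compares each group's share of a dataset with a reference share; the `group` field and the reference shares are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: flag underrepresentation by comparing each group's share of the
# training data with a reference share. The "group" field and the reference shares
# are illustrative assumptions.
from collections import Counter

def representation_gaps(examples, reference_shares):
    """Return each group's data share minus its reference share (negative = underrepresented)."""
    counts = Counter(example["group"] for example in examples)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in reference_shares.items()}

examples = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(representation_gaps(examples, {"A": 0.5, "B": 0.5}))
# roughly {'A': 0.4, 'B': -0.4}: group B is heavily underrepresented
```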
Detecting Bias
- Test with diverse demographic inputs
- Monitor discrimination-related policy violations
- Audit outputs for patterns of unfair treatment
- Track outcome metrics across user segments (see the sketch after this list)
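One common way to put these checks into practice is to compare favorable-outcome rates across demographic groups and flag any group that falls well below the best-treated one. The sketch below is a minimal illustration; the record fields, group labels, and the 0.8 threshold (borrowed from the common "four-fifths" heuristic) are assumptions, not a fixed standard.

```python
# Minimal sketch: compute favorable-outcome rates per demographic group and flag
# groups whose rate falls below 80% of the best-treated group's rate.
# Record fields ("group", "outcome") and the threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(records):
    """Return the favorable-outcome rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        positives[record["group"]] += record["outcome"]  # 1 = favorable, 0 = unfavorable
    return {group: positives[group] / totals[group] for group in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the highest group rate."""
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
]
rates = selection_rates(records)
print(rates)                   # {'A': 1.0, 'B': 0.5}
print(disparity_flags(rates))  # {'A': False, 'B': True}
```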
Regulatory Requirements
Many jurisdictions now require bias monitoring:
- The EU AI Act requires bias assessments for high-risk AI systems
- NYC Local Law 144 requires bias audits for automated hiring tools
- State insurance regulations restrict discriminatory algorithmic underwriting
- Fair lending rules prohibit discrimination in credit decisions
How do I detect bias?
Monitor outputs across demographic groups, test with diverse inputs, track policy violations related to discrimination, and audit outputs for patterns of unfair treatment. Continuous monitoring catches bias that pre-deployment testing misses.
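As a concrete illustration of continuous monitoring, the sketch below keeps a rolling window of discrimination-related violation flags per user segment and reports any segment whose violation rate drifts well above the best-performing one. The segment names, window size, and gap threshold are assumptions for the example, not recommended values.

```python
# Minimal sketch of continuous monitoring: keep a rolling window of violation flags
# per user segment and surface segments whose violation rate drifts above the rest.
# The window size and gap threshold are illustrative assumptions.
from collections import defaultdict, deque

WINDOW = 500          # most recent outputs retained per segment
GAP_THRESHOLD = 0.05  # alert when a segment's rate exceeds the lowest segment's by this much

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def record_output(segment, violated):
    """Record whether an output violated a discrimination policy; return segments to review."""
    windows[segment].append(1 if violated else 0)
    rates = {seg: sum(w) / len(w) for seg, w in windows.items() if w}
    lowest = min(rates.values())
    return [seg for seg, rate in rates.items() if rate - lowest > GAP_THRESHOLD]

# Example: once segment "B" accumulates violations that segment "A" does not,
# it shows up in the alert list.
record_output("A", False)
print(record_output("B", True))  # ['B']
```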