Bill Overview
Title: Stopping Unlawful Negative Machine Impacts through National Evaluation Act
Description: This bill makes an entity that uses artificial intelligence (AI) to make or inform decisions liable for violations of civil rights laws caused by those decisions in the same manner and to the same extent as if the entity had made the decisions without using AI. Additionally, the bill establishes a temporary program within the National Institute of Standards and Technology (NIST) to evaluate AI systems for bias and discrimination on the basis of race, sex, age, and other protected characteristics and to assist in mitigating those effects. The program terminates on December 31, 2028.
Sponsors: Sen. Portman, Rob [R-OH]
Target Audience
Population: Individuals potentially subject to AI-driven decisions or biases
Estimated Size: 300,000,000
- The bill targets any entity that uses AI to make or assist in decisions and holds it accountable for civil rights violations.
- Individuals impacted are those subject to decisions made by AI systems, particularly in areas prone to discrimination on the basis of race, sex, age, or other protected characteristics.
- The bill covers all areas and sectors where AI is used in decision-making, including hiring, banking, housing, law enforcement, and more.
- Because AI-driven decision-making is increasingly widespread, the potential impact is broad and not confined to any particular region or sector.
- The NIST program will evaluate AI systems across these sectors and characteristics, potentially affecting AI developers, entities that deploy AI, and individuals subject to AI decisions.
Reasoning
- The population affected by AI-driven decisions is vast and diverse, as AI is used in multiple decision-making domains such as hiring, finance, and law enforcement.
- Given the program's budget limits, evaluation efforts will likely focus on entities with the largest user bases or those previously flagged for discrimination issues.
- The impact of the policy will vary: some individuals may experience significant improvements in treatment and opportunities, while others may see little change depending on how AI is used in their sector.
- Implementation of such policies takes time, with immediate impacts on AI developers and decision-making entities but more gradual, longer-term impacts on individual wellbeing.
- The 10-year budget suggests that extensive systemic changes or adaptations of AI systems to mitigate bias will be a long-term effort.
Simulated Interviews
Software Engineer (San Francisco, CA)
Age: 45 | Gender: male
Wellbeing Before Policy: 7
Duration of Impact: 5.0 years
Commonness: 3/20
Statement of Opinion:
- I think this policy is crucial in ensuring fair use of AI.
- Our company needs to closely assess our AI systems to prevent any potential biases.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 7 | 7 |
| Year 2 | 8 | 7 |
| Year 3 | 8 | 7 |
| Year 5 | 8 | 7 |
| Year 10 | 9 | 7 |
| Year 20 | 9 | 7 |
Financial Analyst (New York, NY)
Age: 28 | Gender: female
Wellbeing Before Policy: 6
Duration of Impact: 10.0 years
Commonness: 3/20
Statement of Opinion:
- The policy could ensure fairer lending practices without discrimination.
- I hope this leads to better transparency in how AI decisions impact loan approvals.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 7 | 6 |
| Year 2 | 8 | 6 |
| Year 3 | 8 | 6 |
| Year 5 | 9 | 6 |
| Year 10 | 9 | 6 |
| Year 20 | 9 | 6 |
Unemployed (Austin, TX)
Age: 39 | Gender: other
Wellbeing Before Policy: 4
Duration of Impact: 10.0 years
Commonness: 5/20
Statement of Opinion:
- If this law works, maybe I'll have a fairer chance at job applications.
- The bias evaluation program sounds promising to address unseen discrimination.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 5 | 4 |
| Year 2 | 7 | 4 |
| Year 3 | 8 | 4 |
| Year 5 | 8 | 4 |
| Year 10 | 8 | 4 |
| Year 20 | 9 | 4 |
Police Officer (Phoenix, AZ)
Age: 32 | Gender: male
Wellbeing Before Policy: 7
Duration of Impact: 5.0 years
Commonness: 2/20
Statement of Opinion:
- AI helps us work more efficiently, but I'm worried about potential biases.
- A fair evaluation would help our processes become more transparent and trusted.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 7 | 7 |
| Year 2 | 7 | 7 |
| Year 3 | 8 | 7 |
| Year 5 | 8 | 7 |
| Year 10 | 8 | 7 |
| Year 20 | 9 | 7 |
HR Manager (Chicago, IL)
Age: 54 | Gender: female
Wellbeing Before Policy: 6
Duration of Impact: 8.0 years
Commonness: 4/20
Statement of Opinion:
- We need to ensure these tools do not perpetuate biases.
- The policy is a necessary step to make these tools more reliable and fair.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 7 | 6 |
| Year 2 | 8 | 6 |
| Year 3 | 8 | 6 |
| Year 5 | 8 | 6 |
| Year 10 | 9 | 6 |
| Year 20 | 9 | 6 |
Small Business Owner (Miami, FL)
Age: 40 | Gender: female
Wellbeing Before Policy: 8
Duration of Impact: 3.0 years
Commonness: 6/20
Statement of Opinion:
- I don't think this policy directly affects my AI use, but I'm glad accountability is improving.
- It's reassuring to know that the broader implementation of AI is considered.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 8 | 8 |
| Year 2 | 8 | 8 |
| Year 3 | 8 | 8 |
| Year 5 | 9 | 8 |
| Year 10 | 9 | 8 |
| Year 20 | 9 | 8 |
Graduate Student (Boston, MA)
Age: 25 | Gender: female
Wellbeing Before Policy: 6
Duration of Impact: 10.0 years
Commonness: 2/20
Statement of Opinion:
- This is a critical policy as it touches on core ethical concerns of AI use.
- I expect more academic discussion to stem from this implementation.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 7 | 6 |
| Year 2 | 8 | 6 |
| Year 3 | 8 | 6 |
| Year 5 | 9 | 6 |
| Year 10 | 9 | 6 |
| Year 20 | 9 | 6 |
Retired (Detroit, MI)
Age: 60 | Gender: male
Wellbeing Before Policy: 5
Duration of Impact: 8.0 years
Commonness: 5/20
Statement of Opinion:
- For personal peace of mind, the importance of this policy cannot be overstated.
- Ensuring fairness in AI will benefit all, especially retirees.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 6 | 5 |
| Year 2 | 7 | 5 |
| Year 3 | 7 | 5 |
| Year 5 | 7 | 5 |
| Year 10 | 7 | 5 |
| Year 20 | 7 | 5 |
AI Researcher (Seattle, WA)
Age: 37 | Gender: female
Wellbeing Before Policy: 7
Duration of Impact: 10.0 years
Commonness: 1/20
Statement of Opinion:
- The policy supports and validates my area of expertise.
- This will hopefully encourage more funding and attention to bias-related research.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 8 | 7 |
| Year 2 | 8 | 7 |
| Year 3 | 8 | 7 |
| Year 5 | 9 | 7 |
| Year 10 | 9 | 7 |
| Year 20 | 9 | 7 |
Freelance Writer (Los Angeles, CA)
Age: 30 | Gender: male
Wellbeing Before Policy: 7
Duration of Impact: 5.0 years
Commonness: 4/20
Statement of Opinion:
- This is exactly the direction I've been advocating for—addressing AI bias at a systemic level.
- I'm hopeful this will lead to more nuanced understanding of AI's role in society.
Wellbeing Over Time (With vs Without Policy)
| Year | With Policy | Without Policy |
|---|---|---|
| Year 1 | 8 | 7 |
| Year 2 | 8 | 7 |
| Year 3 | 8 | 7 |
| Year 5 | 9 | 7 |
| Year 10 | 9 | 7 |
| Year 20 | 9 | 7 |
Cost Estimates
Year 1: $500,000,000 (Low: $450,000,000, High: $550,000,000)
Year 2: $500,000,000 (Low: $450,000,000, High: $550,000,000)
Year 3: $500,000,000 (Low: $450,000,000, High: $550,000,000)
Year 5: $500,000,000 (Low: $450,000,000, High: $550,000,000)
Year 10: $0 (Low: $0, High: $0)
Year 100: $0 (Low: $0, High: $0)
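For scale, a rough back-of-the-envelope figure (not stated in the bill) can be derived from the estimates above: $500,000,000 per year ÷ 300,000,000 potentially affected individuals ≈ $1.67 per affected person per year during the program's active years.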
Key Considerations
- The widespread adoption of AI across industries necessitates robust evaluation systems to prevent civil rights violations.
- The temporary nature of the NIST program means that sustainable mechanisms may need to be developed beyond 2028 for continuous AI assessment.
- The potential increase in compliance costs for businesses using AI systems may be offset by improved system trustworthiness and efficiency.