There is news on the American Privacy Rights Act (APRA) and algorithmic bias. As discussed in a detailed article for the Volokh Conspiracy two weeks ago, the privacy bill originally would have imposed race and gender quotas on AI algorithms. After public backlash, a new discussion draft was released that drops much of the quota language. Even so, the bill still contains provisions on algorithmic discrimination that could lead to quotas being built into algorithms without clear justification.
While the new version of APRA removes explicit quotas, it still requires algorithms to avoid harming protected groups, a mandate that could indirectly lead to quotas as the easiest way of demonstrating group fairness. That shift toward proportional outcomes may compromise algorithm accuracy across many decision-making processes, from healthcare to ride-sharing services. The bill’s ambiguous language could also give designers an incentive to build racial or gender factors into algorithms without disclosing them to the people whose lives those algorithms affect.
Experts in algorithmic bias have recommended imposing “fairness constraints” on algorithms to force proportional outcomes, even at the cost of accuracy. Because an algorithm simply produces results according to the criteria its trainers set, such an intervention can easily go unnoticed by the people who rely on it. In critical areas like healthcare, the consequences of those constraints could be harmful.
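To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (not taken from the bill, any cited study, or any vendor’s system). It adds a demographic-parity penalty, one common form of fairness constraint, to an ordinary logistic-regression training loop; the fairness_weight knob trades accuracy for more proportional outcomes, and nothing in the trained model’s predictions reveals that the knob was turned.

```python
# Illustrative sketch only: a demographic-parity penalty folded into routine training.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature x predicts the outcome; `group` is a protected attribute.
n = 2000
group = rng.integers(0, 2, n)                          # group 0 or 1
x = rng.normal(loc=group * 0.8, scale=1.0, size=n)     # groups differ in x on average
y = (x + rng.normal(scale=0.5, size=n) > 0.7).astype(float)
X = np.column_stack([x, np.ones(n)])                   # feature plus intercept

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(fairness_weight, lr=0.1, steps=2000):
    """Logistic regression by gradient descent, with an optional parity penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n                       # gradient of ordinary log-loss
        # Demographic-parity penalty: squared gap between the groups' mean scores.
        gap = p[group == 1].mean() - p[group == 0].mean()
        d_p = p * (1 - p)                              # derivative of sigmoid
        d_gap = (X[group == 1] * d_p[group == 1, None]).mean(axis=0) \
              - (X[group == 0] * d_p[group == 0, None]).mean(axis=0)
        grad += fairness_weight * 2 * gap * d_gap      # push the group means together
        w -= lr * grad
    return w

for fw in (0.0, 5.0):
    w = train(fw)
    pred = sigmoid(X @ w) > 0.5
    acc = (pred == y).mean()
    rates = [pred[group == g].mean() for g in (0, 1)]
    print(f"fairness_weight={fw}: accuracy={acc:.3f}, "
          f"positive rate group0={rates[0]:.2f}, group1={rates[1]:.2f}")
```

Running the sketch with fairness_weight set to zero and then to a positive value shows the trade-off: the penalized model narrows the gap in positive rates between the two groups while giving up some accuracy, and a user who sees only the model’s outputs has no way to tell which version was deployed.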
Given the implications of these changes, it is crucial to reconsider APRA’s approach to algorithmic bias and avoid inadvertently introducing quotas that could compromise algorithm accuracy and fairness.
The solution is not to simply remove those provisions, but to directly address the issue of stealth quotas. APRA should be revised to establish the fundamental principle that adjustments to algorithms based on identity require specific justification. These adjustments should be a last resort, only used when there is clear evidence of discrimination distorting algorithmic outcomes and when other solutions are inadequate. They should not be employed when apparent bias can be rectified by enhancing the algorithm’s accuracy. For instance, face recognition software struggled with identifying minorities and darker skin tones in the past, but advancements in technology have largely resolved these issues through better lighting, cameras, software, and training sets. Improving algorithmic accuracy is more likely to be viewed as fair than implementing identity-based solutions.
Moreover, any inclusion of race, gender, or other protected characteristics in an algorithm’s design or training should be transparent and open. Controversial “group justice” measures should not be concealed from the public, algorithm users, or those impacted by such measures.
With these factors in mind, a rough proposal for amending APRA to prevent the widespread imposition of algorithmic quotas could look like this:
“(a) Except as provided in section (b), a covered algorithm must not be altered, trained, incentivized, rewarded, or otherwise manipulated based on race, ethnicity, national origin, religion, sex, or any other protected characteristic—
(1) to influence the algorithm’s outcomes, or
(2) to generate a specific distribution of outcomes primarily or partially based on race, ethnicity, national origin, religion, or sex.
(b) A covered algorithm may only be modified, trained, incentivized, rewarded, or manipulated as described in section (a) if:
(1) it is necessary to address proven acts of discrimination that directly impacted the data on which the algorithm relies, and
(2) the algorithm is designed in a way that allows identification and notification of any adversely affected parties when the modified algorithm is utilized.
(c) An algorithm modified in accordance with section (b) cannot be utilized to inform any decision without identifying and notifying the adversely affected parties. Any notified party may contest the algorithm’s compliance with section (b).”
It is uncertain whether such a provision would pass through a Democratic Senate and a narrowly Republican-controlled House. However, the composition of Congress could change significantly in the near future. Additionally, the regulation of artificial intelligence is not solely a federal matter.
Several left-leaning state legislatures have taken the lead in enacting laws addressing AI bias, with seven such jurisdictions identified last year by the Brennan Center. The Biden administration is also pursuing multiple anti-bias initiatives. These legal measures, combined with a widespread push for ethical codes targeting AI bias, may do as much as APRA to promote quotas.
Conservative lawmakers have been slow to respond to the growing interest in AI regulation, and their silence means that their constituents may be governed by algorithms adhering to blue-state regulatory standards. To prevent the introduction of stealth quotas, conservative legislatures must implement their own laws limiting algorithmic discrimination based on race and gender, and mandating transparency whenever algorithms are modified using such characteristics. Therefore, even if APRA is not amended or enacted, the language provided above, or a more refined version of it, could play a significant role in the national discourse on artificial intelligence.