We are living through peak artificial intelligence excitement. If this technology proves anywhere near as powerful as its proponents promise, it has the potential to fuel inequities and injustice.
Over the past half-century, corporations and police have partnered to profit from technology that tracks and criminalizes people. They have harvested our personal information, spied on and manipulated us, and targeted people over their reproductive care, immigration status, and sexual orientation, among other rights and aspects of their lives.
Now is the time to carefully consider how AI can harm our civil rights and liberties, and act to prevent it. With the right laws, guardrails, and frameworks, we can design a new digital age that harnesses the power of AI to protect our rights, increase our freedoms, and strengthen our communities.
Policymakers must carefully assess whether, when, and how to deploy AI systems in ways that improve people’s lives instead of threatening their rights and safety. They should fully enforce existing constitutional and statutory laws in the context of AI and pass new laws where those protections fall short.
The following resources will help.
As rights come under attack in many communities and AI-powered systems loom larger, the stakes are even higher for policymakers to have the resources they need to ask and answer hard questions about AI. This report provides a modern framework for understanding and scrutinizing AI proposals.
In our extensive comment to Governor Newsom’s Executive Order, we articulated a civil rights framework for state and local agencies to center the needs and rights of people when considering AI.
We explained the importance of a robust process to make decisions about AI transparent and accountable to those who are most impacted.
We proposed mechanisms to ensure that AI systems are built carefully, and laid out the dangers to our public systems, from housing to healthcare, if AI is adopted recklessly.
We explained how AI proposals should be assessed through an evidence-based evaluation of whether the risks outweigh the benefits.
And we made clear that AI should never be trusted to make high-stakes decisions in our criminal, immigration, and policing systems, and called for prohibitions on its use in facial recognition and other biometric surveillance systems, emotion detection systems, family policing systems, predictive policing systems, and other criminal justice systems that incorporate AI.