On Panel for National AI Committee, UnidosUS Calls for Accountable and Democratic Policies on Artificial Intelligence
On June 9, 2024, Laura MacCleery, senior policy director for UnidosUS, joined esteemed colleagues from civil rights organizations on a panel before the National Artificial Intelligence Advisory Committee (NAIAC). The panel reflected on the Committee’s work on artificial intelligence (AI) policy and leadership over the past year and looked ahead to next steps in regulating AI. The meeting was open to the public and was recorded for later viewing online at ai.gov/naiac.
In her remarks, Ms. MacCleery said:
“At UnidosUS, the nation’s largest Latino-serving civil rights and advocacy organization, we see first-hand how communities of color have been left out and left behind by technological advances. The racialized wealth gap has not budged for forty years—so while technology has transformed our world, it has failed to change much of what truly matters.
“And now the spread of artificial intelligence (AI) could lock that into every place it touches. We just cannot afford more waves of alleged progress that leave fundamental forms of inequality unchallenged. For the Latino community, both the risks and the possibilities of AI are enormous. Latinos will be a stunning 78% of new workers from 2020 to 2030. But gaps in skills training and access to devices and broadband exclude many from emerging tech roles and mean that Latinos’ lives may not even be represented in the models’ training data.
“At the same time, if it can be governed accountably, AI could be a powerful force for good: facilitating connection across language and learning barriers, personalizing education, driving nuanced research, and matching diverse talent with quality jobs of the future.
“When we think about these problems and possibilities, we connect three concepts—constitutional principles, participatory oversight, and capacity-building. We call these Values, Voice, and Investment.
“First, Values. We owe communities who will be impacted first and worst a system that embeds core democratic freedoms and holds technology accountable to shared values. We must protect our elections, bar unaccountable uses of AI by governments and private actors—including predictive law enforcement—create binding rules for data minimization and privacy, require consent and ownership of personal data, study and develop safeguards to prevent manipulation, create provenance and labeling systems for synthetic media, bar non-consensual sexual imagery, compensate and incentivize human creators, and provide legal accountability for harms.
“Second, Voice. Inclusive AI governance is essential. In this area, there has been real progress. The Biden Administration, including the President, the Office of Management and Budget, the National Institute of Standards and Technology, and the National Artificial Intelligence Advisory Committee (NAIAC), as well as many federal departments and agencies, spent the past year grappling squarely with the challenges of regulating and integrating artificial intelligence (AI) into the fabric of government and examining its risks for the public.
“As the final set of deadlines in the Executive Order come due, it’s clear the real work is still ahead. We have a new set of standards and practices for federal agencies—including basic safeguards for rights- and privacy-impacting uses—but these have loopholes for the most important uses by law enforcement and national security, as we noted in recent testimony. And all of these provisions must leave the page, to be realized and evaluated in practice as agencies build a muscle behind them.
“NAIAC’s work over the past year has been notable for pushing to close some of the gaps, including on transparency and law enforcement uses and reporting on high-risk uses of AI, envisioning the AI Safety Institute, addressing fairness and privacy needs, operationalizing a rights-respecting approach, leveraging procurement, and identifying needed steps to bolster AI safety, among other topics. Your reports and findings are thoughtful, and a sound complement to larger efforts.
“Still, the response to AI from both policymakers and Congress requires far more urgency. We must discard the often-unstated presumption that an AI-driven solution is necessarily preferable or inevitable. Instead we must ask ourselves, as a threshold question, whether an AI tool or model is sufficiently accurate, accountable, transparent, fair, and safe to be used in this way. And we must ensure that deployers of AI tools—in both government and the private sector—understand, train for, and communicate publicly about limitations and biases.
“Many AI uses today would fail such a basic test. More than a decade of research demonstrates that algorithms already deployed for consequential decisions in areas like lending and housing are biased. It is likely a mistake, then, to speak of “trustworthy” AI. Trust is not inherent to a car or a computer, for example, and it is not inherent to AI. Instead, trust is an earned and human quality, always measured to purpose and place, and it can be squandered in an instant.
“AI is a powerful tool, even a revolutionary one, but it does not have magical or human qualities, and its uses occur within the same power dynamics as any human institution. It therefore needs the same democratic checks. We agree with NAIAC that we need more robust safeguards for AI, but would urge work over the next year to channel efforts into a cohesive vision for AI accountability and fairness. The Committee’s papers highlight red teaming, incident reporting, and other tools, including impact assessments, participatory approaches, and sharing safety information, for example, but have not yet articulated how these should work together.
“We have proposed creation of real-time public dashboards for large models, alongside shared governance of consequential models in specific settings like housing and lending. Our AI governance framework includes real-time benchmarking, user feedback—also called incident reporting, but far more public and participatory—community advisory committees to assist federal agencies, inclusive red-teaming and impact assessments, and more.
“We envision these as part of an interlocking, layered approach that could create a community of learning for the public sector and enable level-setting on AI outcomes and gaps. We must build a regulatory ecosystem informed by public and lived experience and by evidence. We have developed such systems in the past around new technologies like cars and drugs, and we can do it again.
“The technology sector has dashboards and metrics, psychologists, and ethicists. So our proposals build on tools that are already in use but are not well attuned to public needs or to building public-sector metrics. We must learn what companies already know—or do not care to know—about the impacts of models in use every day, and develop a public evidence base that is as robust, but more inclusive. To do this, we should democratize technology policy and standards development.
“Third, Investment. We must engage impacted communities as equal partners, not just performatively. Investing directly and specifically in impacted communities would ensure the design of AI models is inclusive and accountable, and technological advances are more likely to benefit communities equitably. We have proposed a public-private foundation to be chartered by Congress modeled on the CDC Foundation to provide investments at the community level on workforce development, up-skilling, and advocacy for accountability in tech. Such programs could smooth workforce transitions, generate new education pathways and skills-based hiring, expand broadband access, and support impacted groups to participate peer-to-peer with the tech sector in the AI governance model described here.
“The future of AI should foster shared democratic values and human empowerment, be shaped by inclusive public oversight and real-world grounding about its impacts, and be accompanied by a robust vision for how it can change lives and power economic opportunity for every community. In sum, we urge the Committee to unite the work of NAIAC around a shared vision for prosperity and accountability, and to set out specific recommendations on the governance mechanisms to bring this vision into being.”
To learn more about UnidosUS’s commitment to including the Latino community in conversations around AI, see the documents below:
- Testimony on Three Pillars for AI work: Values, Voice, Investment
- Op-ed on Lessons Learned at the AI Insight Forum
- Written Statement of Janet Murguia at the first AI Insight Forum
- Press Statement on AI EO with Fact Sheet
- Comments and Fact Sheet on OMB Memorandum on AI Executive Order
- Comments on Federal Election Commission Rulemaking on Deepfakes in Elections
- Comments Addressing Concerns on Law Enforcement Technologies
- Written Testimony on Civil Rights Implications of Federal Use of Facial Recognition Technology
- Supplemental Written Testimony on Civil Rights Implications of Federal Use of Facial Recognition Technology
- Press Statement on the Final OMB AI Memo
- Statement on Support of Three AI Elections Bills
- Statement on Release of AI Congressional Roadmap
- Op-Ed in Tech Policy Press on American Privacy Rights Act