Keep The Future Human

New Orleans, Jan 26-28, 2026

Preamble (written by The Future of Life Institute)

As companies race to develop and deploy AI systems, humanity faces a fork in the road. One path is a race to replace: humans replaced as creators, counselors, caregivers and companions, then in most jobs and decision-making roles, concentrating ever more power in unaccountable institutions and their machines. An influential fringe even advocates altering or replacing humanity itself. This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance. It also imperils the human experiences of childhood and family, faith, and community.

A remarkably broad coalition rejects this path, united by a simple conviction: artificial intelligence should serve humanity, not the reverse. There is a better path, where trustworthy and controllable AI tools amplify rather than diminish human potential, empower people, enhance human dignity, protect individual liberty, strengthen families and communities, preserve self-governance and help create unprecedented health and prosperity. This path demands that those who wield technological power be accountable to human values and needs, in support of human flourishing.

The Pro-Human AI Declaration 

The signatories agree with the spirit of the following principles, which help guide this better path:

1. Keeping humans in charge

Human Control Is Non-Negotiable: Humanity must remain in control. Humans should choose how and whether to delegate decisions to AI systems.

Meaningful Human Control: Humans should have authority and capacity to understand, guide, proscribe, and override AI systems.

No Superintelligence Race: Development of superintelligence should be prohibited until there is broad scientific consensus that it can be done safely and controllably, and there is strong public buy-in.

Off-Switch: Powerful AI systems must have mechanisms that allow human operators to promptly shut them down.

No Reckless Architectures: AI systems must not be designed so that they can self-replicate, autonomously self-improve, resist shutdown, or control weapons of mass destruction.

Independent Oversight: Highly autonomous AI systems where controllability is not obvious require pre-development review and independent oversight: genuine authority to understand, prohibit, and override, not industry self-regulation.

Capability Honesty: AI companies must provide clear, accurate and honest representations of their systems’ capabilities and limitations.

2. Avoiding concentration of power

No AI Monopolies: AI monopolies that concentrate power, stifle innovation, and imperil entrepreneurship must be avoided.

Shared Prosperity: The benefits and economic prosperity created by AI should be shared broadly.

No Corporate Welfare: AI corporations should not be exempted from regulatory oversight or receive government bailouts.

Genuine Value Creation: AI Development should prioritize solving real problems and creating authentic value.

Democratic Authority Over Major Transitions: Decisions about AI's role in transforming work, society, and civic life require democratic support, not unilateral corporate or government decree.

Avoid Societal Lock-In: AI development must not severely limit humanity's future options or irreversibly limit our agency over our future.

3. Protecting the human experience

Defense of Family and Community Bonds: AI should not supplant the foundational relationships that give life meaning—family, friendship, faith communities, and local connections.

Child Protection: Companies must not be allowed to exploit children or undermine their wellbeing through AI interactions that create emotional attachment or leverage.

Right to Grow: AI companies should not be allowed to stunt children's physical, mental or social growth or deprive them of essential experiences for healthy development during critical periods.

Pre-Deployment Safety Testing: As with drugs, chatbots must undergo pre-deployment testing for increased suicidal ideation, exacerbation of mental health disorders, escalation of acute crisis situations, and other known harms.

Bot-or-Not Labeling: AI-generated content that could reasonably be mistaken for human-generated must be clearly labeled as such.

No Deceptive Identity: AI should clearly and correctly identify itself as artificial, non-human, and not a professional, and it should not claim experiences it lacks.

No Behavioral Addiction: AIs should not cause addiction or compulsive use through manipulation, sycophantic validation, or attachment formation.

4. Human Agency and Liberty

No AI Personhood: AI systems must not be granted legal personhood, and AI systems should not be designed such that they deserve personhood.

Trustworthiness: AI must be transparent, accountable, reliable, and free from perverse private or authoritarian interests.

Liberty: AI must not curtail individual liberty, freedom of speech, religious practice, or association.

Data Rights and Privacy: People should have power over their personal data, with rights to access, correct, and delete it from active systems, AI training sets, and derived inferences. 

Psychological Privacy: AI should not be allowed to exploit data about the mental or emotional states of users.

Avoiding Enfeeblement: AI systems should be designed to empower, rather than enfeeble, their users.

5. Responsibility and Accountability for AI Companies

No Liability Shield: AI must not be able to act as a liability shield, preventing those deploying it from being legally responsible for their actions.

Developer Liability: Developers and deployers bear legal liability for defects, misrepresentation of capabilities, and inadequate safety controls, with statutes of limitation that account for harms emerging over time.

Personal Liability: There should be criminal penalties for executives responsible for prohibited child-targeted systems or ones causing catastrophic harm.

Independent Safety Standards: AI development shall be governed by independent safety standards and rigorous oversight.

No Regulatory Capture: AI companies must not be allowed undue influence over rules that govern them.

Failure Transparency: If an AI system causes harm, it should be possible to ascertain why as well as who is responsible.

AI Loyalty: AI systems performing functions in professions with fiduciary duties, such as health, finance, law, or therapy, must fulfill all of those duties, including mandated reporting, duty of care, conflict-of-interest disclosure, and informed consent.

Agenda

3:00-6:00p — Check-in
6:00-8:00p — Evening reception and introductions

Bios

Anthony Aguirre

Executive Director

Future of Life Institute

Ranya Ahmed

Head of Analytics

ACLU

Dr. Ranya Ahmed (she/her) is a passionate social justice advocate, with a decade of experience in the nonprofit and academic sectors. Ranya currently serves as the Head of Analytics at the American Civil Liberties Union (ACLU), where she oversees a team of data analysts, data scientists, analytics engineers, and subject matter experts, including the ACLU's technical algorithmic justice work. She also serves on Amnesty International's Board of Directors.

JOEBOT Allen

Tech Editor

War Room

Julianna Arnold

Founding Member and Executive Director

Parent RISE!

Mackenzie Arnold

Director of US Policy

Institute for Law & AI

Mackenzie is Director of US Policy at LawAI, where he provides analysis and advice to ensure that advances in AI benefit the public at large. His research focuses on administrative law, agency decision making, and liability. Prior to joining LawAI, Mackenzie clerked for Judge Joseph A. Greenaway, Jr. of the Third Circuit Court of Appeals, worked in public health law at a New York nonprofit, and graduated, cum laude, from Harvard Law School.

Ailen Arreaza

Executive Director

ParentsTogether

Ailen Arreaza is Executive Director and Co-founder of ParentsTogether, a national nonprofit impact media organization that reaches 1 in 3 parents in the United States each year. Through innovative and engaging content, Ailen and her team are building deep, trusting relationships with millions of parents across the country, helping to shape their worldviews and raise their expectations about what families deserve, and organizing them to take action on the issues that most matter to families, including online safety and tech accountability. Ailen lives in Charlotte, NC with her husband, two sons, and her mother, and she is a native of Havana, Cuba.

Adam Billen

VP of Public Policy

Encode AI

Joann Bogard

Founding Member

Parents SOS

I am Joann Bogard from Indiana and I lost my 15-year-old son Mason to a viral social media trend called the "choking challenge" in 2019. I am a strong advocate for legislation that holds big tech accountable for being proactive in designing AI products that are safe.

Malo Bourgon

CEO

Machine Intelligence Research Institute

Brian Boyd

U.S. Faith Liaison

Future of Life Institute

Brian J. A. Boyd is a moral theologian serving as U.S. Faith Liaison for the Future of Life Institute. His role is to be a resource for the religious, offering information about how AI is and will likely be affecting their congregations, encouragement and connections to build agency for the faithful, and collaboration on joint efforts to promote human flourishing and a human future. Boyd is also an affiliated scholar of the Institute for Advanced Catholic Studies at USC, a board member of Church Life Africa, a strategic consultant to The New Atlantis, and the lead author for AEI's forthcoming AI Ethics Council.

Catherine Bracy

Founder and CEO

TechEquity

Mark Brakel

Global Director of Policy

Future of Life Institute (FLI)

Daniel Bring

Executive Editor

American Affairs

Donna Broughan

Event Manager

Worldview Studio

Andrew Broz

AI and Advanced Tech Research Lead

Civilization Research Institute

Andrew Broz is an AI risk researcher currently working for the Civilization Research Institute. He is interested in both the capabilities and limitations of current and near-future AI systems, as well as in the ways that software and automation shape human behaviors and values.

Justin Bullock

VP of Policy

ARI

I am Dr. Justin B. Bullock. I was a tenured professor of public administration and public policy at Texas A&M University. For about a decade I have been concerned both with how digital technologies are reshaping our collective humanity and with how difficult it may be to control and shape the incentives of advanced AI. I am currently the VP of Policy at ARI, and I have numerous academic publications, blogs, and published books on these topics.

Camille Carlton

Director of Policy

Center for Humane Technology

Lachlan Carroll

Special Projects Associate

Center for AI Safety

Lachlan Carroll, Center for AI Safety, AI safety and policy researcher.

Hamza Chaudhry

AI and National Security Lead

Future of Life Institute

Himangini Chauhan

Youth Advisory Board Member

Plan International USA

Himangini is an undergraduate student pursuing a degree in Mechanical Engineering with an interest in the intersection between STEM and education equity. A Plan USA Youth Advisory Board member and Youth Leadership Academy alum, she has mentored young changemakers in her community, and led sessions on Youth Civic and Political Participation and Protection from Gender-Based Violence. The Youth Advisory Board is a body of young people ages 14-22 from around the U.S. who work towards Plan’s global mission of advancing diverse perspectives and ensuring the agency of youth in building a better world.

Daniel Cochrane

Senior Researcher

The Heritage Foundation

Daniel Colson

Executive Director

The AI Policy Network

Sydney Cullen

Associate Director

EconTAI - University of Virginia

Sydney Cullen is the Associate Director of EconTAI (Economics of Transformative AI), an initiative out of the University of Virginia. EconTAI's goal is to equip leaders, policymakers, and society at large with the economic insights needed to harness AI's transformative potential while managing its risks—creating an AI-enabled economy that works for everyone. Prior to EconTAI, Sydney worked in federal consulting, helping to develop Responsible AI policy for the Department of Defense.

Ben Cumming

Communications Director

Future of Life Institute

Zachary Davis

Executive Director

Faith Matters

Zachary Davis is the Executive Director of Faith Matters and the Editor of Wayfare Magazine. He is also the co-founder of Organized Intelligence, an initiative that brings together Latter-day Saint voices from across disciplines to explore the ethical, social, and religious implications of AI.

Andrea Dehlendorf

Co-Lead

Democracy Takes Work

Vivian Dong

Programs Director

LASST

Rob Drake

Executive Producer

Dash Pictures

Holly Elmore

Founder and Executive Director

PauseAI US

Beatrice Erkers

Existential Hope Program Director

Foresight Institute

I’m Beatrice Erkers, Program Director for the Existential Hope program at the Foresight Institute. My work focuses on exploring and articulating the kinds of futures we want to create in the face of transformative technologies like AI. I’m particularly interested in pro-human approaches that reduce suffering, preserve human agency and dignity, and ensure technological progress continues meaningfully and genuinely helps all beings thrive.

Alison Fell

Operations Director

Worldview Studio

Andrea Fiegl

Senior Policy Director, Media and Technology

Common Cause

Andrea Fiegl brings nearly two decades of cross-sector experience to questions of democracy, technology, and governance. She has directed U.S. Government investments of more than $250M at USAID, advanced bipartisan foreign policy strategy on the Senate Foreign Relations Committee, and, most recently, is leading national media and technology policy at Common Cause, a nonpartisan organization dedicated to strengthening democratic institutions and advancing policy in the public interest. Her research and policy work focus on civil and political rights in the context of emerging technologies, with particular attention to the governance of artificial intelligence. She has held fellowships with the National Endowment for Democracy, the Wilson Center, and the Institute for AI Policy and Strategy (IAPS). She is also currently a Fellow with The Future Society, where she focuses on cross-walking AI governance frameworks between the US and EU. Trained in ethics and political philosophy, Andrea combines analytical rigor with practical expertise across government, civil society, and global technology policy.

Beatrice Fihn

Director

Lex International

Beatrice Fihn is the director of Lex International Fund and the 2017 Nobel Peace Prize laureate. She developed and led the International Campaign to Abolish Nuclear Weapons (ICAN) and its efforts to get a UN treaty on the prohibition of nuclear weapons. She has extensive experience with building partnerships and coalitions between governments, international organisations, academic institutions and non-governmental actors to come together and work collectively for global governance solutions.

Lara Galinsky

Head of Partnerships

Project Liberty

Megan Garcia

President

Blessed Mother Family Foundation

John Garrett

Managing Director, Research

Panterra

Vael Gates

Founder

Humans in Control

Vael Gates is the Executive Director and Founder of Humans in Control (humansincontrol.org), an early-stage bipartisan mass movement organization. Prior to this role, Vael worked on technical AI safety field-building at various organizations.

Sunny Glottmann

Policy & Programs Manager

AFL-CIO Tech Institute

Katja Grace

Lead Researcher

AI Impacts

Mark Graves

Research Director

AI and Faith

Mark Graves is Research Director at AI and Faith, Research Associate Professor of Psychology at Fuller Theological Seminary, and Visiting Scholar at University of San Francisco. In addition to earning his PhD in computer science (artificial intelligence at University of Michigan) and an MA in systematic and philosophical theology (at Graduate Theological Union and Jesuit School of Theology at Berkeley), he has completed fellowships in genomics, moral psychology, and moral theology. He held teaching or research positions at eight institutions of higher learning and published ninety technical and scholarly works in computer science, biology, psychology, theology, and ethics, including three books. Mark also has 15 years’ experience developing AI and data solutions in biotech, pharmaceutical, and healthcare industries.

Liz Grise

On Site Logistical Support

Worldview Studio

Julie Guirado

Chief Operating Officer

Center for Humane Technology

Saheb Gulati

Special Projects

Center for AI Safety

Bobby Halick

Director of Content & Engagement

HitRecord

Bobby Halick is Director of Content & Engagement at Joseph Gordon-Levitt's HITRECORD AI safety digital media fund. I work to partner with top social media creators to spread research-backed messages about AI safety through native content. Our messages focus on the potential harms, but also on how we can harness AI for good.

Isabella Hampton

Associate, Futures Program

Future of Life Institute

Lucas Hansen

Co-founder

CivAI

Lucas Hansen, co-founder of CivAI. CivAI is a non-profit that educates the public, civil society groups, and the government about AI, primarily through the use of live software demos. Our goal is to give people deep intuition about what AI can do and to create emotional urgency about the risks by giving them personal experience with the technology.

Chase Hardin

US Communications Manager

Future of Life Institute

Dalia Hashad

Chief State Strategy Officer

The Future of Life Institute

David Haussler

Professor, Director of Genomics Institute

UC Santa Cruz

My PhD was in Computer Science with an emphasis on machine learning. I contributed to machine learning research during the first decade of the NeurIPS conferences and the Computational Learning Theory (COLT) conferences, the latter of which I co-organized. I received the 2003 Allen Newell Award from the American Association for Artificial Intelligence and the ACM for my work. While my main research lately is in neuroscience and genomics, I have kept up with and teach AI and machine learning. I am a member of the NAS and the NAE.

Sacha Haworth

Founder/Executive Director

The Tech Oversight Project

Sam Hiner

Executive Director

Young People’s Alliance

Sam Hiner is the Executive Director of the Young People's Alliance (YPA). The Young People's Alliance is a youth-led movement to reclaim the American Dream by securing opportunity, affordability, and community in the age of AI. YPA organizes students across 55 college and high school campuses in 5 states, develops bipartisan, pro-youth policy solutions, and advocates at the state and federal level. Sam leads the Human-like AI Coalition, which brings together leading organizations across party lines to create and pass policy to address AI companions' impact on human connection.

Wes Hodges

Acting Director, Center for Technology and the Human Person

The Heritage Foundation

Joshua Hughes

Assistant Pastor

Greater Grace Christian Center

As an accomplished bilingual sales professional with more than 4 years of commission-based quota experience, my objective is to harness the wisdom of the academic and financial sectors, to revitalize the interest of a generation pivoting away from our legacy/spiritual institutions. Colleagues would describe me as a detail-oriented, personable ambassador, with in-depth knowledge of story-telling, problem solving, reporting and presentation fundamentals.

Krystal Jackson

Research Director

Black in AI Safety and Ethics

Ellen Jacobs

Principal

Omidyar Network

Ellen Jacobs serves as a principal at Omidyar Network. In this role, she leads work on tech governance and shapes policies and advocacy strategies on issues such as artificial intelligence, competition, and privacy. Prior to joining Omidyar Network, Ellen served as Senior Adviser on AI Policy at Reset Tech, a non-profit focused on realigning digital media markets with democratic values. At the Institute for Strategic Dialogue, she established the organization’s U.S. digital policy team, built coalitions to advance transparency legislation, and advised Congressional staff, federal regulators, state lawmakers, and civil society partners on transparency, data access, AI, and safety.

Anna Jahn

Executive Director

Centre for Media, Technology and Democracy at McGill University

Anna Jahn is the Executive Director of the Centre for Media, Technology and Democracy at McGill University, where she also serves as Associate Professor (Research) at the Max Bell School of Public Policy. She leads research and policy initiatives at the intersection of artificial intelligence, democratic governance, and public policy. Previously, as Senior Director of Public Policy and Inclusion at Mila, the world's largest AI research institute in deep learning, Anna contributed to UN AI governance discussions, including recommendations for the UN's Independent International Scientific Panel on Artificial Intelligence. She launched the Mila AI Policy Fellowship, connecting researchers and practitioners to produce evidence-based policy briefs, and directed the Indigenous Pathfinders in AI program, fostering AI solutions for Indigenous communities.

Meetali Jain

Director

Tech Justice Law Project

Emilia Javorsky

Director, Futures

The Future of Life Institute

Tyler John

Senior Program Officer, Artificial Intelligence

Effective Institutions Project

Tyler John is the Senior Program Officer for Artificial Intelligence at the Effective Institutions Project and the author of The Foundation Layer: Philanthropic strategy for the AGI transition. In his career he’s advised the giving of the Musk Foundation, TED Audacious, the John and Daria Barry Foundation, Good Ventures, the Waking Up Foundation, the Berggruen Foundation, and more than a dozen other philanthropists on AGI safety and preparedness. Tyler holds a PhD in analytic philosophy and democratic theory from Rutgers University—New Brunswick, and has published papers in law, economics, political science, health care ethics, queuing theory, and the philosophy of cognitive science in some of the top academic journals.

Will Jones

Associate, Futures Program

Future of Life Institute

Will Jones is a Futures Program Associate at the Future of Life Institute (FLI), host and organiser of this event. My work leading FLI's religious engagement feeds into this broader pro-human movement by providing religious diversity and representation, and a good set of potential communities (and influential avenues for change) for building bipartisan momentum.

DZ Kalman

Senior Researcher

Shalom Hartman Institute

Darius Kemp

Executive Director

Common Cause CA/CITED

Michael Kleinman

Head of US Policy

Future of Life Institute

Evan Davison Kotler

Director, Research & Health Security

Helena

David Krueger

Founder & Assistant Professor

Evitable; University of Montréal & Mila

David is an AI professor at Mila and the CEO and Founder of Evitable, a nonprofit whose mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligence. David has been an influential advocate on AI risk within the AI research community for over a decade. In 2023, he initiated the CAIS Statement on AI Risk, alerting the world to the growing expert concern that AI might lead to human extinction.

Mike Kubzansky

CEO

Omidyar Network

Connor Leahy

CEO

Conjecture Ltd

Connor Leahy is an advisor to ControlAI and a major voice in the AI conversation, having previously founded EleutherAI, a grassroots community that built the first open-source LLMs, and founded the AI safety startup Conjecture.

Cristine Legare

Professor

The University of Texas at Austin

Cristine Legare is a professor of psychology and the director of the Center for Applied Cognitive Science at The University of Texas at Austin. Her research examines how the human mind enables us to learn, create, and transmit culture. She conducts comparisons across age, culture, and species to address fundamental questions about cognitive and cultural evolution. Her work on cultural transmission and innovation provides critical insights into how human societies can shape technological development—including AI—to reflect our values and preserve what makes us distinctively human, rather than allowing technology to evolve in ways that erode our cultural agency and diversity.

Brie Linkenhoker

Founder & CEO

Worldview Studio

Cass Madison

Executive Director

Center for Civic Futures

Cassandra Madison is the Executive Director of the Center for Civic Futures, a nonprofit that helps state, tribal, and territorial governments make thoughtful decisions about AI in public services. Her work focuses on building institutional capacity and actively bridging policy, operations, and real-world implementation. She previously led major public-sector technology transformations and now convenes government leaders, researchers, and practitioners to shape responsible, human-centered approaches to emerging technology.

Shana Mansbach

VP of Strategy

Fathom

Shana is VP of Strategy and Communications at Fathom, where she leads the organization’s efforts to build out the policy solutions needed for our transition to a world with AI. She comes to Fathom after more than a decade serving in the U.S. Government. Most recently, she served as senior advisor to Secretary of State Antony J. Blinken; previous public service positions include campaign speechwriter and advisor to Vice President Kamala Harris, deputy director of communications for House Speaker Nancy Pelosi, and advisor to Secretary of State John Kerry. She concurrently serves as a Senior Adjunct Fellow at the Center for New American Security and as a Special Government Employee at the National Security Commission on Emerging Biotechnology.

Kate McCarthy

Director of Programs

Women's Media Center

Kate McCarthy is the Director of Programs for the Women's Media Center, a non-profit feminist organization that works to make diverse women and girls more visible and powerful in media. Working on the prevention of online GBV, particularly against journalists and civil society leaders, led to an interest in technology-aided abuse, including deepfakes and nudify tools. Working to ensure that human values and oversight rules are written into AI development is a priority if we are to ensure that the human rights won over these last decades are protected.

John McElligott

CEO

Servitium AI and Serviti Corp

Wes McEnany

Labor Outreach and Coalition Builder

Future of Life

Colin McGlynn

AI Policy Advisor

Demand Progress

Colin McGlynn is the AI Policy Advisor at Demand Progress. Demand Progress believes that everyone worried about AI, regardless of their reason for concern, is on the same side and that the current tensions between "AI Ethics" and "AI Safety" are counterproductive for both sides. Previously, Colin spent a decade working in startups in engineering and engineering management.

Mark Medish

Vice Chair

Panterra

Medlir Mema

Director

Organized Intelligence

Alan Minsky

Executive Director

Progressive Democrats of America (PDA)

Geoff Mitelman

Founding Director

Sinai and Synapses

Rabbi Geoffrey A. Mitelman is the Founding Director of Sinai and Synapses, an organization that bridges the scientific and religious worlds, and is being incubated at Clal – The National Jewish Center for Learning and Leadership. His work has been supported by multiple grants from the John Templeton Foundation and Templeton Religion Trust, and he was co-editor of the Fall 2025 Issue of the CCAR Journal on “AI and the Rabbinate.” His writings about the intersection of religion and science have been published in the books Striving to Be Human, Seven Days, Many Voices and A Life of Meaning, (all published by the CCAR press) and These Truths We Hold (published by HUC Press) as well as on The Huffington Post, Jewish Telegraphic Agency, My Jewish Learning, Nautilus, The Wisdom Daily, and Orbiter. He has been an adjunct professor at both the Hebrew Union College – Jewish Institute of Religion and the Academy for Jewish Religion, as well an ambassador to the Island of Knowledge, and is an internationally sought-out teacher, presenter, and scholar-in-residence.

Esha Mufti

Head of Strategy & Activation

The B Team

Brandie Nonnecke

Sr. Policy Director

Americans for Responsible Innovation

Patrick Oakford

Director, Worker Power and Economic Security

Roosevelt Institute

Teri Olle

VP

Economic Security California (ESP)

Teri Olle is Vice President at Economic Security Project, and leads the work of its affiliate, Economic Security California. Teri advances policies and campaigns that tackle the affordability crisis from both ends: First, by putting money into the pockets of people who need it most—direct cash such as guaranteed income and tax credits; and equally important, by blunting the forces that take money out of people’s pockets: corporate concentration and power that result in broken markets, higher prices and fewer choices. As tech and AI have exacerbated these problems, her work focuses on critical questions at the intersection of tech and the political economy: who decides the future of this burgeoning technology, whom it benefits, and who bears the risks? Teri advocates for a tech future that delivers the broad-based prosperity that is often promised, but has yet to be realized.

Jeremy Ornstein

Movement Strategy

Center for AI Safety

Riki Parikh

Policy Director

The Alliance for Secure AI

Riki Parikh serves as Policy Director at The Alliance for Secure AI, where he leads strategy to build bipartisan support for smart, enforceable safeguards that ensure artificial intelligence is developed and deployed responsibly, transparently, and in the public interest. He brings more than a decade of senior experience at the intersection of public policy, strategic communications, and law—including most recently as Senior Counselor to the Secretary at the U.S. Department of Homeland Security, where he advised on national security and emerging technology issues. Previously, he held leadership roles in global public affairs and policy communications at Meta and LinkedIn, shaping corporate engagement with policymakers around the world. He spent six years on Capitol Hill, as counsel to U.S. Senator Michael Bennet and as press secretary to U.S. Senator Mark Warner.

Brett Puterbaugh

AI Governance Lead

The Church of Jesus Christ of Latter-day Saints

I work as the AI Governance Lead for The Church of Jesus Christ of Latter-day Saints. I serve on our AI governance committee with a focus on how we are using AI in outward-facing products.

Philip Reiner

CEO

Institute for Security and Technology (IST)

Philip Reiner is founder and CEO of the Institute for Security and Technology, a global nonprofit think tank whose vision is a democratic world secured and empowered by technology built on trust. Philip served for four years on the National Security Council under President Obama and as a civil servant in the Office of the Secretary of Defense for Policy, and earlier spent years in the defense technology industry in Raytheon's Space and Airborne Systems division. At the core of Philip and IST's work is a commitment to ensuring technology enhances the safety and security of as many people as possible. Since 2017, IST's work relevant to this convening has included projects on AI and cybersecurity, nuclear weapons, strategic stability, AGI, risk reduction, open source, cognitive security, export controls, democracy, disinformation, and much more.

Alison Rice

Managing Director, Senior Advisor

Design It For Us

Alison Rice is the Managing Director and Senior Advisor of Design It For Us, a youth coalition leveraging grassroots power to disrupt Big Tech’s harmful business models. Launched in 2023, the coalition has mobilized young people across the country to use their voices and demand that products are designed with safety and privacy at the center. To date, DIFU has helped drive the successful passage of five state laws. Alison's background includes corporate accountability and economic justice campaigning, among other crosscutting issues, in roles at Accountable Tech, The Hub Project, and EMILY's List.

Guillaume Riesen

Science Engagement Designer

Worldview Studio

Ari Rosenthal

AI Governance Strategist

Torchbearer Community

My name is Ari Rosenthal. I am an educator, organizer, and AI governance strategist currently working with Torchbearer Community to address AI risk at the federal and local levels in Massachusetts. I work with lawmakers to develop talking points, joint letters, and policy frameworks, and with local communities to educate them on the responsible use of AI and its impacts.

Marc Rotenberg

Executive Director

Center for AI and Digital Policy

Founder of the Center for AI and Digital Policy, a global network of AI policy experts and human rights advocates. Author of the "AI Policy Sourcebook," a compendium of AI governance frameworks and policy resources, and "AI and Democratic Values," a comprehensive review of AI policies and practices worldwide. Member of the OECD AI Group of Experts, which drafted the first governance framework for AI, and a contributor to the Universal Guidelines for AI.

Emma Ruby-Sachs

Executive Director

Ekō

My name is Emma Ruby-Sachs. I'm a lawyer by training, but I've been a campaigner for human rights and corporate responsibility for the last 15 years. I currently run Ekō, a non-profit campaigning community with 23 million global members. We're heavily focused on helping to curb the long-term risks and harms associated with AI development worldwide.

Laura Ryan

Advisor, Renovating Democracy

Berggruen Institute

Laura J. Ryan is a democracy advisor to the Renovating Democracy program at the Berggruen Institute, where she works on participatory and deliberative approaches to democratic governance amid rapid technological change. Through this work, she has advised Governor Gavin Newsom’s administration and the State of California on Engaged California, a statewide digital platform for citizen participation, and serves as an advisor to Change.org’s emerging citizen engagement platform. At Berggruen, she also co-developed and led Imaginative Intelligences, a Mozilla-supported initiative exploring creativity, narrative, and human agency in the age of AI. Earlier in her career, Ryan served as Press Secretary and Digital Media Specialist for Jared Polis in the U.S. Congress, launched National Journal’s technology policy vertical as a reporter, and taught technology ethics with Michael Sandel. She studied at Wellesley College, Harvard Divinity School, and MIT, where she studied with Sherry Turkle, and grew up in Silicon Valley.

Mungkol Sarin

Co-founder and Chief Research Officer

AI Safety Asia

Dr. Supheakmungkol Sarin researches the societal and second-order impacts of AI, advocating for frameworks grounded in equity, inclusion, and human rights to ensure humanity’s flourishing. As Co-founder and Chief Research Officer of AI Safety Asia, he leads work on safety and cross-border governance. Previously, he headed Data & AI Ecosystems at the World Economic Forum and led inclusive AI initiatives at Google that empowered over one billion users, while serving as an advisor to governments, the World Bank, and UNESCAP.

Calli Schroeder

Director: AI and Human Rights Program

The Electronic Privacy Information Center (EPIC)

Calli Schroeder is the Director of the AI and Human Rights Program at the Electronic Privacy Information Center (EPIC). Her work focuses on AI risks: how AI systems as they currently exist affect people's lives, and how they can be redesigned to improve those lives rather than exploiting people's data, eroding opportunities, or perpetuating harms.

Julia Senkfor

Research Associate + JOSHUA Program Manager

American Security Foundation

Julia Senkfor manages ASF's research and Jewish Online Safety, Health, and Unity Alliance (JOSHUA), the umbrella under which ASF organizes its work on AI-enabled antisemitism, AI-enabled disinformation, and other forms of AI misalignment and manipulation. She recently published a report examining antisemitism in large language models (LLMs), exposing that foreign entities are employing novel tactics to manipulate narratives and influence opinions through AI systems.

Juli Sherry

Director of Design

Worldview Studio

Nate Soares

President

MIRI

Oliver Stephenson

Associate Director for AI and Emerging Technology Policy

Federation of American Scientists

Josh Tan

Product & Strategy

Public AI

Josh is a computer scientist and mathematician. He leads product and strategy at Public AI, including at the Public AI Inference Utility and Airbus for AI. He also founded and leads research at Metagov, a research lab for engineering institutions. His work studies the intersection between AI and collective intelligence.
