AI Triad: A Dialogue Across Differences

On 5 November 2025, Harvard University’s Berkman Klein Center hosted “AI Triad: A Dialogue Across Differences,” featuring Jason Crawford, founder of the Roots of Progress Institute; Amba Kak, co-executive director of the AI Now Institute; and Brian McGrail, policy lead and senior counsel for the Center for AI Safety Action Fund. Moderated by Jonathan Zittrain, faculty director of the Berkman Klein Center, the session explored three competing perspectives on AI development: accelerationist, safetyist, and skeptic, examining where these schools of thought disagree and where they might align.

Opening Introductions

An introducer welcomed attendees to the event, part of the Berkman Klein Center’s fall speaker series, which aims to spark conversations on the technological, regulatory, philosophical, and existential questions posed by AI. She offered brief biographies: Jason Crawford, founder of the Roots of Progress Institute, which is dedicated to building a culture of progress for the 21st century, and author of The Techno-Humanist Manifesto; Amba Kak, co-executive director of AI Now, a US-based research institute that produces diagnosis and actionable policy recommendations addressing artificial intelligence and concentrated power, who had recently completed a term as senior advisor on AI at the Federal Trade Commission; and Brian McGrail, policy lead and senior counsel for the Center for AI Safety Action Fund, a nonpartisan advocacy organization dedicated to mitigating national security risks from advanced AI, who previously served as senior advisor to the Deputy Secretary of Commerce. The event was moderated by Jonathan Zittrain, the center’s faculty director.

Framing the AI Triad

Jonathan Zittrain thanked the introducer and attendees, emphasizing the Berkman Klein Center’s interest in anticipating emerging issues and fostering a broad community able to reconcile differing views on AI. He described the current state of AI as unclear but significant, invoking, tongue in cheek, “the philosopher” Buffalo Springfield (the band behind the line “there’s something happening here; what it is ain’t exactly clear”). Jonathan Zittrain highlighted contrasting perspectives he had encountered in the Bay Area: in Berkeley, concern about catastrophic or existential risks from AI progress; in San Francisco, efforts to accelerate AI advancement. He outlined three points in what he termed the AI Triad: safetyists, worried about uncontrolled experimentation and risks from advanced AI; accelerationists, focused on rapid progress to cure diseases and improve society; and skeptics, who view AI as another technology that reinforces existing hierarchies and brings significant, though not necessarily existential, unintended consequences. Jonathan Zittrain noted potential agreement on AI’s transformative nature but differences in emphasis on risk and controllability. He invited the panel to workshop the terms and, before turning to the guests, polled the in-person audience by hums to gauge alignment with each perspective, finding varied responses, including some uncertainty and overlap.

Jason Crawford’s Perspective: Entering an Intelligence Age

Jonathan Zittrain first asked Jason Crawford for his open-ended sense of AI’s future, the choices involved, and what techno-humanism means. Jason Crawford described AI as clearly something big, with the open question being how big. He recalled discussions in tech circles about what would be the next big thing after the internet era, noting that AI is now clearly the next major development in computing. Economists, he said, are debating whether AI could be a general-purpose technology comparable to the steam engine. Jason Crawford suggested AI might represent the next stage in human evolution, a transition from the agricultural and industrial ages to an intelligence age, in which AI automates mental labor as engines automated physical labor. He pointed to historical economic growth rates increasing over time, from minimal in the Stone Age to higher in the agricultural and industrial periods, and speculated that an intelligence age could sustain 20% annual GDP growth, transforming the world. Jason Crawford viewed AI as significant and transformative, with positive potential if managed well.

Brian McGrail on Risks and Policy Recommendations

Jonathan Zittrain then turned to Brian McGrail, asking for an elevator pitch to Congress, assuming members felt optimistic after Jason Crawford’s remarks. Brian McGrail agreed with much of Jason Crawford’s assessment but emphasized uncertainty and the need to weigh risks before rapid advancement. He highlighted potential benefits but stressed the speed of change, comparing it to the social upheaval of the industrial revolution, possibly compressed into a far shorter period. Brian McGrail raised concerns about AI automating cognitive labor within five to ten years, with profound impacts on economies and societies. He argued that even a 10% chance of catastrophic outcomes warrants precautions, likening it to declining to board a flight with those odds. On policy, Brian McGrail suggested more transparency from frontier developers, citing California’s SB 1047 as a model to federalize, along with third-party audits and pre-deployment testing, while stopping short of heavy-handed regulation for now. He expressed discomfort with private-sector-driven development creating a race dynamic in which labs compete intensely, and suggested government intervention to moderate the pace of this technology. In response to Jonathan Zittrain’s question about catastrophic success versus other risks, Brian McGrail acknowledged displacement as significant but focused on broader catastrophic potentials.

Amba Kak on Power Concentration and Reimagining AI Trajectories

Jonathan Zittrain invited Amba Kak to respond, asking where she entered the conversation. Amba Kak noted that “transformational” carries unexamined assumptions and conceals ruptures. She described AI Now’s work in research, advocacy, and policy to challenge the industry-set trajectory of AI, centered on concerns with power. Tracing AI’s nearly 70-year history, Amba Kak argued the current path worsens concentration in a narrow tech sector, where advancement equates to scale in capital, inputs, and monetization, controlled by giant companies—not by accident but as a product of the past decade’s power consolidation. She called this an unmitigated bad for markets, democracy, and national security due to single points of failure. Amba Kak emphasized that this diagnosis guides strategies to avoid worsening concentration, questioning if interventions inadvertently reinforce it. She suggested workshopping more optimistic names for perspectives, believing the current path is broken and requires audacious hope for alternatives.

Responses to Decentralization and Research Trajectories

Jonathan Zittrain asked Amba Kak whether she would prefer to undo AI advancements or to decentralize them, for example by running advanced models locally. Amba Kak contested equating decentralization with individuals fine-tuning their own models, noting that access alone does not ensure equitable distribution of value, as debates over open-source AI have shown. She advocated shaking up the market through structural competition but also questioned the research trajectory set by large companies’ incentives, given the lack of a vibrant independent research ecosystem. Amba Kak expressed interest in revisiting the paths available since AlexNet in 2012, centering principles of decentralization. Jonathan Zittrain posed a similar question to Brian McGrail on whether widespread AI access would mitigate risks. Brian McGrail agreed with Amba Kak that concentration is a major threat, one that could enable a single company to dominate via superintelligent AI. He cited examples such as export controls and Nvidia’s influence, and suggested antitrust enforcement against deals like OpenAI’s vertical integrations, though he was uncertain whether universal access would fully resolve power issues.


The Berkman Klein Center for Internet & Society at Harvard University is a pioneering hub dedicated to exploring and shaping the intersection of technology, law, and society. Through interdisciplinary research, global partnerships, and public engagement, the Center investigates issues such as digital rights, online safety, and the impact of emerging technologies, empowering scholars, policymakers, and practitioners to build a more equitable, open, and resilient internet ecosystem.

The Conf is a platform that reports on scholarly conferences, symposia, roundtables, book talks, and other academic events. It is managed by a group of students from leading American and European universities and is published by Alma Mater Europaea University, Vienna.
