Kissinger and the Future of AI

On 2 December 2025, the John F. Kennedy Jr. Forum at Harvard Kennedy School’s Institute of Politics—co-sponsored by the Belfer Center for Science and International Affairs—hosted a hybrid discussion titled “Kissinger and the Future of AI.” The session featured Eric Schmidt, former CEO and Chairman of Google and current Chair and CEO of Relativity Space, in conversation with Graham Allison, Douglas Dillon Professor of Government at Harvard Kennedy School, who together examined the evolving role of artificial intelligence by drawing on their recent collaborative work with the late Secretary of State Henry Kissinger.

Opening Remarks and Context

The evening opened with Matteo Kagliero, a Harvard College junior studying applied mathematics and computer science with economics, who welcomed attendees and provided logistical notes. He framed the discussion within what he described as “one of the most pivotal moments in human history,” noting AI’s rapid integration into daily life and its profound implications for global stability, economic competition, and the very nature of human decision-making.

Remembering Henry Kissinger’s Influence

Graham Allison introduced Eric Schmidt by recounting Schmidt’s decade-long friendship and mentorship with Henry Kissinger. He highlighted Kissinger’s remarkable intellectual trajectory—from fleeing Nazi Germany to serving as Harvard professor, war veteran, and national security adviser—and underscored Kissinger’s enduring quest to prevent existential threats to humanity. Allison emphasized that Kissinger saw AI as “another generation of a somewhat analogous problem” to nuclear weapons, provoking fundamental questions about human agency and ethical governance.

From Mentorship to Statesmanship

In his reflections, Eric Schmidt described first encountering Kissinger when the statesman was already in his eighties and being struck by his intellectual vigor. He shared anecdotes, from Kissinger’s immigrant father working in a shaving-brush factory to the Harvard rule capping undergraduate thesis length that followed Kissinger’s own record-breaking submission, that illustrated Kissinger’s remarkable polymathic talents and his lifelong dedication to strategic thinking. Schmidt remarked that Kissinger’s guiding principle was that there is “no higher duty than to prevent the catastrophe of nuclear war,” a mindset he carried into discussions of AI’s potential and perils.

The “San Francisco Consensus” on AI Development

Turning to AI’s technical trajectory, Schmidt outlined what he dubbed the “San Francisco consensus,” wherein rapid advances in language models, agent-based workflows, and emerging reasoning capabilities are driven by relentless scaling of data, compute, and model size. He noted successive breakthroughs—from OpenAI’s GPT-4.1 to Google’s Gemini 3—illustrating that “if you put more data and more electricity and more chips, you get this emergent behavior.” Schmidt projected that while today’s models still require human-directed training, recursive self-improvement could emerge within a few years, raising urgent questions about when to impose boundaries to preserve human agency.
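The scaling dynamics Schmidt alludes to are often formalized as empirical power laws. As an illustrative aside (not part of the talk), the widely cited Chinchilla fit of Hoffmann et al. (2022) predicts pretraining loss from parameter count N and training-token count D; the sketch below uses the published constants, which should be read as indicative rather than exact:

```python
# Illustrative sketch of a Chinchilla-style scaling law (Hoffmann et al., 2022).
# Predicted loss falls smoothly as parameters N and training tokens D grow,
# which is the quantitative intuition behind "more data, more chips" scaling.
# The constants are the published fit; real frontier models will differ.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss L(N, D) = E + A/N^alpha + B/D^beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling model size 10x (with ~20 tokens per parameter, the compute-optimal
# ratio from the same paper) lowers predicted loss monotonically:
for n in (1e9, 1e10, 1e11):
    print(f"N={n:.0e}  predicted loss={chinchilla_loss(n, 20 * n):.3f}")
```

Note that a power law of this form improves smoothly and predictably; the “emergent behavior” Schmidt describes refers to downstream capabilities that appear abruptly even as the underlying loss curve declines gradually.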

Divergent Paths: U.S. versus China

The discussion then turned to geopolitical competition. Schmidt contrasted the U.S. approach, characterized by closed-source, venture-backed innovation driven by economic incentives, with China’s surprisingly open-source strategy, which proliferates AI models worldwide. He argued that China’s focus on embedding AI in every industry, together with its state-backed expansion of renewable energy and data-center capacity, could give it an outsized advantage in consumer-facing applications and robotics manufacturing. Yet he cautioned that both models carry risks, from the diffusion of intellectual property to misaligned values and embedded bias.

Ethical and Strategic Imperatives

Throughout the dialogue, both speakers stressed the need for ethical guardrails. Schmidt and Allison urged the audience, many of whom will lead organizations facing these challenges, to grapple with questions Kissinger posed two decades ago: What does it mean to be human in the age of AI? How should societies regulate technologies that can automate workflows, shape information, and even influence children’s development? And at what point must governments or multinational bodies step in to prevent “AI arms races of existential nature”?

Looking Ahead

As the session drew to a close, Eric Schmidt and Graham Allison challenged students and policymakers to pursue interdisciplinary research on AI’s societal impacts, U.S.-China strategic dynamics, and the design of institutions capable of safeguarding freedom and human dignity. They emphasized that AI’s promises—from automating mundane tasks to accelerating scientific discovery—must be balanced against the profound ethical and geopolitical dilemmas that Henry Kissinger first urged them to confront.


The John F. Kennedy Jr. Forum at Harvard Kennedy School’s Institute of Politics is a premier venue for timely discussions on public policy and leadership. By convening thought leaders, practitioners, and students, the Forum fosters rigorous debate on global and domestic issues, inspiring informed civic engagement and shaping the next generation of public servants.

The Conf is a platform that reports on scholarly conferences, symposia, roundtables, book talks, and other academic events. It is managed by a group of students from leading American and European universities and is published by Alma Mater Europaea University, Vienna.
