
Listen to The Dynamist
Podcast by the Foundation for American Innovation
The Dynamist, a podcast by the Foundation for American Innovation, brings together the most important thinkers and doers to discuss the future of technology, governance, and innovation. The Dynamist is hosted by Evan Swarztrauber, former Policy Advisor at the Federal Communications Commission. Subscribe now!
All episodes
178 episodes
The race to harness AI for scientific discovery may be the most consequential technological competition of our time—yet it's happening largely out of public view. While many AI headlines focus on chatbots writing essays and tech giants battling over billion-dollar models, a quiet revolution is brewing in America's laboratories. AI systems like AlphaFold (which recently won a Nobel Prize for protein structure prediction) are solving scientific problems that stumped humans for decades. A bipartisan coalition in Congress is now championing what they call the "American Science Acceleration Project," or ASAP—an audacious plan to make U.S. scientific research "ten times faster by 2030" through strategic deployment of AI. But as federal science funding faces pressure and international competition heats up, can America build the AI-powered scientific infrastructure it needs? Will the benefits reach beyond elite coastal institutions to communities nationwide? And how do we ensure that as AI transforms scientific discovery, it creates opportunities instead of new divides? Joining us is Austin Carson [https://x.com/austincarson?lang=en], Founder and President of SeedAI [https://www.seedai.org/], a nonprofit dedicated to expanding AI access and opportunity across America. Before launching SeedAI, Carson led government affairs at NVIDIA and served as Legislative Director for Rep. Michael McCaul. He's been deep in AI policy since 2016—ancient history in this rapidly evolving field—and recently organized the first-ever generative AI red-teaming event at DEF CON [https://www.hackthefuture.com/defcon], collaborating with the White House to engage hundreds of college students in identifying AI vulnerabilities.

It’s easy to take for granted how much social media pervades our lives. Depending on the survey, upwards of 75-80 percent of Americans use it daily—not to mention billions of people around the world. And over the past decade, we’ve seen a major backlash over the various failings of Big Tech. Much of the ire of policymakers has been focused on content moderation choices—what content gets left up or taken down. But arguably there hasn’t been much focus on the underlying design of social media platforms. What are the default settings? How are the interfaces set up? How do the recommendation algorithms work? And what about transparency? What should the companies disclose to the public and to researchers? Are they hiding the ball? In recent years, policymakers have started to take these issues head on. In the U.S., more than 75 bills targeting the design and operation of algorithms have been introduced at the state and federal level since 2023, and more than a dozen have been passed into law. Last year, New York and California passed laws attempting to keep children away from “addictive feeds,” and other states have introduced similar bills in 2025. There’s also a lawsuit from 42 attorneys general against Meta over its design choices. While Congress hasn’t done much, if anything, to regulate social media, states are clearly filling that void—or at least trying to. So what would make social media better, or better for you? Recently, a group of academic researchers organized by the Knight-Georgetown Institute put out a paper called Better Feeds: Algorithms that Put People First [https://kgi.georgetown.edu/research-and-commentary/better-feeds/]. They outline a series of recommendations that they argue would lead to better outcomes. Evan is joined by Alissa Cooper [https://alissacooper.com/], co-author of the paper and Executive Director of the Knight-Georgetown Institute. She previously spent over a decade at Cisco Systems, including in engineering roles. Her work at KGI has focused on how platforms can design algorithms that prioritize long-term user value rather than short-term engagement metrics.

When it comes to AI policy and AI governance, Washington is arguably sending mixed signals. Overregulation is a concern—but so is underregulation. Stakeholders across the political spectrum and business world have a lot of conflicting thoughts. More export controls on AI chips, or less? More energy production, but what about the climate? Less liability, or more? Safety testing, or not? “Prevent catastrophic risks,” or “don’t focus on unlikely doom scenarios”? While Washington looks unlikely to pass comprehensive AI legislation, states have tried, and failed. In a prior episode, we talked about SB 1047 [https://open.spotify.com/episode/3XIVyK63GrHRaeqlxLlaJ0?si=82ae6e1ae9e84d72], California’s ill-fated effort. Colorado recently saw its Democratic governor take the unusual step of delaying implementation of a new AI bill in his signing letter, citing concerns it would stifle the innovation the state wants to attract. But are we even asking the right questions? What problem are we trying to solve? Should we be less focused on whether AI will make a bioweapon, and more focused on how to make life easier and better for people in a world that looks very different from the one we inhabit today? Is safety versus innovation a distraction, a false binary? Is there a third option, a different way of thinking about how to govern AI? And if today’s governments aren’t fit to regulate AI, is private governance the way forward? Evan is joined by Andrew Freedman [https://fathom.org/], co-founder and Chief Strategy Officer of Fathom, a nonprofit building solutions society needs to thrive in an AI-driven world. Prior to Fathom, Andrew served as Colorado’s first Director of Marijuana Coordination, often referred to as the state’s “Cannabis Czar.” You can read Fathom’s proposal for AI governance here [https://fathom.org/resources/Fathom-on-Private-AI-Governance.pdf], and former FAI fellow Dean Ball’s writing on the topic here [https://www.hyperdimensional.co/].

In this week’s episode of The Dynamist, guest host Jon Askonas is joined by Katherine Boyle (General Partner at a16z) and Neil Chilson (AI Policy at the Abundance Institute) to tackle a critical yet often overlooked question: How is technology reshaping the American family? As tech giants like TikTok and Instagram come under scrutiny for their effects on children’s mental health, and remote work continues to redefine domestic life, the conversation around technology’s role in family dynamics has never been more urgent. Katherine shares insights from her recent keynote [https://www.youtube.com/live/bKMoCn7IYJU?si=zTFpZWmkijtmdXuQ] at the American Enterprise Institute, highlighting how the core objective of technological innovation, which she calls "American Dynamism," should be empowering the family rather than centralizing state control. Neil provides a fresh perspective on how decentralized systems and emergent technologies can enhance—not hinder—family autonomy and resilience. Amid rising debates about homeschooling, screen time, and the shift toward a remote-first lifestyle, the guests discuss whether tech-driven changes ultimately strengthen or undermine families as society's fundamental institution. Together, they explore the possibility of a new era in which technology revitalizes family autonomy, reshapes education, and reignites productive home economies.

During the Biden Administration, few figures in Washington sparked as much debate or as much spilled ink as Lina Khan. The Wall Street Journal published over 80 editorials criticizing her approach, while politically opposed tech titans like LinkedIn's Reid Hoffman and Tesla's Elon Musk called for her firing. Meanwhile, an unlikely coalition of progressive Democrats like Elizabeth Warren and populist Republicans like JD Vance rallied behind her vision of more aggressive antitrust enforcement. For many, her ambitious cases against Microsoft, Amazon, and Meta weren't merely legal challenges. They represented a fundamental break from the antitrust philosophy that had dominated for decades across administrations. These cases now transfer to Trump's FTC, creating a test of regulatory continuity at a time when Big Tech CEOs are looking to curry favor with the White House. In this conversation, Khan reflects on her legacy, discusses what critics may have misunderstood about her approach, and explores how the movement she catalyzed might evolve.