Scaling Theory
A podcast by Thibault Schrepel
Scaling Theory is a podcast dedicated to the power laws behind the growth of companies, technologies, and legal and living systems. The host, Dr. Thibault Schrepel...
All episodes
My guest today is Rory Linkletter [https://worldathletics.org/athletes/canada/rory-linkletter-14617088], a professional athlete who recently ran the Paris Olympic Marathon and the New York Marathon. Rory’s current personal best in the marathon is an impressive 2:08:01, which makes him the top Canadian marathon runner and marks the third-best Canadian marathon performance ever. This episode, as you might guess, is different from the others. I wanted to talk to Rory because he inspired me greatly when I went to Paris to watch the race. Most importantly, I am convinced that there is much we can learn from professional athletes, especially marathon runners. In our conversation, we explore how Rory scaled his mental and physical abilities. I draw many parallels with the academic and policy worlds, delving into what we can learn from his process, the power laws he has identified, and his relationship with science. Scaling Theory is not turning into a running podcast; true to its mission, it remains focused on exploring the scaling laws behind everything, whether economic, technical, or biological systems. Rory opens new doors on this last subject. I hope you enjoy our discussion.
My guest is Stefan Thurner, a Professor of Theoretical Physics and the President of the Complexity Science Hub in Vienna. Stefan has published over 240 scientific articles and was elected Austrian Scientist of the Year in 2017. He is also an external professor at the Santa Fe Institute. In our conversation, we first delve into the scaling laws of everything. We explore social, financial, biological, and economic dynamics: for example, how to make the economy more resilient by targeting a small number of key companies, how social bubbles form, the strength of networks of friends and foes in social contexts, and how the methodology of physics can help us understand other fields. I hope you enjoy our discussion. Find me on X at @ProfSchrepel [https://x.com/profschrepel?lang=en]. Also, be sure to subscribe.

***

References:
➝ Measuring social dynamics in a massive multiplayer online game [https://www.sciencedirect.com/science/article/pii/S0378873310000316] (2010)
➝ How women organize social networks different from men [https://www.nature.com/articles/srep01214] (2013)
➝ Multirelational Organization of Large-Scale Social Networks in an Online World [https://www.pnas.org/doi/10.1073/pnas.1004008107] (2010)
➝ What is the minimal systemic risk in financial exposure networks? [https://www.sciencedirect.com/science/article/pii/S0165188920300683] (2020)
➝ Scaling laws and persistence in human brain activity [https://www.sciencedirect.com/science/article/pii/S0378437103002796] (2003)
➝ New Forms of Collaboration Between the Social and Natural Sciences Could Become Necessary for Understanding Rapid Collective [https://pubmed.ncbi.nlm.nih.gov/38079519/] (2024)
➝ Quantifying firm‐level economic systemic risk from nation‐wide supply networks [https://www.nature.com/articles/s41598-022-11522-z] (2022)
➝ Fitting Power-laws in Empirical Data with Estimators that Work for All Exponents [https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0170920] (2017)
➝ Complex Systems: Physics Beyond Physics [https://iopscience.iop.org/article/10.1088/1361-6404/aa5a87/meta] (2017)
➝ Systemic Financial Risk: Agent-based Models to Understand the Leverage Cycle on National Scales and its Consequences [https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=433fb0508b3a498bf875cb15a6f05cca7517a6a9] (2011)
➝ Peer-review in a world with rational scientists: Toward selection of the average [https://arxiv.org/abs/1008.4324] (2010)
My guest today is Allison Stanger [https://www.middlebury.edu/college/people/allison-stanger]. Allison is a Middlebury Distinguished Endowed Professor; an Affiliate at the Berkman Klein Center for Internet and Society, Harvard University; the Co-Director (with Danielle Allen) of the GETTING-Plurality Research Network, Harvard University; a founding member of the Digital Humanism Initiative (Vienna); and an External Professor at the Santa Fe Institute. Allison’s next book, Who Elected Big Tech?, is under contract with Yale University Press. In this conversation, Allison and I delve into the political science surrounding large tech companies. We explore their effects on consumers and democracy, the interplay between capitalism and democracy, the dangers of fragmented regulation, what effective governance of social media entails, how to scale and measure it, potential areas of cooperation with China, and the relevance of public choice theory, complexity science, and power laws in shaping our understanding of technology. I hope you enjoy our discussion.

***

References
* Stanger, Allison. "The Real Cost of Surveillance Capitalism: Digital Humanism in the United States and Europe." Perspectives on Digital Humanism (2022): 33-40. [https://library.oapen.org/bitstream/handle/20.500.12657/51945/978-3-030-86144-5.pdf?sequence=1#page=45]
* Werthner, Hannes, et al. "Digital humanism: The time is now." Computer 56.1 (2023): 138-142. [https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10008968]
* Soros, George. "Fallibility, reflexivity, and the human uncertainty principle." Journal of Economic Methodology 20.4 (2013): 309-329.
My guest is Arvind Narayanan [https://www.cs.princeton.edu/~arvindn/], a Professor of Computer Science at Princeton University and the director of the Center for Information Technology Policy, also at Princeton. Arvind is renowned for his work on the societal impacts of digital technologies, including his textbook on fairness and machine learning, his online course on cryptocurrencies, and his research on data de-anonymization, dark patterns, and more. He has amassed over 30,000 citations on Google Scholar. In just a few days, in late September 2024, Arvind will release a book co-authored with Sayash Kapoor titled “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” Having had the privilege of reading an early version, I devote part of our conversation to some of the book’s key arguments. We also explore what Arvind calls AI scaling myths, the reality of artificial general intelligence, how governments can scale effective AI policies, the importance of transparency, the role that antitrust can, and cannot, play, the societal impacts of scaling automation, and more. I hope you enjoy our conversation. Find me on X at @ProfSchrepel [https://x.com/profschrepel?lang=en]. Also, be sure to subscribe.

***

References:
➝ AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference [https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil] (2024)
➝ AI scaling myths [https://www.aisnakeoil.com/p/ai-scaling-myths] (2024)
➝ AI existential risk probabilities are too unreliable to inform policy [https://www.aisnakeoil.com/p/ai-existential-risk-probabilities] (2024)
➝ Foundation Model Transparency Reports [https://arxiv.org/abs/2402.16268] (2024)
My guest today is Sara Hooker [https://www.sarahooker.me], VP of Research at Cohere, where she leads Cohere for AI, a non-profit research lab that seeks to solve complex machine learning problems with researchers from over 100 countries. Sara is the author of numerous research papers, some of which focus specifically on scaling theory in AI. She has been listed as one of AI’s top 13 innovators by Fortune. In our conversation, we first delve into the scaling laws behind foundation models. We explore what powers the scaling of AI systems and the limits of scaling laws. We then move on to discuss openness in AI, Cohere’s business strategy, the power of ecosystems, the importance of building multilingual LLMs, and the recent changes in access to data in the space. I hope you enjoy our conversation. Find me on X at @ProfSchrepel [https://x.com/profschrepel?lang=en]. Also, be sure to subscribe.

***

References:
➝ Sara Hooker, On the Limitations of Compute Thresholds as a Governance Strategy (2024) [https://arxiv.org/abs/2407.05694]
➝ Sara Hooker, The Hardware Lottery (2020) [https://arxiv.org/abs/2009.06489]
➝ Sara Hooker, Moving beyond “algorithmic bias is a data problem” (2021) [https://www.sciencedirect.com/science/article/pii/S2666389921000611]
➝ Longpre et al., Consent in Crisis: The Rapid Decline of the AI Data Commons (2024) [https://arxiv.org/abs/2407.14933]