
About Secure Talk Podcast
Secure Talk reviews the latest threats, tips, and trends in security, innovation, and compliance. Host Justin Beals interviews leading privacy, security, and technology executives to discuss best practices related to IT security, data protection, and compliance. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” published in the International Journal of Emerging Technologies in Learning (iJET). Justin earned a BA from Fort Lewis College.
When Federal Agents Ignore Court Orders: What Happens to Democracy? | Secure Talk with Claire Finkelstein
What happens when federal law enforcement refuses to follow court orders? In Minneapolis, ICE agents denied state investigators access to crime scenes despite court-issued warrants, a breakdown that national security experts had been warning about for months.

Dr. Claire Finkelstein, Professor of Law at the University of Pennsylvania and Director of the Center for Ethics and the Rule of Law, saw this coming. In October 2024, she ran a tabletop exercise with over 30 retired military leaders simulating exactly this scenario: federal forces confronting state National Guard during civil unrest. The simulation escalated to violence faster than anyone expected, with few off-ramps once momentum built. Now that simulation is playing out in real time.

Dr. Finkelstein has been on the legal front lines, representing 155 members of Congress before the Supreme Court. When the Court ruled the administration couldn't use National Guard troops as they intended, ICE agents surged instead, creating the confrontation we're seeing today.

The questions are urgent: Can states prosecute federal agents who commit crimes in their jurisdiction? What happens when federal authorities claim immunity? How do soldiers follow orders when they can't trust that those orders are lawful? The Supreme Court's immunity decision has made these questions harder to answer.

This conversation explores what happens when the rule of law meets political will, and what remains when the institutions designed to protect democracy face their greatest test.

#CyberSecurity #NationalSecurity #Democracy #RuleOfLaw #Minnesota #Minneapolis

Resources: Finkelstein, Claire. (2026, January 21). We ran high-level US civil war simulations. Minnesota is exactly how they start. The Guardian. https://www.theguardian.com/commentisfree/2026/jan/21/ice-minnesota-trump
Shared Wisdom: Why AI Should Enhance Human Judgment, Not Replace It | Secure Talk with Alex Pentland
Most AI discourse swings between paradise and doom, but the real question is how we architect these systems to enhance human understanding rather than replace decision-making. MIT Professor Alex "Sandy" Pentland reveals why treating AI as an information tool instead of an authority is critical for cybersecurity teams, business leaders, and anyone navigating the intersection of technology and culture.

The math is stark: 90% of social media users are represented by only 3% of tweets. We're making decisions based on algorithmic extremes, not community wisdom. Pentland shows how Taiwan used the Polis platform to restore government trust from 7% to 70% by eliminating follower counts and visualizing the full spectrum of opinion, proving that most people agree more than they think.

For security professionals, the implications are profound: culture drives security outcomes more than controls. The stories your team shares about breaches, vulnerabilities, and response protocols create the shared wisdom that determines whether you're actually secure. AI can help synthesize context and surface patterns across distributed organizations, but it cannot replace the human judgment needed when edge cases and outliers occur.

Drawing parallels to the Enlightenment, when letter-writing networks sparked unprecedented collaboration among scholars, Pentland argues we stand at a similar inflection point. We have tools that let us share information at unprecedented scale, yet our digital systems amplify loud voices and create echo chambers instead of fostering collective wisdom. His book "Shared Wisdom" offers a pragmatic framework for cultural evolution in the age of AI, recognizing that we'll take steps forward, make mistakes, and need to choose our direction deliberately.
Key insights include understanding AI as a statistical repackaging of human stories, recognizing how four waves of AI development have each failed in predictable ways, and learning why loyal agents (systems legally bound to serve your interests, like doctors and lawyers) represent the future of trustworthy AI. Pentland also explains why audit trails and liability matter more than premature regulation, and how communities need local governance that's interoperable but not uniform.

Alex "Sandy" Pentland is a Stanford HAI Fellow, MIT Toshiba Professor, and member of the US National Academy of Engineering. Named one of the "100 People to Watch This Century" by Newsweek and one of the "seven most powerful data scientists in the world" by Forbes, his work established authentication standards for digital networks and contributed to pioneering EU privacy law.

Episode Resources: Pentland, Alex. (2025). Shared Wisdom: Cultural Evolution in the Age of AI. The MIT Press. https://mitpress.mit.edu/9780262050999/shared-wisdom/
The 2026 Planning Episode: 5 Key Security Imperatives
While most organizations treat security as a cost center, a select group is using it to win enterprise deals, open new markets, and outpace competitors. The difference? They've stopped asking "how much does security cost?" and started asking "how much value does security create?" This strategic edition synthesizes lessons from security leaders at Walmart, PayPal, Postman, and the defense industrial base to reveal the playbook for 2026: treating security as a business function that enables velocity, builds trust, and creates competitive moats.

Five Strategic Imperatives for 2026:

1. Architect for the AI Identity Explosion. When AI agents access your CRM, email, and databases on behalf of humans, who's accountable? Walmart's 10,000+ developers faced this at scale. Learn how to govern probabilistic, non-deterministic systems before deployment breaks.

2. Turn Supply Chain Security Into Competitive Advantage. CMMC enforcement is here: Raytheon paid $8.4M, Penn State $1.25M. But smart contractors are leading with certification to win contracts. See how quantitative security standards are reshaping business relationships between primes and subs.

3. Extract Intelligence From Your Own Logs. One organization prevented $3M in fraud using internal threat intelligence. Learn why focused AI models that analyze your specific environment outperform generic vendor feeds.

4. Make Security Your Primary Differentiator. When SOC 2 Type II certification wins you three enterprise customers worth $2M ARR, security spending looks very different to the CFO. Discover how to position security as the reason customers choose you.

5. Build Culture, Not Tool Stacks. The oil & gas industry made safety everyone's responsibility through culture, not technology. Apply the same principles to solve cybersecurity's 65% turnover crisis.
Expert Insights From: Rishi Bhargava (Descope) | Tobias Yergin (Walmart) | Bob Kolasky (Exiger) | Chris Wysopal (Veracode) | Bill Anderson (Mattermost) | Satyam Patel (Kandji) | Sam Chehab (Postman) | Brian Wagner | Dimitry Shvartsman (PayPal)

The Meta-Pattern: Organizations winning in 2026 measure security in business terms: revenue enabled, customers won, time to market reduced. They're not the "department of no" blocking progress; they're the team enabling fast, safe movement.

🎙️ SecureTalk: Strategic conversations with security leaders, hosted by Justin Beals
🔔 Subscribe for insights on AI security, CMMC, threat intelligence & security ROI
Secure Talk Special Episode: "Building Secure Societies in the Age of Division: The Seven Lessons for Humanity Heading Into 2026"
"In 20 years, we transformed food allergy awareness from nonexistent to universal, no law required. What if we could do the same for data security and AI governance?" This special episode reveals how grassroots cultural shifts create lasting change, and why 2026 might be the year cybersecurity professionals become architects of something bigger than defenses.

We've distilled 2025's conversations with experts from Harvard, MIT, NYU, Brown, and the AI development front lines into seven actionable lessons that reframe security from a technical problem to a human opportunity: from understanding the 800 billion AI agents already in our systems, to recognizing why your most valuable threat intelligence is already in your logs, to building the communities that make external defenses less necessary.

Here's what successful security leaders are realizing: the organizations thriving in 2026 aren't just protecting systems; they're creating conditions where humans and AI can flourish together.

THE SEVEN LESSONS:
• Social division is our greatest vulnerability (and connection is our strength)
• Technology won't save us from ourselves (but we can)
• Real change happens through grassroots cultural shifts
• AI demands fundamentally different thinking (here's how)
• Our values can blind us (when to trust them, when not to)
• The weakest links are often invisible (where to look)
• Context matters more than technology (your advantage is closer than you think)

FEATURING INSIGHTS FROM: Dr. Claire Robertson (NYU) | Greg Epstein (Harvard/MIT) | Dr. De Kai | Rishi Bhargava (Descope) | Tobias Yergin (Walmart AI) | Prof. Steven Sloman (Brown) | Lars Kruse | Brian Wagner | Dr. Aram Sinnreich | Jesse Gilbert

PERFECT FOR: Security leaders building resilient organizations | Professionals navigating AI transformation | Anyone ready to move beyond purely technical solutions

🔗 Strike Graph: https://strikegraph.com

Which lesson will change how you approach security in 2026?
#Cybersecurity #AIGovernance #SecurityLeadership #CyberResilience #AIEthics #CISO #ThreatIntelligence #FutureOfWork
Building a Thriving Future: AI Ethics & Security in Virtual Worlds | Dr. Paola Cecchi-Dimeglio
The mistakes we made building the internet don't have to be repeated in the metaverse, if we act now. Join SecureTalk host Justin Beals for an essential conversation with Dr. Paola Cecchi-Dimeglio about building secure, ethical virtual worlds. Dr. Cecchi-Dimeglio brings 25 years of experience advising governments, Fortune 500 companies, and global institutions on AI ethics and technology governance. Her new book "Building a Thriving Future: Metaverse and Multiverse" (MIT Press, 2025) provides frameworks for building virtual spaces that serve humanity rather than exploit it.

CORE THEMES:
• Security by design vs. security bolted on after problems emerge
• How biases get encoded into AI systems, and prevention strategies
• The critical role of "human in the loop" for AI oversight
• Why good regulation creates business stability
• Digital identity systems for global inclusion
• Authentication and verification in virtual spaces
• Cross-border legal frameworks for technology governance

REAL-WORLD IMPACT: Over 1 billion people globally lack legal identification. Virtual worlds could solve this through blockchain-based digital identity, or create new exclusions if built poorly. The standards we set now for authentication, verification, and identity control will determine whether these spaces become tools for human flourishing or mechanisms for surveillance.
WHY THIS MATTERS NOW:
* Virtual worlds already exist; gaming platforms host billions of users
* AI is accelerating everything, including security vulnerabilities
* Deepfake technology is improving faster than detection methods
* The decisions made today will shape digital society for decades

SURPRISING INSIGHTS:
→ Children currently detect deepfakes better than adults (but not for long)
→ Major consulting firms have sold governments expensive reports full of AI errors
→ Voice recognition systems historically failed on non-Western accents due to training data bias
→ Email autocorrect defaults "Paola" to "Paolo" because datasets contained more men than women

ABOUT THE GUEST: Dr. Paola Cecchi-Dimeglio is a globally recognized expert in AI, big data, and behavioral science. She holds dual appointments at Harvard Law School and the Kennedy School of Government, co-chairs the UN ITU Global Initiative on AI and Virtual Worlds, and has authored 70+ peer-reviewed publications. Her work advises the World Bank, the European Commission, and Fortune 500 executives on ethical AI implementation.

THE OPTIMISTIC VISION: Virtual worlds can tap talent anywhere, breaking geographic barriers. They can connect separated families, provide legal identity to excluded populations, and create opportunities we can't yet imagine, but only if we build them with security, ethics, and human values as foundational requirements.

ABOUT SECURETALK: SecureTalk ranks in the top 2.5% of podcasts globally, making cybersecurity and compliance topics accessible to business leaders. Hosted by Justin Beals, CEO of Strike Graph and former network security engineer.

Perfect for: Security professionals, technology leaders, business executives, policy makers, and anyone concerned about building ethical AI systems and secure virtual worlds.

📚 "Building a Thriving Future: Metaverse and Multiverse" by Dr. Paola Cecchi-Dimeglio (MIT Press, 2025)

#AIEthics #Cybersecurity #VirtualWorlds #TechnologyGovernance #MetaverseSecurity #DigitalEthics #AIRegulation #SecureByDesign