
Human-Centered Security
Podcast by Voice+Code
Cybersecurity is complex. Its user experience doesn’t have to be. Heidi Trost interviews information security experts about how we can make it easier for people—and their organizations—to stay secure.
All episodes
56 episodes
You're a founder with a great cybersecurity product—but no one knows or cares. Or you're a marketer drowning in jargon (hey, customers hate acronyms, too), trying to figure out what works and what doesn't. Gianna Whitver, co-founder of the Cybersecurity Marketing Society, breaks down what the cybersecurity industry is getting wrong—and right—about marketing.

In this episode, we talk about:

* Cyber marketing is hard (but you knew that already). It requires deep product knowledge, empathy for stressed buyers, and clear, no-FUD messaging.
* Building authentic, value-driven communities leads to stronger cybersecurity marketing impact.
* Don't copy the marketing strategies of big enterprises. Instead, focus on clarity, founder stories, and product-market fit.
* Founder-led marketing works. Early-stage founders can break through the noise by sharing personal stories.
* Think twice before listening to the advice of "influencer" marketers. That advice is often overly generic. Or you're following the advice of marketers marketing to marketers (try saying that ten times fast). In other words, it probably won't apply to cybersecurity.

Gianna Whitver is the co-founder and CEO of the Cybersecurity Marketing Society [https://www.cybersecuritymarketingsociety.com/], a community for marketers in cybersecurity to connect and share insights. She also co-hosts the Breaking Through in Cybersecurity Marketing podcast and is the founder of LeaseHoney, a place for beekeepers to find land.

Users, threat actors, and the system design all influence—and are influenced by—one another. To design safer systems, we first need to understand the players who operate within those systems. Kelly Shortridge and Josiah Dykstra exemplify this human-centered approach in their work.

In this episode, we talk about:

* The vital role of human factors in cyber resilience—how Josiah and Kelly apply a behavioral-economics mindset every day to design safer, more adaptable systems.
* Key cognitive biases that undermine incident response (like action bias and opportunity costs) and simple heuristics to counter them.
* The "sludge" strategy: deliberately introducing friction into attacker workflows to increase their time, effort, and financial costs—as Kelly says, "disrupt their economics." (A minimal sketch of one such tactic follows this episode's notes.)
* Why moving from a security culture of shame and blame to one of open learning and continuous improvement is essential for true cybersecurity resilience.

Kelly Shortridge is VP, Security Products at Fastly and formerly VP of Product Management and Product Strategy at Capsule8. She is the author of Security Chaos Engineering: Sustaining Resilience in Software and Systems. Josiah Dykstra is the owner of Designer Security, a human-centered security advocate, a cybersecurity researcher, and former Director of Strategic Initiatives at Trail of Bits. He also worked at the NSA as Technical Director, Critical Networks and Systems. Josiah is the author of Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us.

During this episode, we reference:

* Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, "Sludge for Good: Slowing and Imposing Costs on Cyber Attackers," arXiv preprint arXiv:2211.16626 (2022).
* Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, "Opportunity Cost of Action Bias in Cybersecurity Incident Response," Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (2022): 1116-1120.
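
The "sludge" idea is concrete enough to sketch. Below is a minimal, hypothetical Python example (our illustration, not code from the Dykstra and Shortridge paper) of one common sludge tactic: an escalating "tarpit" delay on repeated failed logins. It multiplies the time cost of credential stuffing while a legitimate user who mistypes a password once barely notices. All names and thresholds (`sludge_delay`, `attempt_login`, the 30-second cap) are invented for illustration.

```python
import time
from collections import defaultdict

# Failed-attempt counts per source (IP, username, etc.). In production this
# would live in a shared store such as Redis, not process memory.
failed_attempts = defaultdict(int)

MAX_DELAY_SECONDS = 30  # cap so responses stay bounded


def sludge_delay(source: str) -> float:
    """Escalating delay: 0s, then 1s, 2s, 4s, ... capped at MAX_DELAY_SECONDS."""
    failures = failed_attempts[source]
    if failures == 0:
        return 0.0
    return float(min(2 ** (failures - 1), MAX_DELAY_SECONDS))


def check_credentials(username: str, password: str) -> bool:
    # Stub for illustration; a real system would verify against a hashed store.
    return False


def attempt_login(source: str, username: str, password: str) -> bool:
    time.sleep(sludge_delay(source))  # impose the time cost before answering
    if check_credentials(username, password):
        failed_attempts.pop(source, None)  # success resets the tarpit
        return True
    failed_attempts[source] += 1  # each failure doubles the next delay
    return False
```

The point of the pattern is exactly the one the episode makes: the attacker's per-guess cost grows, so the economics of the attack degrade, while the honest user's experience is nearly unchanged.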

Imagine a world where product teams collaborate with security teams. Where product designers can shadow their security peers. A place where security team members believe communication is one of the most important skillsets they have. These are key attributes of human-centered security—the kind of dynamics Jordan Girman and Mike Kosak are fostering at LastPass.

In this episode, we talk about:

* What cross-disciplinary collaboration looks like at LastPass (for example, a product designer is shadowing the security team).
* A set of principles for designing for usable security and privacy.
* Why intentional friction might be counterintuitive to designers but, used carefully, is critical to designing for security. (A brief sketch of this pattern follows this episode's notes.)
* When it comes to improving security outcomes, the words you use matter. Mike explains how the LastPass Threat Intelligence team thinks about communicating what they learn to a variety of audiences.
* How to build a threat intelligence program within your organization, even if you have limited resources.

Jordan Girman is the VP of User Experience at LastPass [https://www.lastpass.com]. Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including "Setting Up a Threat Intelligence Program From Scratch." [https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language]
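
"Intentional friction" is easy to picture in code. The sketch below is a hypothetical Python illustration (not anything from LastPass) of the pattern: a typed confirmation before an irreversible action, trading a few seconds of user effort for protection against reflexive or spoofed clicks. The function `delete_vault` and the prompt wording are invented for this example.

```python
def delete_vault(vault_name: str) -> None:
    # Friction step: the user must retype the exact vault name. This slows
    # everyone down slightly, but makes a reflexive "yes" click far less
    # likely to destroy data.
    typed = input(f'Type "{vault_name}" to permanently delete this vault: ')
    if typed != vault_name:
        print("Name did not match; nothing was deleted.")
        return
    print(f"Vault {vault_name!r} deleted.")  # real deletion would happen here


if __name__ == "__main__":
    delete_vault("personal-passwords")
```

The design choice matches the episode's framing: friction is a cost, so it should be spent only where the action is high-risk and irreversible, not sprinkled everywhere.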

Where are security tools failing security teams? What are security teams looking for when they visit a security vendor's marketing website? Paul Robinson, security expert and founder of Tempus Network, says, "Over-promising and under-delivering is a major factor in these tools. The tool can look great in a demo—proof of concepts are great, but often the security vendor is just putting their best foot forward. It's not really the reality of the situation."

Paul's advice for how security vendors can do better:

* Start by admitting security isn't just a switch you flip—it's a journey.
* Security teams aren't fooled by glitz and glamour on your marketing website. They want to see how you addressed real problems.
* Incredible customer service can make a small, scrappy cybersecurity product stand out from larger, slower-moving vendors.
* Cybersecurity vendors need to get onboarding right (it's a make-or-break aspect of the user experience). There are more variables than you think—not only technology but also getting buy-in from employees, leadership, and other stakeholders.
* Think about the user experience not only of the person using the security product, but also of the people at the organization who will be impacted by it.

Looking for a cybersecurity-related movie that is just a tad too plausible? Paul recommends Leave the World Behind on Netflix.

When we collaborate with people, we build trust over time. In many ways, this relationship building is similar to how we work with tools that leverage AI. As usable security and privacy researcher Neele Roch found, "on the one hand, when you ask the [security] experts directly, they are very rational and they explain that AI is a tool. AI is based on algorithms and it's mathematical. And while that is true, when you ask them about how they're building trust or how they're granting autonomy and how that changes over time, they have this really strong anthropomorphization of AI. They describe the trust building relationship as if it were, for example, a new employee."

Neele is a doctoral student at the Professorship for Security, Privacy and Society at ETH Zurich. Neele (and co-authors Hannah Sievers, Lorin Schöni, and Verena Zimmermann) recently published a paper, "Navigating Autonomy: Unveiling Security Experts' Perspectives on Augmented Intelligence in Cybersecurity," presented at the 2024 Symposium on Usable Privacy and Security. [https://www.usenix.org/conference/soups2024/presentation/roch]

In this episode, we talk to Neele about:

* How security experts' risk-benefit assessments drive the level of AI autonomy they're comfortable with.
* How experts initially view AI: the tension between AI-as-tool and AI-as-"teammate."
* The importance of recalibrating trust after AI errors—and how good system design can help users recover from errors without losing their trust in the system.
* Ensuring AI-driven cybersecurity tools provide just the right amount of transparency and control.
* Why enabling security practitioners to identify, correct, and learn from AI errors is critical for sustained engagement. (A minimal sketch of such a feedback loop follows this episode's notes.)

Roch, Neele, Hannah Sievers, Lorin Schöni, and Verena Zimmermann. "Navigating Autonomy: Unveiling Security Experts' Perspectives on Augmented Intelligence in Cybersecurity." In Twentieth Symposium on Usable Privacy and Security (SOUPS 2024), pp. 41-60. 2024.
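
That last point lends itself to a sketch. Below is a minimal, hypothetical Python example (our illustration; it does not come from the Roch et al. paper) of a human-in-the-loop feedback loop: the AI suggests a triage verdict along with its confidence (transparency), the analyst keeps final control, and overrides are logged so errors can be identified and learned from. The types `TriageSuggestion` and `FeedbackLog` and the function `review` are invented for this example.

```python
from dataclasses import dataclass, field


@dataclass
class TriageSuggestion:
    alert_id: str
    verdict: str       # e.g., "benign" or "malicious"
    confidence: float  # 0.0-1.0, shown to the analyst (transparency)


@dataclass
class FeedbackLog:
    overrides: list = field(default_factory=list)

    def record(self, alert_id: str, ai_verdict: str, analyst_verdict: str) -> None:
        # Each correction becomes review/retraining data for recalibration.
        self.overrides.append((alert_id, ai_verdict, analyst_verdict))


def review(suggestion: TriageSuggestion, analyst_verdict: str, log: FeedbackLog) -> str:
    # Control: the analyst's decision always wins; the AI only suggests.
    if analyst_verdict != suggestion.verdict:
        log.record(suggestion.alert_id, suggestion.verdict, analyst_verdict)
    return analyst_verdict


log = FeedbackLog()
final = review(TriageSuggestion("alert-42", "benign", 0.61), "malicious", log)
print(final, log.overrides)  # malicious [('alert-42', 'benign', 'malicious')]
```

The intent mirrors the episode's argument: when users can see, correct, and trace AI mistakes, an error becomes a recalibration event rather than a reason to abandon the tool.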