
The Vernon Richard Show

Podcast by Vernon Richards and Richard Bradshaw

English

Technology and science

Try free for 14 days

99 kr / month after the trial period. Cancel anytime.

  • 20 hours of audiobooks per month
  • Exclusive podcasts
  • Free podcasts
Try free

Read more about The Vernon Richard Show

Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering, and life in the world of software development. Plus our own personal journeys navigating our careers and lives.

All episodes

35 episodes


“Testing isn't a specialism”? You keep using that word…

Vernon got triggered. A bold LinkedIn post declared "PSA: testing is not a specialism. Thank you for your time." Mic drop, walk off stage, no explanation. And it wasn't just one person. So Vernon did what any self-respecting tester would do: he asked why. And didn't get an answer. In this episode, Vernon and Richard dig into why some developers seem to find the idea of testing as a specialism genuinely laughable, what happens when you confuse a skill with a role, and why, in a world where everyone's building agentic workflows, nobody seems to notice that they're writing skills.md files full of testing knowledge. They also explore how AI is already reshaping what's expected of every role on a software team, why "knowing what good looks like" has never mattered more, and what skill stacking means for testers who want to stay ahead of the curve.

Chapters
00:00 - Intro
01:17 - Vern's welcome rant
01:42 - The topic: Is testing a specialism?
07:17 - Rich gets a chance to speak 😅
07:29 - Good Testers vs Bad Testers
11:15 - Aren't we all developers now anyway?
13:06 - There's testing and there's Testing
15:01 - Testers communicating their value
18:34 - If testing isn't a specialism, where does that leave agents and skills?
20:30 - The lads cook up a new way to reframe the situation
23:53 - Who should do testing?
32:27 - Vern believes these folks are saying one thing and doing another
35:02 - Rich wants to know what happens to the 0.5x Testers?
40:02 - Skill Stacking
45:36 - The thing most people haven't done but need to
50:04 - Are we all Domain Translators now?
54:23 - Wrap up

Links to stuff we mentioned during the pod:
* 01:42 - Paul's [https://www.linkedin.com/in/paul-hammond-bb5b78251/] interesting LinkedIn post [https://www.linkedin.com/posts/paul-hammond-bb5b78251_psa-testing-is-not-a-specialism-thank-activity-7442285958332342272-Mtri] that triggered Vernon
* 03:34 - The question I asked on LinkedIn [https://www.linkedin.com/posts/vernonrichards_what-is-it-that-gets-%F0%9D%9A%99%F0%9D%9A%8E%F0%9D%9A%98%F0%9D%9A%99%F0%9D%9A%95%F0%9D%9A%8E-developers-activity-7356770639174635520-WtpP] about why "people" get so triggered about testing as a role
* 05:33 - Greg's [https://www.linkedin.com/in/gregmunt/] interesting post [https://www.linkedin.com/posts/gregmunt_my-observations-on-test-management-are-below-activity-7376902441482481665-llVt] about test management and levels of competence
* 06:31 - Jade Rubick's [https://www.linkedin.com/in/jaderubick/] post questioning whether QA should exist [https://www.rubick.com/should-qa-exist/] at all!
* 10:46 - Angie Jones (the only thing Angie is terrible at is being terrible!)
  * Angie's blog [https://angiejones.tech/]
  * Angie's LinkedIn [https://www.linkedin.com/in/angiejones/]
* 14:11 - The episode When Everything Sounds Like Testing… How Do You Explain What You Really Do? [https://thevernonrichardshow.com/29]
* 14:46 - Funnily enough, Vernon is giving this talk at Agile Testing Days 2026 [https://agiletestingdays.com/]!
  * It's called If Testers Had a Dragon's Den Pitch, Would Anyone Invest? [https://agiletestingdays.com/2026/session/if-testers-had-a-dragons-den-pitch-would-anyone-invest/]
  * DM Vernon if you would like a discount code for the conference!
* 15:07 - The very cool GreaTest Quality [https://www.greatestquality.ch/] Conference
* 27:25 - Jason Bourne [https://en.wikipedia.org/wiki/Jason_Bourne], a fictional character from some of the best movies ever (especially the first two ^VR)
* 28:29 - Anne-Marie Charrett
  * The excellent book in digital [https://leanpub.com/qc] or physical [https://www.amazon.com/dp/B0FGYF6DG6] versions
  * Anne-Marie's website [https://www.annemariecharrett.com/] — be sure to check out the blog, which is no longer pay-walled 🥳
  * Anne-Marie's LinkedIn [https://www.linkedin.com/in/testingtimes/]
* 28:29 - James Bach
  * The Test Jumper [https://www.satisfice.com/blog/archives/1372] description we refer to
  * James' blog [https://www.satisfice.com/blog]
  * James' LinkedIn [https://www.linkedin.com/in/james-bach-6188a811/]
* 51:02 - Nate B. Jones (shout out Martin [https://www.linkedin.com/in/whoisvds4/] for putting Vernon on to it 🙏🏾)
  * The post [https://natesnewsletter.substack.com/p/openai-is-charging-20kmonth-for-an] about developer roles mentioned in the episode
  * Nate's newsletter [https://natesnewsletter.substack.com/]
  * Nate's website [https://www.natebjones.com/]
  * Nate's YouTube [https://www.youtube.com/@NateBJones]
  * Nate's LinkedIn [https://www.linkedin.com/in/natebjones/]
* 56:52 - Vernon did write Vernon Version 3 - Now with added AI! [https://yeahbutdoesitwork.substack.com/p/vernon-version-3-now-with-added-ai], all about the skills he thinks he needs to develop going forwards
* Please like, subscribe, and share 😊 ^VR

Got thoughts on whether testing is a specialism? We genuinely want to hear from you. Vernon still doesn't have an answer to his question. Run it past a friendly developer and let us know what they say. Drop us a message on LinkedIn [https://www.linkedin.com/company/the-vernon-richard-show/] and if Paul or Greg are listening, the invitation to come on the pod is very much open.

27 April 2026 - 58 min

6 AI Tool Ideas That Will Transform How You Test

In this episode, Richard and Vernon explore the evolving concept of automation in quality, especially in the context of AI and Gen AI. They discuss how new technologies are blurring the lines between testing and quality, and what this means for the future of software development and testing practices.

Chapters
00:00 - Intro
00:52 - Welcome and weekly catch-up
01:11 - Vern's deep dive into the AI rabbit hole
02:39 - Rich's quiet(er) work week, new threads, and dentists
04:15 - Richard buys a domain and we start the pod proper
06:09 - Tool idea #1: Using an LLM to evaluate user stories and acceptance criteria automatically
07:35 - Is analysing a story "testing" or "quality"? The ISTQB static analysis debate
10:27 - Vernon's diabetes analogy: AI is forcing us to finally do what we always said we should
12:19 - Better stories = better testing: how quality work amplifies everything downstream
13:11 - Tool idea #2: "If we made this change, what areas of the system would be impacted?"
14:23 - Distilling years of system knowledge into 5–10 questions an agent could ask
18:37 - Tool idea #3: The PR Analyser — summarising code changes through a testing and quality lens
21:45 - Vernon's "1 unit of effort, 5 units of testing" — the quality multiplier effect
23:29 - Comparing story analysis to actual implementation: where did understanding diverge?
24:43 - Tool idea #4: Dynamic test selection — cherry-picking the right tests to run first
27:05 - Tool idea #5: An agent that analyses failed builds and attempts to fix them
27:28 - Why Richard's first attempt always "fixed" the test instead of the code (and what was missing)
29:21 - Dan's AI agents: one thinking partner, one employee monitoring production
32:42 - The documentation goldmine: why AI-generated RCA notes might matter more than the fix
33:39 - Tool idea #6: A holistic quality dashboard pulling insights across stories, code, tests, and process
36:43 - John Cutler on context: it's not data you pass around — it's formed through interaction
40:43 - More options than ever: whether it's testing, quality, or static analysis — you can do it differently now
41:56 - The real skill: spotting the opportunity to make yourself more effective
42:30 - GeePaw Hill's Lump of Code Fallacy and why task analysis matters
43:34 - Why Richard got into automation: efficiency, not because he was told to
45:03 - Vernon's big question: in a world where agents can do everything, what's your performance review about?
46:52 - Context, craft, and product knowledge can't be delegated to tools yet
48:29 - Call to action: What are you building? What tools couldn't you build before that you can now?
49:29 - Upcoming: Test Automation Days and PeersCon Live in Nottingham

Links to stuff we mentioned during the pod:
* 04:15 - Automation in Quality
  * Richard bought the automationinquality.com [http://automationinquality.com/] domain! The concept explored throughout this episode.
* 05:28 - Kalpesh Sodha [https://www.linkedin.com/in/kalpesh-sodha-a900062a/] aka Kalps
  * Shout out to Richard's colleague who played devil's advocate on the "is it testing or quality?" question
* 07:31 - Static analysis [https://en.wikipedia.org/wiki/Static_program_analysis]
* 29:44 - Dan "The Agile Guy" Elliott
  * His post about how he uses AI agents as a "thinking partner" and an "employee" [https://www.agileguy.ca/paisley-and-ocasia/] with different missions and capabilities
  * Dan's website [https://www.agileguy.ca/]
  * Dan's LinkedIn [https://www.linkedin.com/in/agileguy/]
* 36:52 - John Cutler
  * John Cutler's piece on how context isn't just data you move around [https://cutlefish.substack.com/p/tbm-406-seeing-everything-understanding] — it's formed through interaction between people
  * John's newsletter [https://cutlefish.substack.com/]
  * John's LinkedIn [https://www.linkedin.com/in/johnpcutler/]
* 42:37 - Rob Sabourin
  * My quick Perplexity search [https://www.perplexity.ai/search/201e187a-818e-4c66-b1d6-aa58daeacc9d] for Rob's public material on Task Analysis
  * Rob's LinkedIn [https://www.linkedin.com/in/robsabamibug/]
* 42:45 - Michael "GeePaw" Hill
  * His Lump of Code Fallacy [https://www.geepawhill.org/2018/04/14/tdd-the-lump-of-coding-fallacy/]. The idea that coding isn't just one activity — there are three flavours of work that occur when you code
  * Michael's website [https://www.geepawhill.org/]
  * Michael's Mastodon [https://mastodon.social/@GeePawHill]
* 49:35 - Test Automation Days
  * Richard will be keynoting at Test Automation Days [https://www.testautomationdays.com/]
  * Make sure you say hi if you're there
* 50:10 - PeersCon
  * Vernon and Richard will be recording a live episode at PeersCon [https://testingpeerscon.com/]!
  * If you're there, come say hi and grab a mic 🎙️

2 March 2026 - 51 min

Six Principles of Automation in Testing: Still Relevant in 2026?

In this episode, Richard Bradshaw and Vernon discuss the relevance and application of the six principles of automation in testing in the context of AI advancements. They explore how these principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.

Chapters
00:00 - Intro
01:47 - Welcome (Richard is not at home 👀)
02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
04:58 - What is Automation in Testing (AiT)?
06:49 - Principle 1: Supporting Testing over Replicating Testing
07:01 - Vernon's take: testing is a performance, not a click sequence
08:22 - What the industry promised vs what automation actually does
08:49 - The serendipity you lose when a human isn't testing
09:59 - Agentic testing: observing more, but still not replicating humans
10:56 - The danger of anthropomorphising AI output
12:10 - LLMs always give an answer — and that's the problem
13:03 - Principle 2: Testability over Automatability
13:14 - Vernon's take: narrow vs broad — operate, control, observe
14:38 - Making apps automatable for the robots but not the humans
15:37 - The shiniest framework in a broken testing context
16:40 - If it's testable, it's probably automatable — but not vice versa
16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
17:46 - The problem has always been testing, not automation
19:57 - Principle 3: Testing Expertise over Coding Expertise
20:18 - Vernon's take: testing expertise lets you leverage the tools
21:47 - The spoonfed tests problem: great at automating, lost without guidance
22:36 - The "code school" era: everyone told to learn to code
22:51 - Coding agents have changed the maths on this
26:01 - The new nuance: test design and framework knowledge over writing the code
28:44 - Evaluating code is a testing problem — and LLMs can help you do it
30:43 - Are agents as good as a junior developer?
31:42 - Outcome Engineering (O16G) and the race to write the AI principles
32:13 - Simon Wardley: we're in the wild west again
33:22 - Principle 4: Problems over Tools
33:29 - Vernon's take: the hammer and the nail
34:07 - Don't let your problems be shaped by the framework you have
34:36 - New automation opportunities beyond testing: PRs, logs, story review
35:30 - Principle 5: Risk over Coverage
36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
38:00 - The one test case, one automated test fallacy
39:04 - Where in the system is the risk? Do you even know your layers?
39:49 - Probabilistic vs non-deterministic: refining the language around AI
40:53 - Coverage as intentional vs coverage as a number someone picked once
43:15 - Principle 6: Observability over Understanding
43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
44:12 - What the principle was actually about: making automation results observable
47:00 - Does this principle belong in testing, or has it grown into quality?
49:00 - So... what's missing?
50:00 - The four pillars: Strategy, Creation, Usage, and Education
57:05 - Automation in Quality: the bigger opportunity
01:01:00 - Wrap up + Vern's Lead Dev panel

Links to stuff we mentioned during the pod:
* 04:00 - Automation in Testing (AiT)
  * The principles live at automationintesting.com [https://automationintesting.com]
  * AiT was co-created by Richard Bradshaw and Mark Winteringham [https://www.mwtestconsultancy.co.uk/]
* 04:00 - Test Automation Days
  * The conference where Richard is giving his keynote — testautomationdays.com [https://testautomationdays.com]
* 24:48 - James Thomas
  * The "kid in a candy shop" himself — James's blog [https://qahiccupps.blogspot.com/] and LinkedIn [https://www.linkedin.com/in/james-thomas-840aa11a/]
* 31:42 - Outcome Engineering [https://o16g.com/] (O16G)
  * The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
* 32:13 - Simon Wardley
  * If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps [https://www.swardleymaps.com/] and situational awareness in strategy is essential reading
  * Simon's LinkedIn [https://www.linkedin.com/in/simonwardley/]
* 43:30 - Abby Bangser
  * Vern's go-to person for all things observability. Abby's LinkedIn [https://www.linkedin.com/in/abbeynathanson/]
* 46:04 - Noah Sussman
  * As it turns out, the quote Vern's referencing — advanced monitoring as "indistinguishable from testing" — was not by Noah! It was Ed Keyes at GTAC 2007 [https://www.perplexity.ai/search/57c874ed-d667-4a50-8c6f-c39491a9d84d].
  * Noah's blog [https://infiniteundo.com/] and LinkedIn [https://www.linkedin.com/in/noahsussman/]
* 59:30 - Angie Jones
  * Vern's been reading Angie's work on testing AI-enabled applications here [https://angiejones.tech/blog/] and here [https://engineering.block.xyz/blog/].
  * Angie's website [https://angiejones.tech/] and LinkedIn [https://www.linkedin.com/in/angiejones/]
* 01:01:30 - The Lead Dev panel Vernon will be part of
  * "How to Measure the Business Impact of AI [https://leaddev.com/event/how-to-measure-the-business-impact-of-ai]" — happening 25th February, free to sign up
* 01:02:00 - Richard's Selenium Conf talk
  * "Redefining Test Automation [https://www.youtube.com/watch?v=uIDvGzQdoxc]" — the talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.

23 February 2026 - 1 h 3 min

This Was Supposed to Be About Testing

This was supposed to be about testing. Instead, it turned into a conversation about burnout, money, leadership, community, AI, and what it actually takes to build a sustainable life in tech. Richard and Vernon kick off 2026 reflecting on what they're changing, what they're rebuilding, and how testing and quality fit into a future shaped by intention rather than hustle.

Links to stuff we mentioned during the pod:
* 05:19 - The Malazan Book of the Fallen [https://en.wikipedia.org/wiki/Malazan_Book_of_the_Fallen] by Steven Erikson [https://en.wikipedia.org/wiki/Steven_Erikson]
* 14:59 - The $1k Challenge [https://aliabdaal.com/the-1k-challenge/] by Ali Abdaal [https://www.youtube.com/@aliabdaal] that Vernon took part in last year
* 17:23 - The video [https://www.youtube.com/watch?v=Q10H5RA3eCA] from Daniel Pink on how to have a successful year
  * Here's [https://www.youtube.com/watch?v=Q10H5RA3eCA&t=1195s] where Daniel talks about having a Challenger Network (but the whole video is 😙🤌🏾)
* 18:46 - Toby Sinclair
  * Toby's website [https://www.tobysinclair.com/]
  * Toby's LinkedIn [https://www.linkedin.com/in/tobysinclair/]
* 19:24 - Keith Klain
  * Keith's blog [https://qualityremarks.com/]
  * Keith's podcast [https://www.youtube.com/@KeithKlain]
  * Keith's LinkedIn [https://www.linkedin.com/in/keithklain/]
* 19:25 - Agile Testing Days [https://agiletestingdays.com/] conference
* 35:45 - What is Model Drift [https://www.perplexity.ai/search/6fe7f519-f694-4e4c-8c92-c6304df8cb57]?
* 41:06 - Glue work
  * Tanya's Glue Work presentation [https://noidea.dog/glue], which you can read or watch
  * Vernon's talk about how glue work impacts Quality Engineers [https://youtu.be/EYTjTiRWrJo], Testers, etc.
* 48:06 - Gary "GaryVee" Vaynerchuk
  * Gary's website [https://garyvaynerchuk.com/]
  * Gary's YouTube [https://www.youtube.com/@garyvee]

Chapters
00:00 - Intro
00:54 - Greetings & where have we been?
01:32 - The holidays
02:34 - Rest & mood
04:00 - Routines for success
05:59 - Push-up challenge!
08:35 - Dopamine detox
10:28 - THE EPISODE BEGINS!
10:29 - What are our personal 2026 themes (rather than resolutions)?
10:59 - Rich's 2026 themes
13:10 - Vern's themes
17:58 - Friendship, loneliness, and being the initiator
21:28 - Rich has two itches. One about writing...
21:56 - ...and another about hats
25:23 - Vern's leadership focus and testing foundations
31:06 - AI work: data mindset, agents, and the vibe coding divide
40:11 - Rant about AI testing being stuck in the past
46:37 - Do "cool" shit and "talk" about it. How to stand out from AI Slop
50:10 - Our podcast themes for 2026

26 January 2026 - 53 min

Shifting Left: Agile vs. Waterfall in QA

In this episode of The Vernon Richard Show, the hosts engage in light-hearted banter about football before diving into a deep discussion on QA, QE, and testing. They explore the concept of 'shift left' in software development, comparing its application in agile versus waterfall methodologies. The conversation shifts to the evolving roles of QA and QE in the context of AI's impact on the industry, emphasizing the importance of task analysis and building a quality culture within teams. The episode concludes with reflections on managing expectations in QA roles and the future of jobs in the field.

Chapters
00:00 - Intro
00:48 - Welcome and "Hey" (may contain traces of ⚽️)
04:45 - Olly's first question: Does shift left lend itself more to waterfall (than other methodologies)?
14:41 - Olly's second question: Does this limit how much agile can be used? Is there potentially a new methodology that can emerge from this?
22:31 - Olly's third question (remixed by Rich a little): "...is it more now a case of making people aware that they can, should be considering things ahead of development?"
34:24 - Olly's fourth question: How far can you shift-left before it becomes overstepping?
51:53 - Olly's... which question is this now?! Next question! That works!: Where does the QA role end?

Links to stuff we mentioned during the pod:
* 04:26 - Olly Fairhall
  * Olly's LinkedIn [https://www.linkedin.com/in/olly-fairhall/]
  * Here's a link to what Olly sent us
* 04:45 - Waterfall (in software development)
  * Wikipedia article [https://en.wikipedia.org/wiki/Waterfall_model] about the history of the term
  * This article [https://www.techtarget.com/searchsoftwarequality/definition/waterfall-model] goes into a little more detail about the different phases and characteristics of the model
* 07:29 - Dan Ashby [https://www.linkedin.com/in/dan-ashby/]'s (yes DAN'S!) famous diagram, part of his often-cited "Continuous Testing [https://danashby.co.uk/2016/10/19/continuous-testing-in-devops/]" post
* 07:50 - For folks who don't understand that reference, it's... taken (🥁) scene [https://youtu.be/jZOywn1qArI] from the movie Taken
* 08:10 - Rich's Whiteboard [https://www.youtube.com/@WhiteboardTesting/videos] used to get a lot more love 😞
* 22:31 - Olly's questions and thoughts [https://docs.google.com/document/d/1pFmbfxR731hIDCJ1RKOY-Hw04Bqvnw2SpT93BJaGv24/edit?usp=sharing] that are guiding our conversation. Thanks Olly!
* 44:12 - The book "Who Not How [https://www.amazon.co.uk/Who-Not-How-Accelerating-Teamwork-ebook/dp/B0867ZJ151/]" by Dan Sullivan and Dr. Benjamin Hardy [https://whonothow.com/#AboutAuthor]
* 46:33 - Elisabeth Hendrickson
  * Get Elisabeth's excellent book Explore It! [https://pragprog.com/titles/ehxta/explore-it/]
  * Elisabeth's LinkedIn [https://www.linkedin.com/in/testobsessed/]
* 46:49 - Alan Page
  * Alan's newsletter [https://angryweasel.substack.com/]
  * Alan and Brent [https://www.linkedin.com/in/brentmjensen/]'s podcast [https://podcasters.spotify.com/pod/show/abtesting]
  * Alan's LinkedIn [https://www.linkedin.com/in/a-l-a-n/]
* 51:53 - Kelsey Hightower
  * Kelsey did a Q&A at Cloud Native PDX [https://www.meetup.com/cloud-native-pdx/events/310310245/?eventOrigin=group_past_events] and you can listen to the question and answer I was trying to describe here [https://www.youtube.com/watch?v=3WA1GQV_hyA&t=114s].
  * I urge you to listen to the whole thing. Kelsey is an excellent orator, storyteller, and all-around human ❤️
* 55:33 - Rob Sabourin
  * My quick Perplexity search [https://www.perplexity.ai/search/201e187a-818e-4c66-b1d6-aa58daeacc9d] for Rob's public material on Task Analysis
  * Rob's LinkedIn [https://www.linkedin.com/in/robsabamibug/]
* 56:59 - Vernon's newsletter "Yeah But Does it Work?!"
  * The issue mentioned is called "What Is The Vaughn Tan Rule and How Does It Impact Testing? [https://yeahbutdoesitwork.substack.com/p/what-is-the-vaughn-tan-rule-and-how]" and talks about where we might start with unbundling

21 October 2025 - 1 h 0 min

Choose your subscription

Most popular

Premium

20 hours of audiobooks

  • Exclusive podcasts

  • No ads in Podimo shows

  • Cancel anytime

Try free for 14 days
Then 99 kr / month

Try free

Premium Plus

100 hours of audiobooks

  • Exclusive podcasts

  • No ads in Podimo shows

  • Cancel anytime

Try free for 14 days
Then 169 kr / month

Try free
