The Vernon Richard Show
In this episode, Richard Bradshaw and Vernon discuss the relevance and application of the six principles of Automation in Testing in the context of AI advancements. They explore how these principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.

00:00 - Intro
01:47 - Welcome (Richard is not at home 👀)
02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
04:58 - What is Automation in Testing (AiT)?
06:49 - Principle 1: Supporting Testing over Replicating Testing
07:01 - Vernon's take: testing is a performance, not a click sequence
08:22 - What the industry promised vs what automation actually does
08:49 - The serendipity you lose when a human isn't testing
09:59 - Agentic testing: observing more, but still not replicating humans
10:56 - The danger of anthropomorphising AI output
12:10 - LLMs always give an answer — and that's the problem
13:03 - Principle 2: Testability over Automatability
13:14 - Vernon's take: narrow vs broad — operate, control, observe
14:38 - Making apps automatable for the robots but not the humans
15:37 - The shiniest framework in a broken testing context
16:40 - If it's testable, it's probably automatable — but not vice versa
16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
17:46 - The problem has always been testing, not automation
19:57 - Principle 3: Testing Expertise over Coding Expertise
20:18 - Vernon's take: testing expertise lets you leverage the tools
21:47 - The spoonfed tests problem: great at automating, lost without guidance
22:36 - The "code school" era: everyone told to learn to code
22:51 - Coding agents have changed the maths on this
26:01 - The new nuance: test design and framework knowledge over writing the code
28:44 - Evaluating code is a testing problem — and LLMs can help you do it
30:43 - Are agents as good as a junior developer?
31:42 - Outcome Engineering (O16G) and the race to write the AI principles
32:13 - Simon Wardley: we're in the wild west again
33:22 - Principle 4: Problems over Tools
33:29 - Vernon's take: the hammer and the nail
34:07 - Don't let your problems be shaped by the framework you have
34:36 - New automation opportunities beyond testing: PRs, logs, story review
35:30 - Principle 5: Risk over Coverage
36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
38:00 - The one test case, one automated test fallacy
39:04 - Where in the system is the risk? Do you even know your layers?
39:49 - Probabilistic vs non-deterministic: refining the language around AI
40:53 - Coverage as intentional vs coverage as a number someone picked once
43:15 - Principle 6: Observability over Understanding
43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
44:12 - What the principle was actually about: making automation results observable
47:00 - Does this principle belong in testing, or has it grown into quality?
49:00 - So... what's missing?
50:00 - The four pillars: Strategy, Creation, Usage, and Education
57:05 - Automation in Quality: the bigger opportunity
01:01:00 - Wrap up + Vern's Lead Dev panel

Links to stuff we mentioned during the pod:

* 04:00 - Automation in Testing (AiT)
  * The principles live at automationintesting.com [https://automationintesting.com]
  * AiT was co-created by Richard Bradshaw and Mark Winteringham [https://www.mwtestconsultancy.co.uk/]
* 04:00 - Test Automation Days
  * The conference where Richard is giving his keynote — testautomationdays.com [https://testautomationdays.com]
* 24:48 - James Thomas
  * The "kid in a candy shop" himself — James's blog [https://qahiccupps.blogspot.com/] and LinkedIn [https://www.linkedin.com/in/james-thomas-840aa11a/]
* 31:42 - Outcome Engineering [https://o16g.com/] (O16G)
  * The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
* 32:13 - Simon Wardley
  * If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps [https://www.swardleymaps.com/] and situational awareness in strategy is essential reading
  * Simon's LinkedIn [https://www.linkedin.com/in/simonwardley/]
* 43:30 - Abby Bangser
  * Vern's go-to person for all things observability. Abby's LinkedIn [https://www.linkedin.com/in/abbeynathanson/]
* 46:04 - Noah Sussman
  * As it turns out, the quote Vern references, describing advanced monitoring as "indistinguishable from testing", was not by Noah! It was Ed Keyes at GTAC 2007 [https://www.perplexity.ai/search/57c874ed-d667-4a50-8c6f-c39491a9d84d]
  * Noah's blog [https://infiniteundo.com/] and LinkedIn [https://www.linkedin.com/in/noahsussman/]
* 59:30 - Angie Jones
  * Vern's been reading Angie's work on testing AI-enabled applications here [https://angiejones.tech/blog/] and here [https://engineering.block.xyz/blog/]
  * Angie's website [https://angiejones.tech/] and LinkedIn [https://www.linkedin.com/in/angiejones/]
* 01:01:30 - The Lead Dev panel Vernon will be part of
  * "How to Measure the Business Impact of AI [https://leaddev.com/event/how-to-measure-the-business-impact-of-ai]" — happening 25th February, free to sign up
* 01:02:00 - Richard's Selenium Conf talk
  * "Redefining Test Automation [https://www.youtube.com/watch?v=uIDvGzQdoxc]" — the talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.