
Tech Talks Daily
Podcast by Neil C. Hughes
About Tech Talks Daily
If every company is now a tech company and digital transformation is a journey rather than a destination, how do you keep up with the relentless pace of technological change? Every day, Tech Talks Daily brings you insights from the brightest minds in tech, business, and innovation, breaking down complex ideas into clear, actionable takeaways. Hosted by Neil C. Hughes, the show explores how emerging technologies such as AI, cybersecurity, cloud computing, fintech, quantum computing, and Web3 are shaping industries and solving real-world challenges in modern businesses.

Through candid conversations with industry leaders, CEOs, Fortune 500 executives, startup founders, and even the occasional celebrity, Tech Talks Daily uncovers the trends driving digital transformation and the strategies behind successful tech adoption. But this isn't just about buzzwords. We go beyond the hype to demystify the biggest tech trends and determine their real-world impact. From cybersecurity and blockchain to AI sovereignty, robotics, and post-quantum cryptography, we explore the measurable difference these innovations can make, whether that means improving security, enhancing customer experiences, or driving business growth. We also investigate the ROI of cutting-edge tech projects, asking the tough questions about what works, what doesn't, and how businesses can maximize their investments.

Whether you're a business leader, an IT professional, or simply curious about technology's role in our lives, you'll find engaging discussions that challenge perspectives, share diverse viewpoints, and spark new ideas. New episodes are released daily, 365 days a year.
All episodes
301 episodes
MariaDB is a name with deep roots in the open-source database world, but in 2025 it is showing the energy and ambition of a company on the rise. Taken private in 2024 and backed by K1 Investment Management, MariaDB is doubling down on innovation while positioning itself as a strong alternative to MySQL and Oracle. At a time when many organisations are frustrated with Oracle's pricing and MySQL's cloud-first pivot, MariaDB is finding new opportunities by combining open-source freedom with enterprise-grade reliability.

In this conversation, I sit down with Vikas Mathur, Chief Product Officer at MariaDB, to explore how the company is capitalising on these market shifts. Vikas shares the thinking behind MariaDB's renewed focus, explains how the platform delivers similar features to Oracle at up to 80 percent lower total cost of ownership, and details how recent innovations are opening the door to new workloads and use cases.

One of the most significant developments is the launch of Vector Search in January 2025. This feature is built directly into InnoDB, eliminating the need for separate vector databases and delivering two to three times the performance of pgvector. With hardware acceleration on both x86 and IBM Power architectures, and native connectors for leading AI frameworks such as LlamaIndex, LangChain and Spring AI, MariaDB is making it easier for developers to integrate AI capabilities without complex custom work.

Vikas explains how MariaDB's pluggable storage engine architecture allows users to match the right engine to the right workload. InnoDB handles balanced transactional workloads, MyRocks is optimised for heavy writes, ColumnStore supports analytical queries, and Mroonga enables full-text search. With native JSON support and more than forty functions for manipulating semi-structured data, MariaDB can also remove the need for separate document databases. This flexibility underpins the company's vision of one database for infinite possibilities.

The discussion also examines how MariaDB manages the balance between its open-source community and enterprise customers. Community adoption provides early feedback on new features and helps drive rapid improvement, while enterprise customers benefit from production support, advanced security, high availability and disaster recovery capabilities such as Galera-based synchronous replication and the MaxScale proxy.

We look ahead to how MariaDB plans to expand its managed cloud services, including DBaaS and serverless options, and how the company is working on a "RAG in a box" approach to simplify retrieval-augmented generation for DBAs. Vikas also shares his perspective on market trends, from the shift away from embedded AI and traditional machine learning features toward LLM-powered applications, to the growing number of companies moving from NoSQL back to SQL for scalability and long-term maintainability.

This is a deep dive into the strategy, technology and market forces shaping MariaDB's next chapter. It will be of interest to database architects, AI engineers, and technology leaders looking for insight into how an open-source veteran is reinventing itself for the AI era while challenging the biggest names in the industry.
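Since the episode highlights that Vector Search lives inside the regular SQL engine rather than in a separate vector database, a minimal Python sketch may help picture what that means for application code. It uses MariaDB Connector/Python; the VECTOR column type, the VECTOR INDEX clause, and the VEC_FromText / VEC_DISTANCE_EUCLIDEAN functions are written from memory of MariaDB 11.7, and the table, data, and connection details are invented for illustration, so treat this as a sketch and check the current documentation before relying on the exact syntax.

```python
# Minimal sketch: storing embeddings next to relational data in MariaDB and
# running a nearest-neighbour query. Assumes MariaDB Server 11.7+ with the
# VECTOR type and VEC_* functions; connection details and the tiny embedding
# dimension are placeholders for illustration only.
import mariadb

conn = mariadb.connect(host="localhost", user="app", password="secret",
                       database="demo")
cur = conn.cursor()

# The vector column lives in an ordinary InnoDB table, so no separate
# vector database is needed.
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id INT AUTO_INCREMENT PRIMARY KEY,
        body TEXT,
        embedding VECTOR(4) NOT NULL,
        VECTOR INDEX (embedding)
    ) ENGINE=InnoDB
""")

# Insert a document with a toy embedding; VEC_FromText parses a JSON-style array.
cur.execute(
    "INSERT INTO docs (body, embedding) VALUES (?, VEC_FromText(?))",
    ("hello world", "[0.1, 0.2, 0.3, 0.4]"),
)

# Retrieve the three documents closest to a query embedding.
cur.execute(
    """SELECT id, body
       FROM docs
       ORDER BY VEC_DISTANCE_EUCLIDEAN(embedding, VEC_FromText(?))
       LIMIT 3""",
    ("[0.1, 0.2, 0.25, 0.4]",),
)
for row in cur.fetchall():
    print(row)

conn.commit()
conn.close()
```

The same connection could also exercise the JSON functions or a table created with ENGINE=ColumnStore or ENGINE=Mroonga, which is the "right engine for the right workload" idea discussed in the episode.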

In this episode, I am joined by Charles Southwood, Regional Vice President at Denodo Technologies, a company recognised globally for its leadership in data management. With revenues of $288M and a customer base that includes Hitachi, Informa, Engie, and Walmart, Denodo sits at the heart of how enterprises access, trust, and act on their data. Charles brings over 35 years of experience in the tech industry, offering both a long-term view of how the data landscape has evolved and sharp insights into the challenges businesses face today.

Our conversation begins with a pressing issue for any organisation exploring generative AI: data reliability. With many AI models trained on vast amounts of internet content, there is a real risk of false information creeping into business outputs. Charles explains why mitigating hallucinations and inaccuracies is essential not just for technical quality, but for protecting brand reputation and avoiding costly missteps. We explore alternative approaches that allow enterprises to benefit from AI innovation while maintaining data integrity and control.

We also examine the broader enterprise pressures AI has created. The promise of reduced IT costs and improved agility is enticing, but how much of this is achievable today and how much is inflated by hype? Charles shares why he believes 2025 is a tipping point for achieving true business agility through data virtualisation, and what a virtualised data layer can deliver for teams across IT, marketing, and beyond.

Along the way, Charles reflects on the industry shifts that caught him most by surprise, the decisions he would make differently with the benefit of hindsight, and the one golden rule he would share with younger tech professionals starting their careers now. This is a conversation for anyone who wants to understand how to harness AI and advanced data integration without falling into the traps of poor data quality, overblown expectations, or short-term thinking.

In the movies, Tony Stark's JARVIS is the ultimate AI assistant, managing schedules, running simulations, controlling environments, and anticipating needs before they are voiced. In reality, today's AI agents are still far from that vision. Despite 2025 being heralded as the year of agentic AI, the first offerings from major players have been underwhelming. They can perform tasks, but they remain a long way from the seamless, hyper-intelligent assistants we imagined.

In this episode, Dr. Robert "Bobby" Blumofe, CTO at Akamai Technologies, joins me to explore what is really holding AI assistants back and what it will take to build one as capable as a top human executive assistant. Bobby argues that the leap forward will not come from chasing ever-larger models but from optimization, efficiency, and integrating AI with the right tools, infrastructure, and processes. He believes that breakthroughs in model efficiency, like those seen in DeepSeek, could make capable agents affordable and viable for everyday use.

We break down the spectrum of AI agents from simple, task-specific helpers to the fully autonomous, general-purpose vision of JARVIS. Bobby shares why many of the most valuable enterprise applications will come from the middle ground, where agents are semi-autonomous, task-focused, and integrated with other systems for reliability; a sketch of that pattern follows these notes. He also explains why smaller, specialized models often outperform "ask me anything" LLMs for specific business use cases, reducing cost, latency, and security risks.

The conversation covers Akamai's role in enabling low-latency, scalable AI at the edge, the importance of combining neural and symbolic AI to achieve reliable reasoning and planning, and a realistic five-to-seven-year timeline for assistants that can rival the best human EAs. We also look at the technical, social, and business challenges ahead, from over-reliance on LLMs to the ethics of deploying highly capable agents at scale. This is a grounded, forward-looking discussion on the future of AI assistants, where they are today, why they have fallen short, and the practical steps needed to turn fiction into reality.
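Bobby's middle-ground category, semi-autonomous and task-focused agents integrated with other systems, can be hard to picture in the abstract. The hypothetical Python sketch below shows the general shape under stated assumptions: a small, specialized model only chooses among a fixed set of tools, while ordinary code executes the tool, enforces limits, and stops after a bounded number of steps. The function names, the call_small_model stub, and the tool set are all invented for illustration and are not drawn from the episode.

```python
# Hypothetical sketch of a semi-autonomous, task-focused agent: the model only
# picks an action from a whitelist; plain code executes it, checks limits, and
# caps the number of steps for reliability.
import json

ALLOWED_TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "refund_order": lambda order_id: {"order_id": order_id, "refunded": True},
}
MAX_STEPS = 3  # hard cap keeps the agent from wandering off-task


def call_small_model(prompt: str) -> str:
    """Stand-in for a small, specialized model. A real system would call a
    fine-tuned model that returns a JSON tool invocation."""
    return json.dumps({"tool": "lookup_order",
                       "args": {"order_id": "A-123"},
                       "done": True})


def run_agent(task: str) -> list:
    history = []
    for _ in range(MAX_STEPS):
        decision = json.loads(call_small_model(f"Task: {task}\nHistory: {history}"))
        tool = decision.get("tool")
        if tool not in ALLOWED_TOOLS:  # reliability guardrail in code, not in the model
            history.append({"error": f"tool '{tool}' not permitted"})
            break
        result = ALLOWED_TOOLS[tool](**decision.get("args", {}))
        history.append({"tool": tool, "result": result})
        if decision.get("done"):
            break
    return history


if __name__ == "__main__":
    print(run_agent("Where is order A-123?"))
```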

Multi-agent AI systems are moving from theory to enterprise reality, and in this episode, Babak Hodjat, CTO of AI at Cognizant, explains why he believes they represent the future of business. Speaking from his AI R&D lab in San Francisco, Babak outlines how his team is both deploying these systems inside Cognizant and helping clients build their own, breaking down organisational silos, coordinating processes, and improving decision-making.

We discuss how the arrival of optimized, open-source models like DeepSeek is accelerating AI democratization, making it viable to run capable agents on far smaller infrastructure. Babak explains why this is not a "Sputnik moment" for AI, but a powerful industry correction that opens the door for more granular, cost-effective agents. This shift is enabling richer, more scalable multi-agent networks, with agents that can not only perform autonomous tasks but also communicate and collaborate with each other.

Babak also introduces Cognizant's open-sourced Neuro-AI Network platform, designed to be model and cloud agnostic while supporting large-scale coordination between agents. By separating the opaque AI model from the fully controllable code layer, the platform builds in safeguards for trust, data handling, and access control, addressing one of the most pressing challenges in AI adoption; a sketch of that separation follows these notes.

Looking ahead, Babak predicts rapid growth in multi-agent ecosystems, including inter-company agent communication and even self-evolving agent networks. He also stresses the importance of open-source collaboration in AI's next chapter, warning that closed, proprietary approaches risk slowing innovation and eroding trust.

This is a deep yet accessible conversation for anyone curious about how enterprise AI is evolving from single, monolithic models to flexible, distributed systems that can adapt and scale. You can learn more about Cognizant's AI work at cognizant.com and follow Babak's insights on LinkedIn.
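One idea from the episode, separating the opaque AI model from a fully controllable code layer, translates naturally into a small sketch. The hypothetical Python example below is not Cognizant's platform and none of its names come from the episode; it simply shows agents exchanging messages only through a coordinator that enforces field-level access in auditable code, so data-handling rules never depend on the model itself.

```python
# Hypothetical sketch of a controllable coordination layer between agents.
# Access control and redaction happen in plain code; the model call
# (opaque_model) only ever sees what the coordinator lets through.
from dataclasses import dataclass, field


def opaque_model(agent_name: str, payload: dict) -> str:
    """Stand-in for whatever LLM backs an agent."""
    return f"{agent_name} processed fields {sorted(payload)}"


@dataclass
class Agent:
    name: str
    allowed_fields: set = field(default_factory=set)

    def handle(self, payload: dict) -> str:
        return opaque_model(self.name, payload)


class Coordinator:
    """Controllable code layer: routes messages and enforces field-level access."""

    def __init__(self):
        self.agents = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def send(self, to: str, payload: dict) -> str:
        agent = self.agents[to]
        # Redact anything the target agent is not cleared to see.
        visible = {k: v for k, v in payload.items() if k in agent.allowed_fields}
        return agent.handle(visible)


if __name__ == "__main__":
    hub = Coordinator()
    hub.register(Agent("billing", allowed_fields={"invoice_id", "amount"}))
    hub.register(Agent("support", allowed_fields={"ticket_id"}))

    record = {"invoice_id": "INV-9", "amount": 120.0, "customer_ssn": "***"}
    print(hub.send("billing", record))  # sees invoice_id and amount only
    print(hub.send("support", record))  # sees nothing it is not cleared for
```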

AI adoption in the tech sector has reached a tipping point, and the latest EY Technology Pulse poll offers a fascinating look at just how quickly things are moving. In this episode, I'm joined by Ken Englund, Partner at EY, to unpack the survey's findings and what they reveal about the next two years of AI in enterprise. Conducted with 500 senior tech executives from companies with more than 5,000 employees, the poll focuses on AI agents, autonomous deployments, and the shifting priorities that are shaping investment strategies.

Ken shares why half of the respondents expect more than 50% of their AI deployments to be fully autonomous within 24 months, and why optimism around AI's potential remains high. He also explains a notable shift away from last year's heavy focus on large language models toward applied AI and agentic systems designed to handle real-world business workflows. While 92% of tech leaders plan to increase AI spending in 2026, the investment isn't solely about technology; it's about competitiveness, customer experience, and strategic alignment.

One of the more surprising takeaways is the human side of the equation. Despite headlines predicting widespread job losses, only 9% of respondents anticipate layoffs in the next six months, down from 20% last year. Instead, companies are balancing upskilling existing staff with bringing in new AI talent, with roles emerging in areas like MLOps, AI product management, and forward-deployed engineering.

Ken and I also dive into the growing weight of governance, data privacy, and security. With autonomous agents becoming more embedded in core operations, boards, CFOs, and audit teams are taking a closer look at trust frameworks without wanting to slow innovation. The conversation highlights a critical inflection point: most companies are still in the "automating existing workflows" phase, but the real breakthroughs will come when AI enables entirely new business models.

This episode is a snapshot of a sector in rapid evolution, where enthusiasm is tempered by lessons learned, and where the road from pilot to production is still full of challenges. You can read the full EY Technology Pulse poll at https://www.ey.com/en_us/newsroom/2025/05/ey-survey-reveals-that-technology-companies-are-setting-the-pace-of-agentic-ai-will-others-follow-suit
