
Fully Vested
Podcast by Jason D. Rowley & Graham C. Peck
Undersubscribed chats about tech and venture with Jason D. Rowley & Graham C. Peck
All episodes
45 episodes
IS GENERATIVE AI DISRUPTIVE OR SUSTAINING?

A quick recap of Clayton Christensen's conceptual framework of disruptive and sustaining innovations...

DISRUPTIVE INNOVATION

Disruptive Innovation: This refers to a process whereby a smaller company with fewer resources successfully challenges established incumbent businesses. Disruptive innovations typically start by capturing the lower end of the market, offering products or services that are more affordable and accessible. Over time, these innovations improve in quality and performance, eventually displacing the established competitors. Disruptive innovations often change the competitive landscape and can lead to the creation of entirely new markets. A classic example is how digital photography disrupted the traditional film photography industry.

ARGUMENTS FOR GENAI AS DISRUPTIVE INNOVATION

1. Low-End Disruption: Christensen often emphasized how disruptive innovations initially target the lower end of the market. Generative AI could initially appeal to smaller businesses or individuals who previously couldn't afford professional services in fields like design, content creation, or data analysis.
2. Market Transformation: Generative AI has the potential to create new markets and value networks, especially in fields like art, content creation, and design, where it enables the creation of novel content that was previously not possible or required extensive human effort.
3. Accessibility: By democratizing skills that were once niche or expert-level (like graphic design, coding, or prose writing), generative AI can disrupt traditional industries by making these skills accessible to a wider audience.
4. Cost Efficiency: It can significantly reduce costs in content production, potentially disrupting sectors reliant on human labor for these tasks.
5. Innovative Business Models: The technology could lead to new business models, particularly in personalized content creation, marketing, and customer interaction, disrupting conventional business strategies.

ARGUMENTS AGAINST GENAI AS DISRUPTIVE INNOVATION

1. Dependency on Existing Infrastructure: Generative AI is highly dependent on existing data and computing infrastructure, suggesting it's more of an evolution than a radical market disruptor.
2. Ethical and Regulatory Constraints: Potential ethical issues and regulatory hurdles, especially around data privacy and intellectual property, might slow down its disruptive impact.
3. Integration with Current Systems: Rather than replacing existing systems, generative AI is often used to enhance them, suggesting a more gradual market evolution.

SUSTAINING INNOVATION

Sustaining Innovation: Sustaining innovations, on the other hand, do not disrupt existing markets but rather evolve them. These innovations enhance or improve existing products, services, or processes, making them more efficient, effective, or accessible. They tend to support and extend the life of existing companies or industries rather than replacing them. An example of sustaining innovation is the evolution of smartphones, where each new model offers improvements and additional features that enhance the user experience but do not disrupt the existing market the way the first smartphones did.

ARGUMENTS FOR GENAI AS SUSTAINING INNOVATION

1. Enhancing Current Products: Generative AI often acts as an enhancement to existing digital products, like improving software with AI capabilities, which aligns with sustaining innovation.
2. Gradual Improvement: The technology is seeing incremental improvements, aligning with the idea of gradual enhancements characteristic of sustaining innovation.
3. Appealing to the Existing Market: In many cases, it serves the existing market better by offering more efficient, high-quality outputs (as in graphic design, coding, or data analysis).

ARGUMENTS AGAINST GENAI AS SUSTAINING INNOVATION

1. Potential for Market Transformation: The long-term potential of generative AI could be to completely transform markets, not just sustain them.
2. Beyond Mere Improvement: Generative AI introduces capabilities (like creating new forms of art or generating new data) that go beyond simple improvements of existing products.
3. Altering Consumer Behavior: Its ability to change how consumers interact with technology (for instance, preferring AI-generated content) suggests a shift in market dynamics, not just a sustaining of existing ones.

FURTHER READING

📺 Clayton Christensen: Disruptive Innovation [https://www.youtube.com/watch?v=rpkoCZ4vBSI] (Clayton Christensen presenting at the Saïd Business School at the University of Oxford, uploaded to YouTube in June 2013)
📃 What Is Disruptive Innovation? [https://hbr.org/2015/12/what-is-disruptive-innovation] (Christensen, Raynor, and McDonald for the December 2015 issue of the Harvard Business Review)
📃 Sustaining vs. Disruptive Innovation: What's the Difference? [https://online.hbs.edu/blog/post/sustaining-vs-disruptive-innovation] (Catherine Cote for Harvard Business School Online, February 2022)
📺 Disruptive Technology vs. Sustaining Technology [https://www.youtube.com/watch?v=ut7c0wcn_KA] (Ashley Hodgson on YouTube, December 2022)
📃 Differences between early adopters of disruptive and sustaining innovations [https://www.sciencedirect.com/science/article/abs/pii/S0148296314001398] (Reinhardt and Gurtner, 2015)

GENERATIVE AI ECONOMICS

THE BRIEFEST OF OVERVIEWS

The emerging Generative AI sphere breaks the model of software economics, to an extent.

Traditional Software Economics: Building and launching a new SaaS product (for example) is low CapEx, high OpEx, and high margin.
Foundation Model Developer Economics: High CapEx (as of right now, progress is gated by access to chips). High OpEx (data acquisition, data engineering, model training, and, most importantly, model inference are all massive cost centers).

TRAINING AND INFERENCE COSTS

                   Discriminative AI   Generative AI
Training Costs     High                High
Inference Costs    Low                 High

AI TRAINING: THE LEARNING PHASE

Think of AI training like teaching a student. In this phase, you're giving the AI model lots of examples (this is your data) to learn from. These examples could be anything from pictures of cats and dogs to customer reviews.

* How It Works: The AI model looks at this data and tries to find patterns. For instance, it might notice that cat pictures often have pointy ears and whiskers. It's like studying for an exam: the model is trying to learn as much as possible from the data it's given.
* The Goal: The aim is to make the AI understand these patterns well enough that it can make its own decisions or predictions later. It's all about the model learning the rules of the game from the examples it sees.
* The Challenge: This phase can be resource-heavy. It requires a lot of computational power (think high-end GPUs or specialized hardware), and it can take a long time, depending on how complex the task is.

AI INFERENCE: THE APPLICATION PHASE

Now imagine the student (our AI model) has graduated and is ready to apply what it learned in the real world. This is the inference phase.

Inference in Discriminative AI:

* Overview: This is like asking the AI model a multiple-choice question. You present it with new data (like an image or a piece of text), and based on its training, it categorizes or identifies this data. Think of it as asking, "Based on what you've learned, what do you think this is?"
* Application: It's widely used in tasks like image recognition (identifying objects in pictures), spam detection (categorizing emails), and sentiment analysis (understanding whether a review is positive or negative).

Inference in Generative AI:

* Overview: Here, instead of categorizing, the AI is creating something new. It's like giving the AI a set of ingredients (data and conditions) and asking it to cook up a new dish (the output). This output could be a piece of text, an image, or even music.
* How It Works: The model uses patterns it learned during training to generate new, original content. For instance, if it's been trained on a lot of landscape paintings, it can generate a new painting that doesn't exist yet but looks like it could belong to the same collection.
* Application: Generative AI is used in creative fields like art generation (creating new images) and content writing (generating articles or stories), or even to generate synthetic data for further AI training.

FURTHER READING

📃 Navigating the High Cost of AI Compute [https://a16z.com/navigating-the-high-cost-of-ai-compute/] (Appenzeller et al. for the Andreessen Horowitz blog, April 2023)
📃 Compute and Energy Consumption Trends in Deep Learning Inference [https://arxiv.org/abs/2109.05472] (Desislavov et al., March 2023. ArXiv ID: 2109.05472)
📃 How Inferencing Differs From Training in Machine Learning Applications [https://semiengineering.com/how-inferencing-differs-from-training-in-machine-learning-applications/] (Sam Fuller for Semiconductor Engineering, January 2022)
📃 The Inference Cost Of Search Disruption – Large Language Model Cost Analysis [https://www.semianalysis.com/p/the-inference-cost-of-search-disruption] (Dylan Patel and Afzal Ahmad in SemiAnalysis, February 2023)

WHERE THE VALUE LIES

As with any emerging technology, there are generally three types of players:

* Those building core technology (i.e., doing whatever the equivalent of "bench science" is in their field)
* Those packaging/implementing core technologies into specialized applications
* Those providing the infrastructure to builders and packagers

At least at this stage, it's clear that captured value redounds to those participants in this rank order:

1. Infrastructure Providers: cloud providers (Microsoft Azure, Google Cloud Platform, Amazon Web Services, etc.) and semiconductor developers (Nvidia, Graphcore, Groq, Cerebras, etc., plus all the chip development efforts at big companies like Amazon and Microsoft)
2. Foundation Model Builders: e.g., OpenAI, Anthropic, Deepgram, etc.
3. Implementers: companies that build "wrappers" around infrastructure and foundation models. Put differently: companies/projects which integrate with foundation model APIs and package the outputs from said APIs.

FURTHER READING

📃 Exploring opportunities in the gen AI value chain [https://www.mckinsey.com/capabilities/quantumblack/our-insights/exploring-opportunities-in-the-generative-ai-value-chain] (Härlin et al. for McKinsey Digital, April 2023)
📃 The value chain of general-purpose AI [https://www.adalovelaceinstitute.org/blog/value-chain-general-purpose-ai/] (Küspert et al. for the Ada Lovelace Institute, February 2023)
📃 Behind the Hype: A Deep Dive into the AI Value Chain [https://hashcollision.substack.com/p/behind-the-hype-a-deep-dive-into] (Arun Rao on Hash Collision, June 2023)
📃 Generative AI Value Chain [https://matt-rickard.com/generative-ai-value-chain] (Matt Rickard publishing on his blog, November 2022)
📃 Basics on the Layers and Value Chain of Generative Artificial Intelligence [https://medium.com/@flo.seemann/basic-on-the-layers-and-value-chain-of-generative-artificial-intelligence-65010f490524] (Florian Seemann on Medium, April 2023)
📃 Generative AI Value Chain [https://www.hbs.edu/faculty/Pages/item.aspx?num=64320] (Andy Wu and Matt Higgins publishing a Harvard Business School Background Note, July 2023)

OTHER MENTIONS AND LINKS

TITLE IDEAS

* Back to the 90s
* A Whale With Moose Antlers
* Breathless Dorks on Twitter
* Whither the Poop Shovelers?
* Graham's Personal Hope
* Luxurious Techno-Communism

Many of the core technologies behind Generative AI are not exactly brand new. For example, the "Attention Is All You Need [https://arxiv.org/abs/1706.03762]" paper, which introduced the Transformer architecture (the "T" in ChatGPT), was published in 2017. Diffusion models (the backbone of image generation tools like Stable Diffusion and DALL-E) were introduced in 2015 [https://arxiv.org/pdf/1503.03585.pdf] and were originally inspired by thermodynamic modeling techniques. Generative adversarial networks (GANs) were introduced [https://proceedings.neurips.cc/paper_files/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf] in 2014. However, Generative AI has seemingly taken the world by storm over the past couple of years. In this episode, Graham and Jason discuss, in broad strokes, what Generative AI is, what's required to train and run foundation models, where the value lies, and frontier challenges.

FACT-CHECKING AND CORRECTIONS

Before we begin...

* At around 36:16, Jason said that The Pile was compiled by OpenAI or one of its research affiliates. This is not correct. The Pile was compiled by EleutherAI, and we couldn't find documentation suggesting that OpenAI incorporates the entirety of The Pile into its training data corpus.
* At 49:07, Jason mentioned "The Open Source Institute" but actually meant the Open Source Initiative [https://opensource.org/].

APPLIED MACHINE LEARNING 101

Not all AI and applied machine learning models are created equal, and models can be designed to complete specific types of tasks. Broadly speaking, there are two types of applied machine learning models: discriminative and generative.

DISCRIMINATIVE AI

Definition: Discriminative AI focuses on learning the boundary between different classes of data from a given set of training data. Unlike generative models, which learn to generate data, discriminative models learn to differentiate between classes and make predictions or decisions based on the input data.
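To make that definition concrete, here's a minimal sketch (ours, not from the episode) of a discriminative model: a tiny logistic regression, in plain Python, that learns a boundary between two clusters of points and then classifies new inputs. The toy data and all function names are illustrative.

```python
import math
import random

def train_logistic(points, labels, lr=0.1, epochs=500):
    """Fit weights w and bias b by stochastic gradient descent on logistic loss."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            z = w[0] * x1 + w[1] * x2 + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of class 1
            err = p - y                      # gradient of the logistic loss w.r.t. z
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x1, x2):
    """Discriminative inference: one cheap forward pass, then a class decision."""
    z = w[0] * x1 + w[1] * x2 + b
    return 1 if z > 0 else 0

random.seed(0)
# Toy data: class 0 clustered near (0, 0); class 1 clustered near (3, 3).
points = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)] + \
         [(random.gauss(3, 0.5), random.gauss(3, 0.5)) for _ in range(50)]
labels = [0] * 50 + [1] * 50

w, b = train_logistic(points, labels)
print(predict(w, b, 0.2, -0.1))  # a point near the class-0 cluster
print(predict(w, b, 2.9, 3.2))   # a point near the class-1 cluster
```

Note what the model never does: it never learns to produce new points, only to decide which side of the learned boundary an input falls on. That asymmetry is also why discriminative inference is comparatively cheap, as discussed in the training-vs.-inference cost section above.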
Historical Background TLDR:

* The development of Discriminative AI has its roots in statistical and machine learning approaches aimed at classification tasks.
* Logistic regression and Support Vector Machines (SVMs) are early examples of discriminative models, and they have been used for many years in various fields, including computer vision and natural language processing.
* Over time, with the development of deep learning, discriminative models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have become highly effective for a wide range of classification tasks.

Pop Culture Example(s):

* The "Hotdog vs. Not a Hotdog" algorithm [https://www.youtube.com/watch?v=ACmydtFDTGs] from HBO's Silicon Valley (S4E4)
* The image recognition capabilities of something like JARVIS, Tony Stark's AI assistant in Iron Man (2008)

Real-World Example(s):

* Automatic speech recognition (ASR)
* Spam and abuse detection
* Facial recognition, such as Apple's Face ID, and more Orwellian examples in places ranging from China to England

Further Reading:

* Discriminative Model [https://en.wikipedia.org/wiki/Discriminative_model] (Wikipedia)

GENERATIVE AI

Definition: Generative AI refers to a type of artificial intelligence that is capable of generating new data samples that are similar to a given set of training data. This is achieved through algorithms that learn the underlying patterns, structures, and distributions inherent in the training data and can generate novel data points with similar properties.

Historical Background TLDR:

* The origins of Generative AI can be traced back to the development of generative models, with early instances including probabilistic graphical models in the early 2000s.
* However, the field truly began to gain traction with the advent of Generative Adversarial Networks (GANs), introduced by Ian Goodfellow and his colleagues in 2014.
* Since then, various generative models like Variational Autoencoders (VAEs) and others have also gained prominence, contributing to the rapid advancement of Generative AI.

Pop Culture Example:

* The AI from the movie Her (2013)

Real-World Example(s):

* OpenAI's GPT family, alongside image models like Stable Diffusion and Midjourney

Further Reading:

* Deepgram's Generative AI page [https://deepgram.com/ai-glossary/generative-ai] in the AI Glossary... co-written by Jason and GPT-4.
* Large Language Model [https://deepgram.com/ai-glossary/large-language-model] in the Deepgram AI Glossary... also co-written by Jason and GPT-4.
* The Physics Principle That Inspired Modern AI Art [https://www.quantamagazine.org/the-physics-principle-that-inspired-modern-ai-art-20230105/] (Anil Ananthaswamy for Quanta Magazine)
* Visualizing and Explaining Transformer Models From the Ground Up [https://deepgram.com/learn/visualizing-and-explaining-transformer-models-from-the-ground-up] (Zian "Andy" Wang for the Deepgram blog, January 2023)
* Transformer Explained [https://paperswithcode.com/method/transformer] hub on PapersWithCode
* Transformers, Explained: Understand the Model Behind GPT-3, BERT, and T5 [https://daleonai.com/transformers-explained] (Dale Markowitz on his blog, Dale on AI, May 2021)

FURTHER READING BY TOPIC

In rough order of when these topics were mentioned in the episode...
ECONOMIC/INDUSTRY IMPACTS OF AI

* How Large Language Models Will Transform Science, Society, and AI [https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai] (Alex Tamkin and Deep Ganguli for Stanford HAI's blog, February 2021)
* The Economic Potential of Generative AI: The Next Productivity Frontier [https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier] (McKinsey & Co., June 2023)
* Generative AI Could Raise Global GDP by 7% [https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html] (Goldman Sachs, April 2023)
* Generative AI Promises an Economic Revolution. Managing the Disruption Will Be Crucial. [https://www.wsj.com/articles/generative-ai-promises-an-economic-revolution-managing-the-disruption-will-be-crucial-b1c0f054] (Bob Fernandez for WSJ Pro Central Banking, August 2023)
* The Economic Case for Generative AI and Foundation Models [https://a16z.com/the-economic-case-for-generative-ai-and-foundation-models/] (Martin Casado and Sarah Wang for the Andreessen Horowitz Enterprise blog, August 2023)
* Generative AI and the software development lifecycle [https://www.thoughtworks.com/en-us/insights/articles/generative-ai-software-development-lifecycle-more-than-coding-assistance] (Birgitta Böckeler and Ryan Murray for Thoughtworks, September 2023)
* How generative AI is changing the way developers work [https://github.blog/2023-04-14-how-generative-ai-is-changing-the-way-developers-work/] (Damian Brady for The GitHub Blog, April 2023)
* The AI Business Defensibility Problem [https://datastream.substack.com/p/the-ai-business-defensibility-problem] (Jay F. publishing on their Substack, The Data Stream)

USING LANGUAGE MODELS EFFECTIVELY

* The emerging types of language models and why they matter [https://techcrunch.com/2022/04/28/the-emerging-types-of-language-models-and-why-they-matter/?guccounter=1] (Kyle Wiggers for TechCrunch, April 2023)
* Crafting AI Commands: The Art of Prompt Engineering [https://deepgram.com/learn/what-is-prompt-engineering] (Nithanth Ram for the Deepgram blog, March 2023)
* Prompt Engineering [https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/] (Lilian Weng on her blog Lil'Log, March 2023)
* Prompt Engineering Techniques: Chain-of-Thought [https://deepgram.com/learn/chain-of-thought-prompting-guide] & Tree-of-Thought [https://deepgram.com/learn/tree-of-thoughts-prompting] (both by Brad Nikkel for the Deepgram blog)
* 11 Tips to Take Your ChatGPT Prompts to the Next Level [https://www.wired.com/story/11-tips-better-chatgpt-prompts/] (David Nield for WIRED, March 2023)
* Prompt Engineering 101 [https://humanloop.com/blog/prompt-engineering-101] (Raza Habib and Sinan Ozdemir for the Humanloop blog, December 2022)

HERE THERE BE DRAGONS

Hallucinations

* Hallucination (artificial intelligence) [https://en.wikipedia.org/wiki/Hallucination_%28artificial_intelligence%29] (Wikipedia)
* Chatbot Hallucinations Are Poisoning Web Search [https://www.wired.com/story/fast-forward-chatbot-hallucinations-are-poisoning-web-search/] (Will Knight for WIRED, October 2023)
* How data poisoning attacks corrupt machine learning models [https://www.csoonline.com/article/570555/how-data-poisoning-attacks-corrupt-machine-learning-models.html] (Lucian Constantin for CSO Online)

Data Poisoning & Related

* Data Poisoning [https://paperswithcode.com/task/data-poisoning] hub on PapersWithCode
* Glaze - Protecting Artists from Generative AI [https://glaze.cs.uchicago.edu/] project from UChicago (2023)
* Self-Consuming Generative Models Go MAD [https://arxiv.org/abs/2307.01850] (Alemohammad et al. on ArXiv, July 2023)
* What Happens When AI Eats Itself [https://deepgram.com/learn/when-ai-eats-itself] (Tife Sanusi for the Deepgram blog, August 2023)
* The AI is eating itself [https://www.platformer.news/p/the-ai-is-eating-itself] (Casey Newton for Platformer, June 2023)
* AI-Generated Data Can Poison Future AI Models [https://www.scientificamerican.com/article/ai-generated-data-can-poison-future-ai-models/] (Rahul Rao for Scientific American, July 2023)

Intellectual Property and Fair Use

* Measuring Fair Use: The Four Factors - Copyright Overview [https://fairuse.stanford.edu/overview/fair-use/four-factors/] (Rich Stim for the Stanford Copyright and Fair Use Center)
* Is the Use of Copyrighted Works to Train AI Qualified as a Fair Use? [https://copyrightalliance.org/copyrighted-works-training-ai-fair-use/] (Cala Coffman for the Copyright Alliance blog, April 2023)
* Reexamining "Fair Use" in the Age of AI [https://hai.stanford.edu/news/reexamining-fair-use-age-ai] (Andrew Myers for Stanford HAI)
* Copyright Fair Use Regulatory Approaches in AI Content Generation [https://techpolicy.press/copyright-fair-use-regulatory-approaches-in-ai-content-generation/] (Ariel Soiffer and Aric Jain for Tech Policy Press, August 2023)
* Japan's AI Data Laws, Explained [https://www.deeplearning.ai/the-batch/japan-ai-data-laws-explained/] (Deeplearning.ai)
* PDF: Generative Artificial Intelligence and Copyright Law [https://crsreports.congress.gov/product/pdf/LSB/LSB10922] (Congressional Research Service, September 2023)

Academic and Creative "Honesty"

* How it started: New AI classifier for indicating AI-written text [https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text] (Kirchner et al., January 2023)
* How it's going: OpenAI Quietly Shuts Down Its AI Detection Tool [https://decrypt.co/149826/openai-quietly-shutters-its-ai-detection-tool] (Jason Nelson for Decrypt)
* AI Homework [https://stratechery.com/2022/ai-homework/] (Ben Thompson on Stratechery, December 2022)
* Teaching With AI [https://openai.com/blog/teaching-with-ai] (OpenAI, August 2023)

Human Costs of AI Training

(We're picking on OpenAI here, but RLHF and similar fine-tuning techniques are employed by many, if not most, LLM developers.)

* Cleaning Up ChatGPT Takes Heavy Toll on Human Workers [https://www.wsj.com/articles/chatgpt-openai-content-abusive-sexually-explicit-harassment-kenya-workers-on-human-workers-cf191483] (Karen Hao and Deepa Seetharaman for the Wall Street Journal)
* 'It's destroyed me completely': Kenyan moderators decry toll of training of AI models [https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai] (Niamh Rowe in The Guardian, August 2023)
* He Helped Train ChatGPT. It Traumatized Him. [https://www.bigtechnology.com/p/he-helped-train-chatgpt-it-traumatized] (Alex Kantrowitz in his publication Big Technology, May 2023)
* https://www.nytimes.com/2023/09/25/technology/chatgpt-rlhf-human-tutors.html

Big Questions

* Open questions for AI engineering [https://simonwillison.net/2023/Oct/17/open-questions/] (Simon Willison, October 2023)

ADAM SMITH AND THE PIN FACTORY

* 📚 An Inquiry into the Nature and Causes of the Wealth of Nations by Adam Smith [https://www.gutenberg.org/ebooks/3300] (via Project Gutenberg)
* Division of Labor and Specialization [https://www.econlib.org/library/topics/highschool/divisionoflaborspecialization.html] (Econlib)
* Adam Smith and the Pin Factory [https://www.johnkay.com/2019/12/18/adam-smith-and-the-pin-factory/] (John Kay on his blog, December 2019)
* The Pin Factory [https://www.adamsmithworks.org/pin_factory.html] (Adam Smith Works, a project of the Liberty Fund)
* Adam Smith and Pin-making: Some Inconvenient Truths [https://conversableeconomist.com/2022/08/23/adam-smith-and-pin-making-some-inconvenient-truths/] (Timothy Taylor publishing on his blog, Conversable Economist, August 2022)

ALSO MENTIONED

TRAINING DATASETS

* The Pile [https://pile.eleuther.ai/]
* ImageNet [https://www.image-net.org/]

ARTICLES AND BOOKS

* (Alleged) leaked Google memo mentioned at around 48:05: Google "We Have No Moat, And Neither Does OpenAI" [https://www.semianalysis.com/p/google-we-have-no-moat-and-neither] (SemiAnalysis publishing a document, allegedly written by a Google staff member, which suggested that open source advancements pose an existential threat to both Google and OpenAI. May 2023)
* Infinite Jest [https://en.wikipedia.org/wiki/Infinite_Jest] (Wikipedia page on the 1996 novel by the late David Foster Wallace), mentioned around 54:30

HISTORY OF FAXES VS. EMAILS

In reference to 1:08:30 or thereabouts, a fun fact that blew our minds: the earliest instances of facsimile transmission stretch back to the 19th century, and the core technology matured through the 1940s. Xerox Corporation introduced its first commercial fax machines in the mid-1960s. The first email was sent in 1971.

GENERAL

Subscribe to Fully Vested at FullyVested.co or through your podcast app of choice.

MARKETS RETRACE THEIR STEPS

* Throughout January 2022, US stock market indices gave up nearly a year of gains.
* The tech-heavy NASDAQ Composite index closed on January 28th at 13,770, down nearly 14% from highs set in November 2021.
* Why?
* Nearly a decade of expansionist monetary policy is likely coming to an end.
* Inflation is at its highest level in the past 40 years.
* US Federal Reserve chair Jerome Powell signaled intent to raise interest rates in an effort to rein in inflation. (Goldman Sachs anticipates 5 rate hikes in 2022 [https://www.reuters.com/world/us/goldman-sachs-expecting-five-rate-hikes-this-year-2022-01-29/], expecting rates to hit between 1.25% and 1.5% by year's end.)

WHAT COULD THIS MEAN FOR THE VC MARKET?

* Historically, higher interest rates give more conservative investors a path toward a more risk-off strategy, diverting capital from alternative asset classes like venture capital and private equity.
* Higher interest rates lead to a higher discount rate on future cash flows, which could depress equity values today.
* That said, it still feels like investor risk appetite is close to all-time highs. (It might not be quite as hot as it was in Q2-Q3 2021, but still...)
* Jason's conjecture: If we're going to see a significant decline in either check size or deal volume, it's probably going to come at the late stage first. So much value is locked into late-stage unicorn companies, and with the IPO market cooling down and a dwindling pool of entities that could afford to buy these companies, we can probably expect investor trepidation to hit the late stage first.

SOME READING

* The Great Deflate [https://500ish.com/the-great-deflate-6bc847df9809] (M.G. Siegler, on his blog 500ish)
* Excerpt: "As such, I might suggest that for the 20th year in a row, we're not going to see a tech bubble burst. Because it's not and never was a bubble. Instead, perhaps it's best to think of it more like a balloon. And while those too can pop, they can also deflate over time. This feels like a more apt analogy for what is happening here. The air which had inflated earnings multiples to the Moon in tech is slowly but surely coming out, returning the balloon closer to Earth."
* Vision Fund CEO Says Private Markets Are 'Overvalued' [https://www.bloomberg.com/news/articles/2022-01-20/vision-fund-ceo-says-private-markets-are-overvalued] (Sarah McBride, for Bloomberg. January 20, 2022)
* Key quotes: "According to CB Insights data, venture dollars spent on startups exceeded $600 billion in 2021, more than double the previous year's highs. Valuations have soared, too. There are currently more than 900 startups with valuations of over $1 billion, CB Insights found."
* "'That gap is going to tighten over the next six months,' [Misra] said of the discrepancy between public and private markets."
* The great startup reset: Why founders should prepare for lower valuations [https://www.geekwire.com/2022/the-great-startup-reset-why-founders-should-prepare-for-lower-valuations/] (Charles Fitzgerald for GeekWire)
* 3 views: How should founders prepare for a decline in startup valuations and investor interest? [https://techcrunch.com/2022/01/26/3-views-how-should-founders-prepare-for-a-decline-in-startup-valuations-and-investor-interest/] (Alex Wilhelm, Natasha Mascarenhas, and Mary Ann Azevedo for TechCrunch)
* Calling the startup valuation peak [https://www.axios.com/private-market-valuation-venture-capital-751e2e87-dcfa-46c8-bf06-cc9b7ffa342c.html] (Dan Primack for Axios)
* Dear VCs: If you want startup prices to come down, stop paying higher prices [https://techcrunch.com/2022/01/13/dear-vcs-if-you-want-startup-prices-to-come-down-stop-paying-higher-prices/] (Alex Wilhelm for TechCrunch)

ABOUT THE CO-HOSTS

* Jason D. Rowley is a researcher who has previously worked with Uzabase, Golden.com, Crunchbase News, and others. He volunteers with startup outreach for the open-source community and sends occasional newsletters from Rowley.Report [http://rowley.report/].
* Graham C. Peck is a Venture Partner with Cultivation Capital [https://cultivationcapital.com/] and additionally helps companies build technology development teams as a partner of FYC Labs [https://www.fyclabs.com/] and other technology development organizations.

GENERAL

Subscribe to Fully Vested at FullyVested.co or through your podcast app of choice.

STATING THE OBVIOUS ABOUT THE INCREASE

* 2021 was a crazy year for the VC market.
* Top-line numbers:
* CB Insights reports that venture investment reached $621 billion in 2021, up 111% from 2020 levels.
* Crunchbase News' data points in a similar direction, finding $643 billion worth of global venture investment last year, up 92% year on year.
* Superbly Supergiant: CB Insights finds there were 1,556 VC rounds of $100M or more, up 147% from 2020's all-time high of 630.
* Unicorn Watch: Crunchbase News pegged the rate of new unicorn creation at "more than ten each week" in 2021. CB Insights says there are now 959 unicorns as of the end of Q4 2021, up 69% from the 569 tallied at the end of Q4 2020.
* The most active investors include Tiger Global Management, SoftBank Investment Advisors, Andreessen Horowitz, and Insight Partners.

SOME READING

* Data show 2021 was a bonkers, record-setting year for venture capital [https://techcrunch.com/2022/01/12/data-show-2021-was-a-bonkers-record-setting-year-for-venture-capital/] (Alex Wilhelm and Anna Heim, for TechCrunch)
* Global Venture Funding And Unicorn Creation In 2021 Shattered All Records [https://news.crunchbase.com/news/global-vc-funding-unicorns-2021-monthly-recap/] (Gene Teare, for Crunchbase News)
* Venture Capital 2021 Recap: A Record-Breaking Year [https://insight.factset.com/venture-capital-2021-recap-a-record-breaking-year] (Haley Bryan, for FactSet)
* Six charts that show 2021's record year for US venture capital [https://pitchbook.com/news/articles/2021-record-year-us-venture-capital-six-charts] (Priyamvada Mathur for PitchBook)

ABOUT THE CO-HOSTS

* Jason D. Rowley is a researcher who has previously worked with Uzabase, Golden.com, Crunchbase News, and others. He volunteers with startup outreach for the open-source community and sends occasional newsletters from Rowley.Report [http://rowley.report/].
* Graham C. Peck is a Venture Partner with Cultivation Capital [https://cultivationcapital.com/] and additionally helps companies build technology development teams as a partner of FYC Labs [https://www.fyclabs.com/] and other technology development organizations.

GENERAL
Subscribe to Fully Vested at FullyVested.co or through your podcast app of choice.

THE PRESENT AND FUTURE OF WORK
The first cases of COVID-19, the disease caused by a novel coronavirus called SARS-CoV-2, were identified in Wuhan, China as early as November 2019. A few months later, in early 2020, much of the world went into lockdown. As the COVID-19 crisis approaches its second full year, it's becoming increasingly clear that the way we work has changed, if not forever then at least for the next five or ten years. This is especially true for knowledge workers.

TECHNOLOGY AS AN ENABLER
* It's hard to imagine what this pandemic would've been like had it happened in, say, 1989.
* Tools and services like video conferencing, workplace chat, remote event/conference platforms, and cloud storage enable partially or fully distributed work.

CONCERNS ABOUT THE ROLE OF TECH
* "Zoom fatigue" and other phenomena are very real.
* Some companies are implementing systems to surveil workers.

FLEXIBILITY AS A MEGA-TREND IN KNOWLEDGE WORK
* Especially before vaccines became widely available, professionals often had to balance work, family, and personal needs all from one place: their homes. Folks with children needed to meet their professional obligations while also serving as teachers or supervisors for school-aged kids engaged in remote learning. These circumstances necessitated increased flexibility in work arrangements.
* With more time away from the office, more workers are coming to terms with how much mental bandwidth commuting to an office and spending the majority of the day there has taken up.

READING LIST
* Is Going To The Office A Broken Way of Working? [https://www.newyorker.com/culture/office-space/is-going-to-the-office-a-broken-way-of-working] (Cal Newport in conversation with Chris Herd, in the New Yorker, September 2021)
* WFH Doesn't Have to Dilute Your Corporate Culture [https://hbr.org/2021/02/wfh-doesnt-have-to-dilute-your-corporate-culture] (Pamela Hinds and Brian Elliott for the Harvard Business Review, February 2021)
* Work can be better post-COVID-19. Here's what employers need to know [https://www.weforum.org/agenda/2021/09/work-can-be-better-post-covid-heres-how/] (Stephen Ratcliffe and Julia Wilson for the World Economic Forum, September 2021)
* Remote Work Persisting and Trending Permanent [https://news.gallup.com/poll/355907/remote-work-persisting-trending-permanent.aspx] (Lydia Saad and Ben Wigert for Gallup)
* Remote Work Trends You Should Not Ignore in 2021 [https://medium.com/age-of-awareness/remote-work-trends-you-should-not-ignore-in-2021-44db3a4717d2] (Dmitry Chekalin, publishing on Medium, April 2021)
* Remote Work Can Be Better for Innovation Than In-Person Meetings [https://www.scientificamerican.com/article/remote-work-can-be-better-for-innovation-than-in-person-meetings/] (Gleb Tsipursky for Scientific American, October 2021)

ABOUT THE CO-HOSTS
* Jason D. Rowley is a researcher who has previously worked with Uzabase, Golden.com, Crunchbase News, and others. He volunteers with startup outreach for the open-source community and sends occasional newsletters from Rowley.Report [http://rowley.report/].
* Graham C. Peck is a Venture Partner with Cultivation Capital [https://cultivationcapital.com/] and additionally helps companies build technology development teams in partnership with Brightgrove [https://www.brightgrove.com/] and other technology development organizations.