The Podlets - A Cloud Native Podcast



The Podlets is a weekly show that explores cloud native, one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, trade-offs, and lessons learned to help you on your cloud native journey. This space moves fast, and we shouldn’t reinvent the wheel. If you are an engineer, operator, or technically minded decision maker, this podcast is for you! Find us at https://thepodlets.io.


All episodes

23 episodes
Orchestrators and Runtimes (Ep 22)

Due to its vast array of capabilities and applications, it can be challenging to pinpoint the exact essence of what Kubernetes does. Today we are asking questions about orchestration and runtimes and trying to see if we can agree on whether Kubernetes primarily does one or the other, or even something else. Kubernetes may, for instance, be better described as a platform! In order to answer these questions, we look at what constitutes an orchestrator, thinking about management, workflow, and security across a network of machines. We also get into the nitty-gritty of runtimes and how Kubernetes can fit into this model too. The short answer to our initial question is that whether something is a platform, an orchestrator, or a runtime depends on your perspective and tasks, and Kubernetes can fit into any one of these buckets. We also look at other platforms, past or present, that might be compared to Kubernetes in certain areas and see what this might tell us about the definitions. Ultimately, we come away with the message that the exact way you slice what Kubernetes does is not all-important. Rigid definitions might not serve us so well; rather, our focus should be on an evolving understanding of these terms and the broadening horizon of what Kubernetes can achieve. For a really interesting meditation on how far we can take the Kube, be sure to join us today! Follow us: https://twitter.com/thepodlets [https://twitter.com/thepodlets] Website: https://thepodlets.io [https://thepodlets.io] Feedback: info@thepodlets.io [info@thepodlets.io] https://www.notion.so/thepodlets/The-Podlets-Guest-Central-9cec18726e924863b559ef278cf695c9 [https://www.notion.so/thepodlets/The-Podlets-Guest-Central-9cec18726e924863b559ef278cf695c9] Hosts: https://twitter.com/mauilion [https://twitter.com/mauilion] https://twitter.com/joshrosso [https://twitter.com/joshrosso] https://twitter.com/embano1 [https://twitter.com/embano1] https://twitter.com/opowero [https://twitter.com/opowero] Key Points From This Episode: * What defines an orchestrator? Different kinds of management, workflow, and security. * Considerations in a big company that go into licensing, security and desktop content. * Can we actually call Kubernetes an orchestrator or a runtime? * How managing things at scale increases the need for orchestration. * An argument for Kubernetes being considered an orchestrator and a runtime. * Understanding runtimes as part of the execution environment and not the entire environment. * How platforms, orchestration, and runtimes change positions according to perspective. * Remembering the 'container orchestration wars' between Mesos, Swarm, Nomad, and Kubernetes. * The effect of containerization and faster release cycles on the application of updates. * Instances where Kubernetes might not be used for orchestration currently. * The increasingly lower levels at which you can view orchestration and containers. * The great job that Kubernetes is able to do in the orchestration and automation layer. * How Kubernetes removes the need to reinvent everything over and over again. * Breaking down rigid definitions and allowing some space for movement.
Quotes: “Obviously, orchestrator is a very big word, it means lots of things but as we’ve already described, it’s hard to fully encapsulate what orchestration means at a lower level.” — @mauilion [https://twitter.com/mauilion] [0:16:30] “I wonder if there is any guidance or experiences we have with determining when you might need an orchestrator.” — @joshrosso [https://twitter.com/joshrosso] [0:28:32] “Sometimes there is an element of over-automation; some people don’t want all of this automation happening in the background.” — @opowero [https://twitter.com/opowero] [0:29:19] Links Mentioned in Today’s Episode: Apache Airflow — https://airflow.apache.org/ [https://airflow.apache.org/] SSCM — https://www.lynda.com/Microsoft-365-tutorials/SSCM-Office-Deployment-Tool/5030992/2805770-4.html [https://www.lynda.com/Microsoft-365-tutorials/SSCM-Office-Deployment-Tool/5030992/2805770-4.html] Ansible — https://www.ansible.com/ [https://www.ansible.com/] Docker — https://www.docker.com/ [https://www.docker.com/] Joe Beda — https://www.vmware.com/latam/company/leadership/joe-beda.html [https://www.vmware.com/latam/company/leadership/joe-beda.html] Jazz Improvisation over Orchestration — https://blog.heptio.com/core-kubernetes-jazz-improv-over-orchestration-a7903ea92ca?gi=5c729e924f6c [https://blog.heptio.com/core-kubernetes-jazz-improv-over-orchestration-a7903ea92ca?gi=5c729e924f6c] containerd — https://containerd.io/ [https://containerd.io/] AWS — https://aws.amazon.com/ [https://aws.amazon.com/] Fleet — https://coreos.com/fleet/docs/latest/launching-containers-fleet.html [https://coreos.com/fleet/docs/latest/launching-containers-fleet.html] Puppet — https://puppet.com/ [https://puppet.com/] HashiCorp Nomad — https://nomadproject.io/ [https://nomadproject.io/] Mesos — http://mesos.apache.org/ [http://mesos.apache.org/] Swarm — https://docs.docker.com/get-started/swarm-deploy/ [https://docs.docker.com/get-started/swarm-deploy/] Red Hat — https://www.redhat.com/en [https://www.redhat.com/en] Zalando — https://zalando.com/ [https://zalando.com/] See omnystudio.com/listener [https://omnystudio.com/listener] for privacy information.

20 Jul 2020 - 41 min
Kubernetes Sucks for Developers, Right? No. (Ep 21)

We are joined by Ellen Körbes for this episode, where we focus on Kubernetes and its tooling. Ellen has a position at Tilt where they work in developer relations. Before Tilt, they were doing closely related kinds of work at Garden, a similar company! Both companies are directly related to working with Kubernetes and Ellen is here to talk to us about why Kubernetes does not have to be the difficult thing that it is made out to be. According to them, this mostly comes down to tooling. Ellen believes that with the right set of tools at your disposal, it is not actually necessary to completely understand all of Kubernetes or even be familiar with a lot of its functions. You do not have to start from the bottom every time you start a new project, and developers who are new to Kubernetes need not become experts in it in order to take advantage of its benefits. The major goal for Ellen and Tilt is to get developers' code up, running, and live in as quick a time as possible. When the system is standing in the way, this process can take much longer, whereas with Tilt, Ellen believes the process should take around two seconds! Ellen comments on who should be using Kubernetes and who it would most benefit. We also discuss where Kubernetes should be run, either locally or externally, for best results and Tilt's part in the process of unit testing and feedback. We finish off peering into the future of Kubernetes, so make sure to join us for this highly informative and empowering chat! Follow us: https://twitter.com/thepodlets [https://twitter.com/thepodlets] Website: https://thepodlets.io [https://thepodlets.io] Feedback: info@thepodlets.io [info@thepodlets.io] https://www.notion.so/thepodlets/The-Podlets-Guest-Central-9cec18726e924863b559ef278cf695c9 [https://www.notion.so/thepodlets/The-Podlets-Guest-Central-9cec18726e924863b559ef278cf695c9] Guest: * Ellen Körbes https://twitter.com/ellenkorbes [https://twitter.com/ellenkorbes] Hosts: * Carlisia Campos [https://twitter.com/carlisia] * Bryan Liles [https://twitter.com/bryanl] * Olive Power [https://twitter.com/opowero] Key Points From This Episode: * Ellen's work at Tilt and the jumping-off point for today's discussion. * The projects and companies that Ellen and Tilt work with, that they are allowed to mention! * Who Ellen is referring to when they say 'developers' in this context. * Tilt's goal of getting all developers' code up and running in the two seconds range. * Who should be using Kubernetes? Is it necessary in development if it is used in production? * Operating and deploying Kubernetes — who is it that does this? * Where developers seem to be running Kubernetes; considerations around space and speed. * Possible security concerns using Tilt; avoiding damage through Kubernetes options. * Allowing greater possibilities for developers through useful shortcuts. * VS Code extensions and IDE integrations that are possible with Kubernetes at present. * Where to start with Kubernetes and getting a handle on the tooling like Tilt. * Using unit testing for feedback and Tilt's part in this process. * The future of Kubernetes tooling and looking across possible developments in the space.
Quotes: “You're not meant to edit Kubernetes YAML by hand.” — @ellenkorbes [https://twitter.com/ellenkorbes?lang=en][0:07:43] “I think from the point of view of a developer, you should try and stay away from Kubernetes for as long as you can.” — @ellenkorbes [https://twitter.com/ellenkorbes?lang=en][0:11:50] “I've heard from many companies that the main reason they decided to use Kubernetes in development is that they wanted to mimic production as closely as possible.” — @ellenkorbes [https://twitter.com/ellenkorbes?lang=en][0:13:21] Links Mentioned in Today’s Episode: Ellen Körbes — http://ellenkorbes.com/ [http://ellenkorbes.com/] Ellen Körbes on Twitter — https://twitter.com/ellenkorbes?lang=en [https://twitter.com/ellenkorbes?lang=en] Tilt — https://tilt.dev/ [https://tilt.dev/] Garden — https://garden.io/ [https://garden.io/] Cluster API — https://cluster-api.sigs.k8s.io/ [https://cluster-api.sigs.k8s.io/] Lyft — https://www.lyft.com/ [https://www.lyft.com/] KubeCon — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/ [https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/] Unu Motors — https://unumotors.com/en [https://unumotors.com/en] Mindspace — https://www.mindspace.me/ [https://www.mindspace.me/] Docker — https://www.docker.com/ [https://www.docker.com/] Netflix — https://www.netflix.com/ [https://www.netflix.com/] GCP — https://cloud.google.com/ [https://cloud.google.com/] Azure — https://azure.microsoft.com/en-us/ [https://azure.microsoft.com/en-us/] AWS — https://aws.amazon.com/ [https://aws.amazon.com/] ksonnet — https://ksonnet.io/ [https://ksonnet.io/] Ruby on Rails — https://rubyonrails.org/ [https://rubyonrails.org/] Lambda – https://aws.amazon.com/lambda/ [https://aws.amazon.com/lambda/] DynamoDB — https://aws.amazon.com/dynamodb/ [https://aws.amazon.com/dynamodb/] Telepresence — https://www.telepresence.io/ [https://www.telepresence.io/] Skaffold Google — https://cloud.google.com/blog/products/application-development/kubernetes-development-simplified-skaffold-is-now-ga [https://cloud.google.com/blog/products/application-development/kubernetes-development-simplified-skaffold-is-now-ga] Python — https://www.python.org/ [https://www.python.org/] REPL — https://repl.it/ [https://repl.it/] Spring — https://spring.io/community [https://spring.io/community] Go — https://golang.org/ [https://golang.org/] Helm — https://helm.sh/ [https://helm.sh/] Pulumi — https://www.pulumi.com/ [https://www.pulumi.com/] Starlark — https://github.com/bazelbuild/starlark [https://github.com/bazelbuild/starlark] Transcript: EPISODE 22 [ANNOUNCER] Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision-maker, this podcast is for you. [EPISODE] [0:00:41.8] CC: Hi, everybody. This is The Podlets. We are back this week with a special guest, Ellen Körbes. Ellen will introduce themselves in a little bit. Also on the show, it’s myself, Carlisia Campos, Michael Gasch and Duffie Cooley. [0:00:57.9] DC: Hey, everybody. [0:00:59.2] CC: Today’s topic is Kubernetes Sucks for Developers, right? No. Ellen is going to introduce themselves now and tell us all about what that even means. [0:01:11.7] EK: Hi. I’m L. 
I do developer relations at Tilt. Tilt is a company whose main focus is development experience when it comes to Kubernetes and multi-service development. Before Tilt, I used to work at Garden. They basically do the same thing, it's just a different approach. That is basically the topic that we're going to discuss, the fact that Kubernetes does not have to suck for developers. You just need to – you need some hacks and fixes and tools and then things get better. [0:01:46.4] DC: Yeah, I’m looking forward to this one. I've actually seen Tilt being used in some pretty high-profile open source projects. I've seen it being used in Cluster API and some of the work we've seen there and some of the other ones. What are some of the larger projects that you are aware of that are using it today? [0:02:02.6] EK: Oh, boy. That's complicated, because every company has a different policy as to whether I can name them publicly or not. Let's go around that question a little bit. You might notice that Lyft has a talk at KubeCon, where they're going to talk about Tilt. I can't tell you right now that they use Tilt, but there's that. Hopefully, I found a legal loophole here. I think they're the biggest name that you can find right now. Cluster API is of course huge and Cluster API is fun, because the way they're doing things is very different. We're used to seeing mostly companies that do apps in some way or another, like websites, phone apps, etc. Then Cluster API is completely insane. It's something else totally. There's tons of other companies. I'm not sure which ones that are large I can name specifically. There are smaller companies. Unu Motors, they do electric motorcycles. It's a company here in Berlin. They have 25 developers. They’re using Tilt. We have very tiny companies, like Mindspace, their studio in Tucson, Arizona. They also use Tilt and it's a three-person team. We have the whole spectrum, from very, very tiny companies that are using Docker for Mac and pretty happy with it, all the way up to huge companies with their own fleet of development clusters and all of that and they're using Tilt as well. [0:03:38.2] DC: That field is awesome. [0:03:39.3] MG: Quick question, Ellen. The title says ‘developers’. Developers is a pretty broad name. I have people saying that okay, Kubernetes is too raw. It's more like a Linux kernel that we want this past experience. Our business developers, our application developers are developing in there. How would you do describe developer interfacing with Kubernetes using the tools that you just mentioned? Is it the traditional enterprise developer, or more Kubernetes developers developing on Kubernetes? [0:04:10.4] EK: No. I specifically mean not Kubernetes developers. You have people work in Kubernetes. For example, the Cluster API folks, they're doing stuff that is Kubernetes specific. That is not my focus. The focus is you’re a back-end developer, you’re a front-end developer, you're the person configuring, I don't know the databases or whatever. Basically, you work at a company, you have your own business logic, you have your own product, your own app, your own internal stuff, all of that, but you're not a Kubernetes developer. It just so happens that if the stuff you are working on is going to be pointing at Kubernetes, it's going to target Kubernetes, then one, you're the target developer for me, for my work. Two, usually you're going to have a hard time doing your job. We can talk a bit about why. One issue is development clusters. 
If you're using Kubernetes in prod, rule of thumb, you should be using Kubernetes in dev, because you don't want completely separate environments where things work in your environment as a developer and then you push them and they break. You don't want that. You need some development cluster. The type of cluster that that's going to be is going to vary according to the level of complexity that you want and that you can deal with. Like I said, some people are pretty happy with Docker for Mac. I hear all the time these complaints that, “Oh, you're running Kubernetes on your machine. It's going to catch fire.” Okay, there's some truth to that, but also it depends on what you're doing. No one tries to run Netflix, let's say the whole Netflix on their laptop, because we all know that's not reasonable. People try to do similar things on their mini-Kube, or Docker for Mac. Then it doesn't work and they say, “Oh, Kubernetes on the laptop doesn't work.” No. Yeah, it does. Just not for you. That's a complaint I particularly dislike, because it comes from a – it's a blanket statement that has no – let's say, no facts behind it. Yeah, if you're a small company, Docker for Mac is going to work fine for you. Let's say you have a beefy laptop with 30 gigs of ram, you can put a lot of software in 30 gigs. You can put a lot of microservices in 30 gigs. That's going to work up to a point and then it's going to break. When it breaks, you're going to need to move to a cloud, you're going to need to do remote development and then you're going to Go to GCP, or Azure, or Amazon. You're going to set up a cluster there. Some people use the managed Kubernetes options. Some people just spin up a bunch of machines and wire up Kubernetes by themselves. That's going to depend on basically how much you have in terms of resources and in terms of needs. Usually, keeping up a remote cluster that works is going to demand more infrastructure work. You're going to need people who know how to do that, to keep an eye on that. There's all the billing aspect, which is you can run Docker for Mac all day and you're not going to pay extra. If you leave a bunch of stuff running on Google, you're going to have a bill at the end of the month that you need to pay attention to. That is one thing for people to think about. Another aspect that I see very often that people don't know what to do with is config files. You scroll Twitter, you scroll Kubernetes Twitter for five minutes and there's a joke about YAML. We all hate editing YAML. Again, the same way people make jokes about using about Kubernetes setting your laptop on fire, I would argue that you're not meant to edit Kubernetes YAML by hand. The tooling for that is arguably not as mature as the tooling when it comes to Kubernetes clusters to run on your laptop. You have stuff like YAML templates, you have ksonnet. I think there's one called customize, but I haven't used it myself. What I see in every company from the two-person team to the 600 person team is no one writes Kubernetes YAML by hand. Everyone uses a template solution, a templating solution of some sort. That is the first thing that I always tell people when they start making jokes about YAML, is if you’re editing YAML by hand, you're doing it wrong. You shouldn't do that in the first place. It's something that you set up once at some point and you look at it whenever you need to. On your day-to-day when you're writing your code, you should not touch those files, not by hand. 
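To make the templating point above concrete, here is a minimal, hypothetical sketch of what "not editing Kubernetes YAML by hand" can look like in a Tiltfile, Tilt's Starlark-based config. The directory layout, image names, and chart paths are invented for illustration; k8s_yaml, kustomize, and helm are Tiltfile built-ins in recent Tilt releases.

```python
# Hypothetical Tiltfile sketch (Starlark): register generated manifests with
# Tilt instead of maintaining hand-edited YAML per environment.

# Render a kustomize overlay for the dev environment; the base YAML is written
# once and only the overlay differs between dev and prod.
k8s_yaml(kustomize('deploy/overlays/dev'))

# Or, for a Helm-packaged service, render the chart with dev-specific values:
# k8s_yaml(helm('charts/myapp', values=['deploy/values-dev.yaml']))
```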
[0:08:40.6] CC: We're five minutes in and you threw so much at us. We need to start breaking some of this stuff down. [0:08:45.9] EK: Okay. Let me throw you one last thing then, because that is what I do personally. One more thing that we can discuss is the development feedback loop. You're writing your code, you're working on your application, you make a change to your code. How much work is it for you to see that new line of code that you just wrote live and running? For most people, it's a very long journey. I asked that on Twitter, a lot of people said it was over half an hour. A very tiny amount of people said it was between five minutes and half an hour and only a very tiny fraction of people said it was two seconds or less. The goal of my job, of my work, the goal of Tilt, the tool, which is made by the company I work for, also called Tilt, is to get everyone in that two seconds range. I've done that on stage and talks, where we take an application and we go from, “Okay, every time you make a change, you need to build a new Docker image. You need to push it to a registry. You need to update your cluster, blah, blah, blah, and that's going to take minutes, whole minutes.” We take that from all that long and we dial it down to a couple seconds. You make a change, or save your file, snap your fingers and poof, it's up and running, the new version of your app. It's basically a real-time, perceptually real-time, just like back and when everyone was doing Ruby on Rails and you would just save your file and see if it worked basically instantly. That is the part of this discussion that personally I focus more on. [0:10:20.7] CC: I'm going to love to jump to the how in a little bit. I want to circle back to the beginning. I love the question that Michael asked at the beginning, what is considered developer, because that really makes a difference, first to understand who we are talking about. I think this conversation can go in circles and not that I'm saying we are going circles, but this conversation out in the wild can go in circles. Until we have an understanding of the difference between can you as a developer use Kubernetes in a somewhat not painful way, but should you? I'm very interested to get your take and Michael and Duffie’s take as well as far as should we be doing this and should all of the developers will be using Kubernetes through the development process? Then we also have to consider people who are not using Kubernetes, because a lot of people out there are not using communities. For developers and special, they hear Kubernetes is painful and definitely enough for developers. Obviously, that is not going to qualify Kubernetes as a tool that they’re going to look into. It's just not motivating. If there is anything that that would make people motivated to look into Kubernetes that would be beneficial for them not just for using Kubernetes for Kubernetes sake, but would it be useful? Basically why? Why would it be useful? [0:11:50.7] EK: I think from the point of view of a developer, you should try and stay away from Kubernetes for as long as you can. Kubernetes comes in when you start having issues of scale. It's a production matter, it's not a development matter. I don't know, like a DevOps issue, operations issue. Ideally, you put off moving your application to Kubernetes as long as possible. This is an opinion. We can argue about this forever. Just because it introduces a lot of complexity and if you don't need that complexity, you should probably stay away from it. 
To get to the other half of the question, which is if you're using Kubernetes in production, should you use Kubernetes in development? Now here, I'm going to say yes a 100% of the time. Blanket statement of course, we can argue about minutiae, but I think so. Because if you don't, you end up having separate environments. Let's say you're using Docker Compose, because you don't like Kubernetes. You’re using Kubernetes in production, so in development you are going to need containers of some sort. Let's say you're using Docker Compose. Now you're maintaining two different environments. You update something here, you have to update it there. One day, it's going to be Friday, you're going to be tired, you're going to update something here, you're going to forget to update something there, or you're going to update something there and it's going to be slightly different. Or maybe you're doing something that has no equivalent between what you're using locally and what you're using in production. Then basically, you're in trouble. I've heard from many companies that the main reason they decided to use Kubernetes in development is that they wanted to mimic production as closely as possible. One argument we can have here is that – oh, but if you're using Kubernetes in development, that's going to add a lot of overhead and you're not going to be able to do your job right. I agree that that was true for a while, but right now we have enough tooling that you can basically make Kubernetes disappear and you just focus on being a developer, writing your code, doing all of that stuff. Kubernetes is sitting there in the background. You don't have to think about it and you can just go on about your business with the advantage that now, your development environment and your production environment are going to very closely mimic each other, so you're not going to have issues with those potential disparities. [0:14:10.0] CC: All right. Another thing too is that I think we're making an assumption that the developers we are talking about are the developers that are also responsible for deployment. Sometimes that's the case, sometimes that's not the case and I'm going to shut up now. It would be interesting to talk about that too between all of us, is that what we see? Is that the case that now developers are responsible? It's like, developers DevOps is just so ubiquitous that we don't even consider differentiating between developers and ops people? All right? [0:14:45.2] DC: I think I have a different spin on that. I think that it's not necessarily that developers are the ones operating the infrastructure. The problem is that if your infrastructure is operated by a platform that may require some integration at the application layer to really hit its stride, then the question becomes how do you as a developer become more familiar? What is the user experience as of, or what I should say, what's the developer experience around that integration? What can you do to improve that, so that the developer can understand better, or play with how service discovery works, or understand better, or play with how the different services in their application will be able to interact without having to redefine that in different environments? Which is I think what Ellen point was. [0:15:33.0] EK: Yeah. At the most basic level, you have issues as such as you made a change to a service here, let's say on your local Docker Compose. Now you need to update your Kubernetes manifest on your cluster for things to make sense. 
Let's say, I don't know, you change the name of a service, something as simple as that. Even those kinds of things that sounds silly to even describe, when you're doing that every day, one day you're going to forget it, things are going to explode, you're not going to know why, you're going to lose hours trying to figure out where things went wrong. [0:16:08.7] MG: Also the same with [inaudible] maybe. Even if you use Kubernetes locally, you might run a later version of Kubernetes, maybe use kind for local development, but then your cluster, your remote cluster is on three or four versions behind. Shouldn't be because of the versions of product policy, but it might happen, right? Then APIs might be deprecated, or you're using different API. I totally agree with you, Ellen, that your development environment should reflect production as close as possible. Even there, you have to make sure that prod, like your APIs matches, API types matches and all the stuff right, because they could also break. [0:16:42.4] EK: You were definitely right that bugs are not going away anytime soon. [0:16:47.1] MG: Yeah. I think this discussion also remembers me of the discussion that the folks in the cloud will have with AWS Lambda for example, because there's similar, even though there are tools to simulate, or mimic these platforms, like serverless platforms locally, the general recommendation there is to embrace the cloud and develop in the cloud natively in the cloud, because that's something you cannot resemble. You cannot run DynamoDB locally. You could mimic it. You could mimic lambda runtimes locally. Essentially, it's going to be different. That's also a common complaint in the world of AWS and cloud development, which is it's really not that easy to develop locally, where you're supposed to develop on the platform that the code is being shipped and run on to, because you cannot run the cloud locally. It sounds crazy, but it is. I think the same is with Kubernetes, even though we have the tools. I don't think that every developer runs Kubernetes locally. Most of them maybe doesn't even have Docker locally, so they use some spring tools and then they have some pipeline and eventually it gets shipped as a container part in Kubernetes. That's what I wanted to throw in here as more like a question experience, especially for you Ellen with these customers that you work with, what are the different profiles that you see from the maturity perspective and these customers large enterprises might be different and the smaller ones that you mentioned. How do you see them having different requirements, as also Carlisia said, do they do ops, or DevOps, or is it strictly separated there, especially in large enterprises? [0:18:21.9] EK: What I see the most, let's get the last part first. [0:18:24.6] MG: Yeah, it was a lot of questions. Sorry for that. [0:18:27.7] EK: Yeah. When it comes to who operates Kubernetes, who deploys Kubernetes, definitely most developers push their code to Kubernetes themselves. Of course, this involves CI and testing and PRs and all of that, so it's not you can just go crazy and break everything. When it comes to operating the production cluster, then that's separate. Usually, you have someone writing code and someone else operating clusters and infrastructure. Sometimes it's the same person, but they're clearly separate roles, even if it's the same person doing it. Usually, you go from your IDE to PR and that goes straight into production once the whole process is done. 
Now we were talking about workflows and Lambda and all of that. I don't see a good solution for lambda, a good development experience for Lambda just yet. It feels a bit like it needs some refinement still. When it comes to Kubernetes, you asked do most developers run Kubernetes locally? Do they not? I don't know about the numbers, the absolute numbers. Is it most doing this, or most doing that? I'm not sure. I only know the companies I'm in touch with. Definitely not all developers run Kubernetes on their laptops, because it's a problem of scale. Right now, we are basically stuck with 30 gigs of RAM on our laptops. If your app is bigger than that, tough luck, you're not going to run it on the laptop. What most developers do is they still maintain a local development environment, where they can do development without going to CI. I think that is the main question. They maintain agility in their development process. What we usually see when you don't have Kubernetes on your laptop and you're using remote Kubernetes, so a remote development cluster in some cloud provider. What most people do and this is not the companies I talk to. This is basically everyone else. What most people will do is they make their development environment be the same, or work the same way as their production environment. You make a change to your code, you have to push a PR that has to get tested by CI. It has to get approved. Then it ends up in the Kubernetes cluster. Your feedback loop as a developer is insanely slow, because there's so much red tape between you changing a line of code and you getting a new process running in your cluster. Now when you use tools, I call the category MDX. I basically coined that category name myself. MDX is a multi-service development experience tooling. When you use MDX tools, and that's not just Tilt; it’s Tilt, it’s Garden where I used to work, people use telepresence like that. There is Scaffold from Google and so on. There's a bunch of tools. When you use a tool like that, you can have your feedback loop down to a second like I said before. I think that is the major improvement developers can do if they're using Kubernetes remotely and even if they’re using Kubernetes locally. I would guess most people do not run Kubernetes locally. They use remotely. We have clients who have clients — we have users who don't even have Docker on their local machines, because if you have the right tooling, you can change the files on your machine. You have tooling running that detects those five changes. It syncs those five changes to your cluster. The cluster then rebuilds images, or restarts containers, or syncs live code that's already running. Then you can see those changes reflected in your development cluster right, away even though you don't even have Docker in your machine. There's all of those possibilities. [0:22:28.4] MG: Do you see security issues with that approach with not knowing the architecture of Tilt? Even though it's just the development clusters, there might be stuff that could break, or you could break by bypassing the red tape as you said? [0:22:42.3] EK: Usually, we assign one user per namespace. Usually, every developer has a namespace. Kubernetes itself has enough options that if that's a concern to you, you can make it secure. Most people don't worry about it that much, because it's development clusters. They're not accessible to the public. Usually, there's – you can only access it through a VPN or something of that sort. 
We haven't heard about security issues so far. I'm sure they’re going to pop out at some point. I'm not sure how severe it’s going to be, or how hard it's going to be to fix. I am assuming, because none of this stuff is meant to be accessible to the wider Internet that it's not going to be a hard problem to tackle. [0:23:26.7] DC: I would like to back up for a second, because I feel we're pretty far down the road on what the value of this particular pattern is without really explaining what it is. I want to back this up for just a minute and talk about some of the things that a tooling like this is trying to solve in a use case model, right? Back in the day when I was learning Python, I remember really struggling with the idea of being able to debug Python live. I came across iPython, which is a REPL and that was – which was hugely eye-opening, because it gave me the ability to interact with my code live and also open me up to the idea that it was an improve over things like having to commit a new log line against a particular function and then push that new function up to a place where it would actually get some use and then be able to go look at that log line and see what's coming out of it, or do I actually have enough logs to even put together what went wrong. That whole set of use case is I think is somewhat addressed by tooling like this. I do think we should talk about how did we get here and how does that actually and how does like this address some of those things, and which use cases specifically is it looking to address. I guess where I'm going with this is to your point, so a tooling like Tilt, for example, is the idea that you can, as far as I understand it, inject into a running application, a new instance that would be able to – that you would have a local development control over. If you make a change to that code, then the instance running inside of your target environment would be represented by that new code change very quickly, right? Basically, solving the problem of making sure that you have some very quick feedback loop. I mean, functionally, that's the killer feature here. I think it’s really interesting to see tooling like that start to develop, right? Another example of that tooling would be the REPL thing, wherein instead of writing your code and compiling your code and seeing the output, you could do a thing where you're actually inside, running as a thread inside of the code and you can dump a data structure and you can modify that data structure and you can see if your function actually does the right thing, without having to go back and write that code while imagining all those data structures in your head. Basic tooling like this, I think is pretty killer. [0:25:56.8] EK: Yeah. I think one area where that is still partially untapped right now where this tooling could go, and I'm pushing it, but it's a process. It's not something we can do overnight, is to have very high-level patterns, the let's say codified. For example, everyone's copying Docker files and Kubernetes manifests and Terraform can take files, which I forgot what they're called. Everyone's copying that stuff from the Internet from other websites. That's cool. Oh, you need a container that does such-and-such and sets up this environment and provides these tools. Just download this image and everything is all set up for you. One area where I see things going is for us to have that same portability, but for development environments. 
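As a concrete illustration of the fast feedback loop Duffie and Ellen describe, here is a minimal, hypothetical Tiltfile sketch of the sync-into-a-running-container pattern. The image name, paths, resource name, and port are invented for illustration; docker_build, live_update, sync, k8s_yaml, and k8s_resource are part of Tilt's Starlark API.

```python
# Hypothetical Tiltfile sketch (Starlark): on a file change, Tilt copies the
# changed files into the already-running container instead of rebuilding and
# redeploying the image, which is what keeps the loop in the seconds range.
docker_build(
    'registry.example.com/myapp',          # hypothetical image name
    '.',
    live_update=[
        sync('./src', '/app/src'),         # push changed source straight into the pod
    ],
)

k8s_yaml('deploy/myapp.yaml')              # hypothetical manifest path
k8s_resource('myapp', port_forwards=8000)  # hypothetical resource name and port
```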
For example, I did this whole talk about how to take your Go code, your Go application from I don't know, a 30-seconds feedback loop where you're rebuilding an image every time you make a code change and all of that, down to 1 second. There's a lot of hacks in there that span all kinds of stuff, like should you use Go vendor, or should you keep your dependencies cached inside a Docker layer? Those kinds of things. Then I went down a bunch of those things and eventually came up with a workflow that was basically the best I could find in terms of development experience. What is the snappiest workflow? Or for example, you could have what is a workflow that makes it really easy to debug my Go app? You would use applications like Squash and that's a debugger that you can connect to a process running in a large container. Those kinds of things. If we can prepackage those and offer those to users and not just for Go and not just for debugging, but for all kinds of development workflows, I think that would be really great. We can offer those types of experiences to people who don't necessarily have the inclination to develop those workflows themselves. [0:28:06.8] DC: Yeah, I agree. I mean, it is interesting. I've had a few conversations lately about the fact that the abstraction layer of coding in the way that we think about it really hasn't changed over time, right? It's the same thing. That's actually a really relevant point. It's also interesting to think about with these sorts of frameworks and this tooling, it might be interesting to think of what else we can – what else we can enable the developer to have a feedback loop on more quickly, right? To your point, right? We talked about how these different environments, your development environment and your production environment, the general consensus is they should be as close as you can get them reasonably, so that the behavior in one should somewhat mimic the behavior in the other. At least that's the story we tell ourselves. Given that, it would also be interesting if the developer was getting feedback from effectively how the security posture of that particular cluster might affect the work that they're doing. You do actually have to define network policy. Maybe you don't necessarily have to think about it if we can provide tooling that can abstract that away, but at least you should be aware that it's happening so that you understand if it's not working correctly, this is where you might be able to see the sharp edges pop up, you know what I mean? That sort of thing. [0:29:26.0] EK: Yeah. At the last KubeCon, where was it? In San Diego. There was this running joke. I was running around with the security crowd and there was this joke about KubeCon applies security.yaml. It was in a mocking tone. I'm not disparaging their joke. It was a good joke. Then I was thinking, “What if we can make this real?” I mean, maybe it is real. I don't know. I don't do security myself. What if we can apply a comprehensive enough set of security measures, security monitoring, security scanning, all of that stuff, we prepackage it, we allow users to access all of that with one command, or even less than that, maybe you pre-configure it as a team lead and then everyone else in your team can just use it without even knowing that it's there. Then it just lets you know like, “Oh, hey. This thing you just did, this is a potential security issue that you should know about.” Yeah, I think coming up with these developer shortcuts, it's my hobby. 
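One way to read the Go-specific hacks mentioned above is to keep the compile step out of the image build entirely. The following is a hypothetical sketch of that pattern, under the assumption that a dev Dockerfile simply copies in a prebuilt binary; names and paths are illustrative, and local_resource, docker_build, and sync are Tiltfile built-ins.

```python
# Hypothetical Tiltfile sketch (Starlark) of a fast Go loop: compile on the
# host, then sync only the resulting binary into the running container.

# Rebuild the static binary locally whenever Go sources change.
local_resource(
    'compile-myapp',
    'CGO_ENABLED=0 GOOS=linux go build -o build/myapp ./cmd/myapp',
    deps=['cmd', 'pkg', 'go.mod', 'go.sum'],
)

# The dev image only copies the prebuilt binary; live_update then swaps the
# binary in place rather than rebuilding the image on every change. In
# practice you would also restart the process after the sync, for example via
# a run() step or Tilt's restart_process extension.
docker_build(
    'registry.example.com/myapp-go',       # hypothetical image name
    '.',
    dockerfile='deploy/Dockerfile.dev',    # hypothetical dev Dockerfile
    live_update=[
        sync('build/myapp', '/usr/local/bin/myapp'),
    ],
)

k8s_yaml('deploy/myapp.yaml')              # hypothetical manifest path
```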
[0:30:38.4] MG: That's cool. What you just mentioned Ellen and Duffie remembers me on – reminds me on the Spring community, the Spring framework, where a lot of the boilerplate, or beat security stuff, or connections, integrations, etc., is being abstracted away and you just annotate your code a bit and then some framework and Spring obviously, it's a spring framework. In your case Ellen, what you were hinting to is maybe this build environment that gives me these integration hooks where I just annotate. Or even those annotations could be enforced. Standards could be enforced if I don't annotate at all, right? I could maybe override them. Then this build environment would just pick it up, because it scans the code, right? It has the source code access, so I could just scan it and hook into it and then apply security policies, lock it down, see ports being used, maybe just open them up to the application, the other ones will automatically get blocked, etc., etc. It just came to my mind. I have not done any research there, or whether there's already some place or activity. [0:31:42.2] EK: Yeah. Because I won't shut up about this stuff, because I just love it, we are doing a – it's in a very early stage right now. We are doing a thing at Tile, we're calling extensions. Very creative name, I suppose. It's basically Go in parts, but for those were closed. It's still at a very early stage. We still have some road ahead of us. For example, we have – let's say this one user and they did some very special integration of Helm and Tilt. You don't have to use Helm by hand anymore. You can just make all of your Helm stuff happen automatically when you're using Tilt. Usually, you would have to copy I don't know, a 100 lines of code from your Tilt config file and copy that around for other developers to be able to use it. Now we have this thing that it's basically going parts where you can just say load extension and give it a name, it fetches it from a repository and it's running. I think that is basically an early stage of what you just described with Spring, but more geared towards let's say an infra-Kubernetes, like how do you tie infra-Kubernetes, that stuff with a higher level functionality that you might want to use? [0:33:07.5] MG: Cool. I have another one. Speaking of which, is there any other integrations for IDEs with Tilt? Because I know that VS code for example, has Kubernetes integrations, does the fabric aid and may even plugin, which handles some stuff under the covers. [0:33:24.3] EK: Usually, Tilt watches your code files and it doesn't care which IDEs you use. It has its own dashboard, which is just a page that you open on your browser. I have just heard this week. Someone mentioned on Slack that they wrote an extension for Tilt. I'm not sure if it was for VS code or the other VS code-like .NET editors. I don't remember what it’s called, but they have a family of those. I just heard that someone wrote one of those and they shared the repo. We have someone looking into that. I haven't seen it myself. The idea has come up when I was working at Garden, which is in the same area as Tilt. I think it's pertinent. We also had the idea of a VS code extension. I think the question is what do you put in the extension? What do you make the VS code extension do? Because both Tilt and Garden. They have their own web dashboards that show users what should be shown and in the manner that we think should be shown. 
If you're going to create a VS code extension, you either replicate that completely and you basically take this stuff that was in the browser and put it in the IDE. I don't particularly see much benefit in that. If enough people ask, maybe we'll do it, but it's not something that I find particularly useful. Either you do that and you replicate the functionality, or you come up with new functionality. In both cases, I just don't see a very strong point as to what different and IDE-specific functionality should you want. [0:35:09.0] MG: Yes. The reason why I was asking is that we see all these Pulumi, CDKs, AWS CDKs coming up, where you basically use a programming language to write your application/application infrastructure code and your IDE and then all the templating, that YAML stuff, etc., gets generated under covers. Just this week, AWS announced the CDKs, like the CDK basically for Kubernetes. I was thinking, with this happening where some of these providers abstract the scaffolding as well, including the build. You don't even have to build, because it's abstracted away under the covers. I was seeing this trend. Then obviously, we still have Helm and the templating and the customize and then you still have the manual as I mentioned in the beginning. I do like the IDE integration, because that's where I spend most of my time. Whenever I have to leave the IDE, it's a context switch that I have to go through. Even if it's just for opening another file also that I need to edit somewhere. That's why I think having IDE integration is useful for developers, because that's where they most spend up their time. As you said, there might be reasons to not do it in an IDE, because it's just replicating functionality that might not be useful there. [0:36:29.8] EK: Yeah. In the case of Tilt, all the config is written in Starlark, which is a language and it's basically Python. If your IDE can syntax highlight Python, it can syntax highlight the Tilt config files. About Pulumi and that stuff, I'm not that familiar. It's stuff that I know how it works, but I haven't used it myself. I'm not familiar with the browse and the IDE integration side of it. The thing about tools like Tilt is that usually, if you set it up right, you can just write your code all day and you don't have to look at the tool. You just switch from your IDE to let's say, your browser where your app is running, so you get feedback and that kind of thing. Once you configure it, you don't really spend much time looking at it. You're going to look at it when there are errors. You try to refresh your application and it fails. You need to find that error. By the time that happened, you already lost focus from your code anyway. Whether you're going to look for your error on a terminal, or on the Tilt dashboard, that's not much an issue. [0:37:37.7] MG: That's right. That’s right. I agree. [0:37:39.8] CC: All this talk about tooling and IDEs is making me think to ask you Ellen. If I'm a developer and let's say, my company decides that we’re going to use Kubernetes. What we are advocating here with this episode is to think about well, if you're going to be the point to Kubernetes in production, you should consider running Kubernetes as a local development environment. Now for those developers who don't even – haven't even worked with Kubernetes, where do you suggest they jump in? Should they get a handle on – because it's too many things. 
I mean, Kubernetes already is so big and there are so many toolings around to how to operate Kubernetes itself. For a developer who is, “Okay, I like this idea of having my own local Kubernetes environment, or a development environment somehow may also be in the cloud,” should they start with a tooling like Tilt, or something similar? Would that make it easier for them to wrap their head around Kubernetes and what Kubernetes does? Or should they first get a handle on Kubernetes and then look at a tool like this? [0:38:56.2] EK: Okay. There are a few sides to this question. If you have a very large team, ideally you should get one or a few people to actually really learn Kubernetes and then make it so that everyone else doesn't have to. Something we have seen is very large company, they are going to do Kubernetes in development. They set up a developer experience team and then for example, they have their own wrapper around Kubectl and then basically, they automate a bunch of stuff so that everyone in the team doesn't have to take a certified Kubernetes application development certificate. Because for people who don't know that certificate, it's basically how much Kubectl can you do off top of your head? That is basically what that certificate is about, because Kubectl is an insanely huge and powerful tool. On the one hand, you should do that. If you have a big team, take a few people, learn all that you can about Kubernetes, write some wrappers so that people don't have to do Kubectl or something, something by hand. Just make very easy functions, like Kubectl, let’s say you know a name of your wrapper, context and the name and then that's going to switch you to a namespace let's say, where some version of your app is running, so that thing. Now about the tooling. Once you have your development environment set up and you're going to need someone who has some experience with Kubernetes to set that up in the first place, but once that is set up, if you have the right tooling, you don't really have to know everything that Kubernetes does. You should have at least a conceptual overview. I can tell you for sure, that there's hundreds of developers out there writing code that is going to be deployed to Kubernetes, writing codes that whenever they make a change to their code, it goes to a Kubernetes development cluster and they don't have the first – well, I’m not going to say the first clue, but they are not experienced Kubernetes users. That's because of all the tooling that you can put around. [0:41:10.5] CC: Yeah, that makes sense. [0:41:12.2] EK: Yeah. You can abstract a bunch of stuff with basically good sense, so that you know the common operations that need to be done for your team and then you just abstract them away, so that people don't have to become Kubectl experts. On the other side, you can also abstract a bunch of stuff away with tooling. Basically, as long as your developer has the basic grasp of containers and basics of Kubernetes, that stuff, they don't need to know how to operate it, with any depth. [0:41:44.0] MG: Hey Ellen, in the beginning you said that it's all about this feedback loop and iterating fast. Part of a feedback loop for a developer is unit testing, integration testing, or all sorts of testing. How do you see that changing, or benefiting from tools like Tilt, especially when it comes to integration testing? Unit tests usually locally, but the integration testing. 
[0:42:05.8] EK: One thing that people can do when they're using Tilt is once you have Tilt running, you basically have all of your application running. You can just set up one-off tasks with Tilt. You could basically set up a script that there's a bunch of stuff, which would basically be what your test does. If it returns zero, it succeeded. If it doesn’t, it failed. You can set something up like that. It's not something that we have right now in a prepackaged farm that you can use right away. You would basically just say, “Hey Tilt, run this thing for me,” and then you would see if it worked or not. I have to make a plug to the competition right now. Garden has more of that part of it, that part of things set up. They have tests as a separate primitive right next to building and deploying, which is what you usually see. They also have testing. It does basically what I just said about Tilt, but they have a special little framework around it. With Garden, you would say, “Oh, here's a test. Here's how you run the test. Here's what the test depends on, etc.” Then it runs it and it tells you if it failed or not. With Tilt, it would be a more generic approach where you would just say, “Hey Tilt, run this and tell me if it fails or not,” but without the little wrapping around it that's specific for testing. When it comes to how things work, like when you're trying to push the production, let's say you did a bunch of stuff locally, you're happy with it, now it's time to push the production. Then there's all that headache with CI and waiting for tests to run and flaky tests and all of that, that I don't know. That is a big open question that everyone's unhappy about and no one really knows which way to run to. [0:43:57.5] DC: That’s awesome. Where do you see this space going in the future? I mean, as you look at the tooling that’s out there, maybe not specifically to the Tilt particular service or capability, but where do you see some other people exploring that space? We were talking about AWS dropping and CDK and there are different people trying to solve the YAML problem, but more from the developer user experience tooling way, where do you see that space going? [0:44:23.9] EK: For me, it's all about higher level abstractions and well-defined best practices. Right now, everyone is fumbling around in the dark not knowing what to do, trying to figure out what works and what doesn't. The main thing that I see changing is that given enough time, best practices are going to emerge and it's going to be clear for everyone. If you're doing this thing, you should use this workflow. If you're doing that thing, you should use that workflow. Basically, what happened when IDEs emerged and became a thing, that is the best practice aside. [0:44:57.1] DC: That's a great example. [0:44:58.4] EK: Yeah. What I see in terms of things being offered for me tomorrow of — in terms of prepackaged higher level abstractions. I don't think developers should, everyone know how to deal with Kubernetes at a deeper level, the same way as I don't know how to build the Linux kernel even though I use Linux every day. I think things should be wrapped up in a way that developers can focus on what matters to them, which is right now basically writing code. 
Developers should be able to get to the office in the morning, open up their computer, start writing code, or doing whatever else they want to do and not worry about Kubernetes, not worry about lambda, not worry about how is this getting built and how is this getting deployed and how is this getting tested, what's the underlying mechanism. I'd love for higher-level patterns of those to emerge and be very easy to use for everyone. [0:45:53.3] CC: Yeah, it's going to be very interesting. I think best practices is such an interesting thing to think about, because somebody could sit down and write, “Oh, these are the best practices we should be following in the space.” I think, my opinion it's really going to come out of what worked historically when we have enough data to look at over the years. I think it's going to be as far as tooling goes, like a survival of the fittest. Whatever tool has been used the most, that's what's going to be the best practice way to do things. Yeah, we know today there are so many tools, but I think probably we're going to get to a point where we know what to use for what in the future. With that, we have to wrap-up, because we are at the top of the hour. It was so great to have Ellen, or L, how they I think prefer to be called and to have you on the show, Elle. Thank you so much. I mean, L. See, I can't even follow my own. You're very active on Twitter. We're going to have all the information for how to reach you on the show notes. We're going to have a transcript. As always people, subscribe, follow us on Twitter, so you can be up to date with what we are doing and suggest episodes too on our github repo. With that, thank you everybody. Thank you L. [0:47:23.1] DC: Thank you, everybody. [0:47:23.3] CC: Thank you, Michael and – [0:47:24.3] MG: Thank you. [0:47:24.8] CC: - thank you, Duffie. [0:47:26.2] EK: Thank you. It was great. [0:47:26.8] MG: Until next time. [0:47:27.0] CC: Until next week. [0:47:27.7] MG: Bye-bye. [0:47:28.5] EK: Bye. [0:47:28.6] CC: It really was. [END OF EPISODE] [0:47:31.0] ANNOUNCER: Thank you for listening to the Podlets Cloud Native Podcast. Find us on Twitter @thepodlets and on thepodlets.io website. That is ThePodlets, altogether, where you will find transcripts and show notes. We’ll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener [https://omnystudio.com/listener] for privacy information.

11 May 2020 - 47 min
episode Kubernetes Operating Systems (Ep 20) artwork
Kubernetes Operating Systems (Ep 20)

Running Kubernetes on conventional operating systems is time-consuming and labor-intensive. Today’s guests Andrew Rynhard and Timothy Gerla have engineered a product that attempts to provide a solution to this problem. They call it Talos, and it is a modern OS designed specifically to host Kubernetes clusters, managed by a flexible and powerful API. Talos is completely stripped down to the bare components required to run Kubernetes and get information from the system. It stays updated by keeping pace with Kubernetes, but also provides the user with a large degree of control in the event that they might need to update a flag. In this episode, Andrew and Timothy get into some of the mechanics and thought processes behind Talos, telling us why they went with a read-only API, how they handle security concerns on the OS, and how a system like theirs might get adopted by the Kubernetes community and layperson more broadly. They get into the advantages provided by a stripped-down solution for systematizing the use of Kubernetes across communities and running new components through clusters rather than on the OS itself. In a space where most participants are largely operating in the dark, it is a pleasure to see innovations like this display such lasting power, so make sure you check out this episode. Follow us: https://twitter.com/thepodlets [https://twitter.com/thepodlets] Website: https://thepodlets.io [https://thepodlets.io] Feedback: info@thepodlets.io [info@thepodlets.io] https://github.com/vmware-tanzu/thepodlets/issues [https://github.com/vmware-tanzu/thepodlets/issues] Guests: * Andrew Rynhard https://twitter.com/andrewrynhard * Tim Gerla https://twitter.com/tybstar Hosts: * Carlisia Campos [https://twitter.com/carlisia] * Bryan Liles [https://twitter.com/bryanl] * Olive Power [https://twitter.com/opowero] Key Points From This Episode: * What a Kubernetes OS is: a stripped-down OS that integrates with Kubernetes. * The difficulties of managing and getting Kubernetes installed on regular OSs. * Why a Kubernetes OS? Less attack surface and fewer OS compatibility issues. * What Talos does: quickly makes nodes part of a Kubernetes cluster by being stripped down. * How replacing SSH with an API alleviates some users’ security concerns. * A command-line interface called OSCTL that allows users to explore the API. * What does ‘stripped-down’ mean? Talos runs the kubelet and gets information from the OS. * The ability to run new components through clusters rather than from the OS. * How the Kubernetes OS evolves with Kubernetes but gets separately controlled too. * Better integrating into Kubernetes by abstracting OS features into Kubernetes as operators. * Security precautions: kernel hardening, SSH and Bash removal, and a read-only OS. * Usability of Talos for the average Joe, and its consistency across base platforms. * Possibilities for interacting with deeper levels of an OS through an API-managed OS. * How Talos might become appealing to laypeople: decreasing costs for porting to it. * Value gained from switching to a purpose-built OS as something which could outweigh costs. * Tendencies to hang onto tried and trusted tech even if its successors are superior. 
Quotes: “To me, it’s just about abstracting away the operating system and not even having to worry about it anymore, and looking at Kubernetes and the entire cluster as an operating system.” — Andrew Rynhard [https://twitter.com/andrewrynhard] [0:05:00] “As rapid as the technology is changing, you need an operating system that is going to evolve with it or at least the operations intelligence to evolve with Kubernetes right alongside it.” — Andrew Rynhard [https://twitter.com/andrewrynhard] [0:13:08] “The challenge I think for us and for anybody changing the way that operating systems work is: is it better enough than what I have today or what I had before?” — @tybstar [https://twitter.com/tybstar] [0:26:50] “There’s a lot of companies out there who got us at this point in tech that don’t exist anymore, but if they didn’t do what they did, we would not be here right now.” — @bryanl [https://twitter.com/bryanl/status/1226903249577484288] [0:33:41] Links Mentioned in Today’s Episode: Talos — https://www.talos-systems.com/ [https://www.talos-systems.com/] Timothy Gerla — http://www.gerla.net/ [http://www.gerla.net/] Timothy Gerla on Twitter — https://twitter.com/tybstar [https://twitter.com/tybstar] Andrew Rynhard on LinkedIn — https://www.linkedin.com/in/andrewrynhard/ [https://www.linkedin.com/in/andrewrynhard/] Andrew Rynhard on GitHub — https://github.com/andrewrynhard [https://github.com/andrewrynhard] Jed Salazar on LinkedIn — https://www.linkedin.com/in/jedsalazar/ [https://www.linkedin.com/in/jedsalazar/] Bryan Liles on LinkedIn — https://www.linkedin.com/in/bryanliles/ [https://www.linkedin.com/in/bryanliles/] Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia/ [https://www.linkedin.com/in/carlisia/] Red Hat — https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux [https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux] Arch — https://www.archlinux.org/ [https://www.archlinux.org/] Debian — https://www.debian.org/ [https://www.debian.org/] Linux — https://www.linux.org/ [https://www.linux.org/] Bell Labs — http://www.bell-labs.com/ [http://www.bell-labs.com/] AT&T — https://www.att.com/ [https://www.att.com/] Transcript: EPISODE 20 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your Cloud Native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [INTERVIEW] [00:00:41] CC: Hi, everybody. Welcome back to the Podlets. Today we have a special episode, and on the show, we have a special guest, Andrew Rynhard. Say hi, Andrew. [00:00:53] AR: Hello, how are you? [00:00:55] CC: We also have Timothy Gerla. Say hi, Tim. [00:00:58] TG: Hi. Thanks for having me. [00:01:00] CC: Yeah. Andrew and Timothy are from Talos. Andrew dropped an issue on our GitHub repo and here we are. It was a great suggestion. What we’re going to talk about today is what they are working on, which is a Kubernetes operating system. We have tons of questions for them for sure. We also have a special participant on the episode today as a co-host, Jed Salazar. Hi, Jed. [00:01:28] JS: Hey, everyone. Jed Salazar here, from the CRE team at VMware. [00:01:31] CC: And Bryan Liles. [00:01:32] BL: Hi. [00:01:33] CC: Hi. And me, Carlisia Campos. 
Who’d like to get the party started and kick this off? [00:01:41] BL: Oh, I’m here. Let’s throw the gauntlet down. We’re talking about Kubernetes operating systems today. I have an operating system, a Mac, or I have Linux. I can run Kubernetes. What is a Kubernetes operating system and why should I even be thinking about this? [00:01:58] AR: Sure. I’d like to think about a Kubernetes operating system as an operating system that has been stripped down to the absolute bare minimum to run Kubernetes. Everything that is required to run the kubelet, and essentially that’s it, at least in my opinion. It should be super minimal to start with. Second of all, I also think that it should integrate with Kubernetes as well. The combination of just being able to strip down Linux as we know it as small as possible and then actually integrating with Kubernetes itself, using APIs to figure out things about itself, whatever. I think that that, in my opinion, is what I would call a Kubernetes OS. [00:02:42] BL: Interesting. Okay. Now that we know a little bit about Kubernetes operating systems, and like I said, I'm starting in early today as the devil's advocate. Now, like I said before, I have a Mac and I have Linux and I have Windows on my desktop. There've been lots of efforts from lots of people trying to get Kubernetes up and running on Ubuntu or Fedora, and it's cool that you're trying to slim this down, but really why would I look at a Kubernetes operating system over my Linux that I'm familiar with? I like Ubuntu, or Debian. [00:03:17] AR: Sure. That's a great question. It's one we get a lot. I like to think that you actually just get less operational overhead when you actually have a Kubernetes-specific operating system. I think that Kubernetes itself is a job, managing it, getting it installed, unfortunately. It's getting better, but it's still a job at the end of the day. Having to manage Kubernetes and the operating system, everything that you need to pass compliance on the operating system, get all the packages installed, these are all things that we kind of know that Kubernetes needs already and yet we're still having to go in and apt install whatever we might need to get Kubernetes up and running. The idea with a Kubernetes operating system in my mind is that we should stop worrying about the individual node, the underlying operating system, and start looking at Kubernetes as a whole as a giant machine, and we just add machines, nodes, to this giant machine that give us extra resources. The less that we have to care about the machine or the underlying operating system, the better, in my mind. We get to focus on Kubernetes. Not only that, but because it's minimal, you get a smaller attack surface. There are just not things there that you would otherwise have to worry about. I've done Kubernetes for three years now and having to go in and worry about updating packages that are just completely unrelated, it's something that I think we shouldn't have to do anymore. If you're dedicated to running your apps and your stack in Kubernetes, then why are we going in and managing the nodes on an individual basis? For that matter, managing things that don't really have any relevance for running Kubernetes. To me, it's just about abstracting away the operating system and not even having to worry about it anymore and looking at Kubernetes and the entire cluster as an operating system. We can't really get there if we're having to worry about the two jobs of managing both at the same time. 
[00:05:17] JS: Andrew, can I ask a follow up question? [00:05:18] AR: Sure. [00:05:20] JS: I fully agree with all of those statements. I think a general purpose operating system might not be the best job for a specific role, like being a Kubernetes node. As you mentioned, you have to deal with kind of all the various packages that might be beneficial to you if you’re running it for some general purpose. It’s really supposed to be running a workload as a Kubernetes node so you can kind of scope that down. I’m just wondering when you kind of make this pitch or kind of let these folks know, how do you get folks to kind of relinquish their desire to have full control over their operating system from being able to install their own security management processes on it or being a little bit shy about not being able to SSH or kind of use their common patterns of operating system management? [00:06:09] AR: Oh, that’s a great question. I think the biggest thing that I always answer back is – I can take this in two parts. Let me first of all talk about what – People, they do want to run things on the host. My answer always back is can you run it in Kubernetes? Kubernetes is sort of your package manager, if you will. They sit back usually and they’re like, “Hmm. Yeah, I probably could.” If you need to run something on every host, Kubernetes has something for that. It’s a daemon set. Run it on Kubernetes and call it a day. This isn’t something that’s going to work for absolutely everything I imagine. Nothing in the world is like that. But I think for the majority of the use cases out there and for the things that people want to run on the host, you could actually just run it in Kubernetes itself. As far as SSH and for those that don’t really know what we’ve done in Talos, in Talos we’ve actually stripped down just the kernel and a small Go lang binary that’s our – That, basically, its whole goal is to create a Kubernetes cluster or make a node part of a Kubernetes cluster as fast as possible, and that’s really it. We’ve gone so far as ripping out Bash and SSH and we’ve actually replaced that with an API. My answer always back to the SSH question is what is it that you really trying to get out of SSH? 9 times out of 10, it’s, “I want to get information about what’s wrong. I want to do troubleshooting.” If our answer back to them is, “Oh! We have an API for that,” you still at the end of the day – it’s really the information that you’re after. It’s not necessarily that you need SSH to do that. You need a way to get this information and not necessarily have to sit there and wait for a Prometheus metric, see it pulled it every minute. You want something right on the spot. You want to ask a question and you want to get an immediate answer. I feel like we can answer that with an API. That tends to satisfy the desire for wanting SSH most of the time. I mean, as you said, people are still going to want to hold on to it, but I think over time we’re going to have to educate people that this is a better way. It’s a read-only API that gives operations engineers a way to get that information that they would otherwise get by SSH-ing and asking via Unix utilities what you want to know. [00:08:27] CC: When you say an API, are you also giving them a command line tool or like in the case of Talos, or only an actual API? [00:08:37] TG: Yeah, we do provide a command line interface to the API. 
It’s called OSCTL and it basically wraps our API, and our intention is that that will be used for exploration of the system, automation through scripting languages, etc. Then as you get more sophisticated with your environment, you might begin to build your own tools that interact directly with that API. [00:08:56] CC: Cool. Yeah, this is a really cool subject. I wasn't even aware that a Kubernetes operating system was a thing until really recently, and I don't remember how I came across it. One question I have is, Andrew, you were saying, “Well, we strip down Kubernetes to the bare minimum.” How opinionated is it in your case specifically? When you say it's a stripped-down version to the bare minimum, this statement of bare minimum, would there be a consensus in the community that, yes, this set of functionality is the bare minimum? Is it your opinion of what the bare minimum should be? [00:09:38] AR: Sure. I think at the absolute bare minimum, we need to run the kubelet. In my mind, that's really all we need, but you still have this practical issue of, like you said, you need to get information off that machine. You need to be able to kind of manage Kubernetes without having to need Kubernetes, which is a chicken-and-egg problem. That's where the API was actually born. When I started Talos, I actually just built a very minimal, stripped-down rootfs; all it did was run the kubelet. But figuring out why the kubelet wasn't running successfully obviously was not very easy. I figured, “You know what? Let's put an API in front of this. I want to keep this as minimal as possible. I want to keep this read-only.” I threw an API in front of it. I think you need two things, really. You need to have what's required by the kubelet. You need a CNI. You need all the utilities that the kubelet will run and you also need a way to query the system. In the case of other operating systems that are minimal operating systems, they have decided to do SSH and all the classic utilities that we all know and love; we went another route with an API. But I don't think the operating system, the rootfs, should have any more than what's required by the kubelet. That would be the pie in the sky dream right there. [00:11:01] CC: The two questions that come to my mind are if I wanted to add Kubernetes components to that, would it be possible? If I wanted to add anything to the operating system, would it be possible? I think the second question you already answered, which is, well, if you need to run – Correct me if I'm wrong. If you need to run something on the operating system that's not there, you can run it in the actual cluster. [00:11:27] AR: Yeah, that's the idea, is that Kubernetes gives us the APIs to do that – We could schedule to specific nodes. We can schedule to a class of nodes. We can schedule to every single node. I think that you can actually handle a lot of the use cases out there for any kind of application with Kubernetes itself. I think that that's really strong because you get one single consistent API in managing your infrastructure. I want to deploy applications for this team or this team. At the end of the day, everything is just declarative and Kubernetes will make it happen. You don't have to worry about the scheduling and all of these different things. The only thing that the operating system is concerned about is making that machine available to the Kubernetes cluster. [00:12:10] BL: This idea of slimmed down operating systems, it's not a new one. CoreOS was doing this years ago. 
One issue that CoreOS ran into was like, “Well, what's current?” Well, it depends on what stream you're on. How do you manage keeping everything up-to-date? [00:12:28] AR: Our goal is to keep pace with Kubernetes essentially. I know that, traditionally, there's long-term support and there's all these different ways of releasing different versions of an operating system, but Kubernetes isn't really there yet. There is no notion of LTS in Kubernetes yet that I know of. There's just, I believe, it's N-2 or something like that where they actually offer official support. I think that the operating system is bound to that. I think that it needs to follow Kubernetes as close as possible. There are constantly different feature gates being opened up. There are things being graduated to GA. I think especially at this time right now, as rapid as the technology is changing, you need an operating system that is going to evolve with it or at least the operations intelligence to evolve with Kubernetes right alongside it. [00:13:20] BL: So that brings up an interesting point. I mean, there are two things here. There's the operating system itself and there's Kubernetes. Do they upgrade in lockstep or are they upgraded separately? [00:13:29] AR: I can only speak for ourselves. There are people that, I think, actually have upgrades kind of be one and the same, where the operating system and a Kubernetes upgrade both happen. We've decided to go the other route, where we actually want to evolve our APIs sort of independently, but then give you a way to still manage Kubernetes on its own. We've actually done self-hosted Kubernetes. In Talos, we'll actually bootstrap a lightweight control plane, a small control plane, and then we'll spin up another control plane using the Kubernetes API. Then now, Kubernetes upgrades simply look like a kubectl edit. I'm going to update my daemon set for my API server. Then from there, you will have to basically update the kubelet. We use hyperkube for the kubelet. You have to tell Talos, “Use this kubelet image next time you boot.” We've separated the two I think for good reason. I think that the two should be able to evolve independently to give a little bit more power back to the user. If you combine them, if you couple them really closely, it becomes really, really opinionated. I think we should at least support what Kubernetes supports, and that's the N-2, and leave it up to the user to kind of configure Kubernetes, but we still have sane best practices out of the box. [00:14:54] BL: Yeah, that makes sense, because yesterday, what did we get? We got Kubernetes 1.15.10, and I don't know about 16, but we got 1.17.3 yesterday too. You might not want to move, because you might not – 1.17 introduced a whole bunch of deprecations for custom resource definitions. You're not ready to move yet. We were on beta 1 for a while for CRDs. I totally see why you moved in that direction. [00:15:20] AR: Yeah, that's exactly it. We can't impose too much opinion, but I think that we should drive – The opinion at least up until like, “Hey, don't worry about what's on this machine. I'm going to make it a Kubernetes node for you. Just tell me which version you want.” I think that's where we should draw the boundary and then we should still give the controls back to the user as far as what flags do I want to specify. What kind of feature gates? All these various things that you don't get out of a lot of the different managed products out there. 
Hopefully we’ll be teetering right on the line of having that convenience of managed but still giving you that power and flexibility to update a flag if you need to. [0:16:04] CC: This episode is so in the style of an interrogation. It's hilarious. [0:16:08] BL: That's me. I'm digging in. [0:16:09] CC: I feel like – No. We are all digging in. It's just because – At least speaking for myself, I'm super curious. I wanted to ask you, Andrew, at the beginning you were saying that a Kubernetes operating system needs to integrate with Kubernetes and I was sitting here thinking, “Operate? It's supposed to be Kubernetes.” What did you have in mind when you said that? Did you mean to be able to interface with another Kubernetes cluster? Was that what you meant? [0:16:36] AR: Not quite. What I meant by that is there's this really powerful thing that Kubernetes gives us in CRDs and this idea of operators or controllers. If you can actually have a way to use an operator or controller, say, for upgrading your operating system, which we have in Talos, it's just an upgrade operator that lives in Kubernetes and knows how to talk to Kubernetes and to our API and sort of orchestrates upgrades across the board. Part of that is, for example, when you receive the upgrade API on a Talos node, it actually is aware, “Hey, I'm running Kubernetes. I'm going to cordon myself, because I know I've gotten this and I know that I'm not going to be able to schedule workload on me.” I think that that's just one example, but we could probably take that a lot farther one day. But I would like to see everything that we know and love about our operating systems today essentially be abstracted and pushed up into Kubernetes as operators. There's a lot of power in that where you can actually orchestrate things, like I said, like upgrades. I think that that's one example of how we can integrate better with Kubernetes, as an operating system should, at least. [0:17:45] CC: Got you. [0:17:46] JS: I was wondering if we can kind of maybe just pivot a little bit, like maybe to satisfy my own curiosities, but I was kind of hoping we could talk a little bit about like some of the selling features. Imagine if I'm a hardened sysadmin or security team and basically someone comes up and says, “Hey, I want to run this Kubernetes operating system.” Knowing what I know about the state of security today and operating systems, there are a lot of efforts to basically kind of contain things. No pun intended, but we have user space operating out of some type of sandbox. We have seccomp to limit syscalls. How does Talos approach security, maybe philosophically or maybe even down to the implementation details? What does security in Talos look like? [0:18:33] AR: Yeah. Again, our goal is to basically – We want people to forget about the operating system. But to forget about the operating system, you have to know it's secure. You have to go to great lengths to secure that because you can't forget about it for that reason. We actually go down to the kernel; we actually apply what's called the Kernel Self Protection Project. We basically try to harden the kernel, and at boot time, we do a bunch of checks to make sure that your kernel is running at least most of those configurations. I think we have a little bit of work to do as far as enforcing all of them. But we do some checks to ensure that your kernel is compatible with KSPP, for example. That alone has a ton of benefits to it. 
It’s a statically compiled kernel, so it can't do any kernel module loading and stuff like that. That's completely prohibited. That alone just kind of cuts off a lot of security issues in itself. Then going up the stack further, we've actually stripped out SSH. We stripped out Bash. So you have nothing that you can really log on to anymore. Again, that just flat out removes a whole category of potential attacks. Going even further than that, we actually have Talos running completely out of RAM and it's a squashfs. So it's a read-only file system. The only thing that actually uses a disk is the kubelet. The idea is that we want to make the operating system, again, just have it go away. Having it read-only I think is a really strong thing, and squashfs in particular, because you can't remount it read-write if you're a user or something like that. Then up in Kubernetes we actually – Out of the box, we try to deploy it with all of the security best practices, the CIS benchmarks and all of that. We go all the way from the kernel, to our userland and even to Kubernetes itself. We try to bring security best practices out of the box. I think that's something I'd love to see for Kubernetes itself upstream, but for now that's what we're doing. [0:20:33] BL: Can we go back to the interrogation? No. Let's not go back to an interrogation. Thinking of – If we take the concept of a Kubernetes operating system, that can be updated on a different cadence than the Kubernetes running on it – Who is Talos for? Who does it make – Could Joe as a neophyte or someone who doesn't really know the space, will this make their life any easier or is there a special set of expertise that we would need to be fruitful with this? [0:21:06] TG: I think from our perspective, we hope that everybody who uses Kubernetes would find something useful in Talos, or a system like Talos. Number one, I think Talos would be a great way to get started on your laptop or workstation. It's got some basic features to stand up a small Kubernetes cluster there. That's one place to start. As you move further into production, I think that a Kubernetes OS-based platform would be particularly useful in an environment where you might have multiple clusters spread across different geographical locations, spread across different teams. Maybe spread across different hosting environments. We've talked to a number of folks who have been running Kubernetes in production for a couple of years now, and these clusters kind of come up organically within a larger organization in different areas, doing different things for the business, managed by different teams. Now that a little bit of time has passed, these organizations are realizing that, “Hey, we've got kind of a Kubernetes sprawl problem. We have this team over here on Amazon managing and running Kubernetes one way. We have a separate team managing and running Kubernetes a different way over here on a different kind of platform.” I think anything that – anywhere where we can drive some consistency across the tooling, consistency across the base platforms would be useful. We also think that the minimal aspect of our system and some of the design decisions we've made around security make it particularly useful in maybe a regulated environment. I think that that claim would hold true for any sort of special purpose operating system or minimal operating system designed for a specific task. [0:22:35] BL: Interesting. 
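The "run it in Kubernetes instead of on the host" answer Andrew gives earlier, where anything that has to run on every node becomes a DaemonSet, looks roughly like the manifest below. This is a generic sketch, not something Talos ships or that the guests describe verbatim; the name, namespace and image are placeholders:

```yaml
# Hypothetical node agent deployed as a DaemonSet, so one copy runs on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
        - operator: Exists    # also land on tainted/control-plane nodes
      containers:
        - name: agent
          image: example.registry.local/node-agent:1.0   # placeholder image
```

Because the agent is just another Kubernetes workload, it gets scheduled, upgraded and observed through the same API as everything else, which is the point being made about not installing software on the node itself.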
Just thinking about a concept of a Kubernetes operating system, what’s next? I’m not asking what’s next from Talos, but given all the opportunity all the time and all the knowledge. What should we be doing that we’re not doing right now? [00:22:49] AR: Specifically around operating systems or Kubernetes? [00:22:52] BL: Well, you know what? You can start with operating systems. I mean, you can go to Kubernetes and then we’ll see if our lists match. [00:22:57] AR: That’s a good question. Right off the bat, I’m going to say I don’t really know. I think this is new space. I think that we have a big task in front of us already in getting people to use these kinds of operating systems, hopefully not too big of a task. I’m hoping to see – Because you find these big companies, “Oh! We can’t do this. We can’t do that,” because getting a new OS is hard. I think we first of all need to win people over on just these even more minimal operating systems beyond what CoreOS has done. Personally, I don’t know if I could answer that question honestly without just owing something. [00:23:33] TG: I’ve got a thought here. One of the things that I’m really interested in beyond just Kubernetes and beyond just the operating system – what is computing going to look like in 5, 10, 15 years? I don’t know if Kubernetes is going to be around. I’m kind of a tech-cynic, right? I’ve seen a lot of fads in my career and things that pop up and are very popular for a couple of years and then sort of disappear. I don’t think Kubernetes is one of those. I think Kubernetes and the concepts and the layers of abstraction that Kubernetes has provided, all of that will remain useful and powerful in distant future whether or not it’s called Kubernetes or if it’s called something else, some new paradigm. But what I’m really interested in is seeing what can we do with this idea of an API-managed OS? If you look at the general purpose operating systems out there, some aspects of the system might expose an API. But for the most part, you’re still interacting and interfacing with this system like you were 30 years ago, 35, 40 years ago even. That’s fine. What works works, but everything else today has an API. Kubernetes has a powerful and extensible API and I think that your operating system should have something similar, something comparable, something that you can interact with using the same tools and the same processes and the same ideas that you can at the top of the stack and move some of those concepts down to the host OS level where we’re talking about today. [00:24:51] CC: This brings up a point that I’m so curious about, not only the idea of having a Kubernetes operating system, but any idea that is new that you were just talking about, Tim, is – So what works works. For example, every year or every couple of years, I am evaluating a new code editor or I am evaluating a new note taking app, or do-to-list app, those three things. I’m continuously finding something to reevaluate because what I have has never worked for me just the way I think. Actually, recently I found a couple of things that are really good. In any case, the thing is they just never worked for years. They’re very limited. They don’t match my thinking. But operating system, I would never – Well, I’m not an administrator also, but just like from having my own laptops forever, I’m not going to go out there. 
That’s not true either, but I was going to say I’m not going to go out there and try a new operating system to see if it’s offering that I already have, then it might be better for me. But that’s not true, because I have done that many times too. So never mind. But I think the idea of my question is stance, is how are you communicating to people out there that, “Hey, there is this new thing that maybe it’s working for you – Maybe you think it’s working for you, but you just don’t know that there is a new different way of doing.” When you do try to do that, how are people responding? I mean, of course, there are those cases where people just know they get it and they immediately resonate with them. But I’m talking about the people who like might benefit from this but they don’t quite grasp. How do you break through that barrier? [00:26:38] TG: Sure. Maybe the lay majority. [00:26:40] CC: Yeah, and how are people responding? [00:26:42] TG: Yeah. The great thing about Talos is that people understand pretty immediately what it is, how it works and why we’ve done it. The challenge I think for us and for anybody changing the way that operating systems work. Is it better enough than what I have today or what I had before? Is it worth the switch in costs? I think that switching cost is something that’s pretty well understood in the industry. People have gone through this process and they’ve moved from virtualization to containers, from Docker to Kubernetes, etc. They understand that process and they understand there’s a technical cost. There’s a people cost, etc. We have to show that value. I think that progress in our industry is incremental. Our industry is young. We’re not building bridges. We’re not at the level of like the internal combustion engine where the engineering is understood and we know how to build it and we know how to make it so that it doesn’t fall over and explode. Clearly, we’re not quite there yet in the broader world of computing. I think anywhere where we can show a little bit of incremental improvement where we can tackle one narrow slice of a problem and make it a little bit better and get to a point where computing is just a little bit safer and a little bit easier and a little bit faster. I think that’ll be a pretty compelling argument and there’s a lot of details involved and we have to talk about how do you get your applications from one operating system to the next? 15 years ago, it may have been a very big ask to ask someone to port their enterprise application from one operating system to another. They’re so inextricably linked. There’re a lot of connections between the OS and the applications, but today, we have these levels of abstraction. We have containers. We have the Kubernetes orchestration mechanisms and I think that switching cost is going down every release of Kubernetes and every step along the way as people change the way that applications are deployed that switching cost gets a little bit cheaper. It will be easier for us to prove that the value you gain by moving to a purpose-built operating system is greater than the switching cost. [00:28:41] CC: Very good points. [00:28:42] JS: I feel like there’s a lot of emphasis and focus on the move over. The first steps toward migrating to something new. There’s a lot of emphasis on bootstrapping a cluster. There’s a lot of emphasis on how do I get started. 
I’m part of a team called customer reliability engineering and we see operators running Kubernetes environments that are durable and have been in the field for many years. I think that there's kind of a hidden cost in these day two operations where, like today, to effectively be a Kubernetes operator, you also need to have a great deal of understanding of Linux operating system internals. These are abstractions on top, but sometimes those abstractions are leaky. So you need to be able to parse iptables rules. You need to be able to understand how traffic gets routed, all of these aspects of it. I'm just wondering how do we kind of get folks shifted from this mindset of I'm going to start with something that's general purpose and then I'm going to basically make it do what I want it to do by making all of these configuration changes and installing things on top of it to make it not general purpose but more specifically focused, and get people to step back more fundamentally and think, “Well, what if we just started with something that is strictly for running workloads?” We don't have to worry about installing a security suite on top of this or making this configuration change or hardening requirements or what have you. We're fundamentally in a better place because we've started with something that's arguably more secure. [0:30:21] BL: You know that – I mean, I'm old. I'm old now. I'm realizing this. When I started – Back in my day when we started with Linux, we went through this whole thing of Linux installers and there were many iterations of Linux installers and it depended on, “Well, did you like what Red Hat was doing? Did you like what the Debian project was doing? Oh! Did you like what Arch was doing? Oh! Did you want to do it yourself? Do you want to emerge the world with Gentoo?” Really, we come to this point now, no one ever talks about Linux installers anymore. You just put it on there. I think what I'm getting at is that we don't actually know what we want. I mean we say that we want it to be simpler. We say we want it to be more secure, but we don't know. Only time will tell, and I think it's going to be a lot of chipping away at problems. Then people who are wanting to have the bold ideas are saying, “I'm going to go out there and create a Kubernetes operating system.” In reality, it may work. We hope it works, or it may not work, but at least we gained just a little tiny bit more knowledge on how we want to run this thing. I think – And I'll just say one more last thing, is that if you look at like Bell Labs, Bell Labs created the vacuum tube, and then like 20 years later, 20 or 30 years later, they created the transistor, twice actually. It took a long time to get the vacuum tube out because it kind of just worked and they just said, “We can't throw it back. We just can't throw that away.” Maybe we're seeing a lot of that in Kubernetes. We're holding on to some good things even though some greater things are going to come, but it might not be here this year or next year. It might be 18 months. It might be 24 months. We just got to really pay attention to that. [0:32:02] AR: Bryan, when you said you were old, I was going to shake my head internally and then you brought up the vacuum tube and I'm like, “Okay.” [0:32:07] BL: I mean, I'm not that old. [0:32:09] JS: Yeah, I think that's a good point, Bryan. The thing I like to point out is the allegory of the cave. 
People have been living a certain way for so long they think that these shadows are real and they just know that way of life until some crazy comes along and says, “Hey, there’s a whole world out there,” and no one believes him. I think we just need to do – Like you said, we just need to do it. When you just create and make it happen and hopefully educate people in the process and just keep chipping away at it. Do the good work. [00:32:38] BL: That’s the important piece and that was the power of Bell Labs. You probably can tell. I just read a book about Bell Labs. I’m an expert now. But that was the power of Bell Labs. They didn’t just focus on making product for AT&T. They focused on changing the world, like literally. Who creates a transistor if you knew what one was? You just don’t create that. That’s like some really crazy stuff. I try to bring the parallel back to what we’re doing here. We can’t just create this perfect Kubernetes thing, because really, we don’t know what it is. I mean, we can be smart and say, “Well, it needs to be secure. It needs to be networking,” and all these stuff. But you know what? We don’t even have cgroups v2 support yet. We don’t even know where we are. Let’s figure out – Let’s just keep going down the path, but we will suss out these better patterns. [00:33:23] CC: Yeah, I like that. [00:33:24] BL: That’s it. It is incremental. Here’s the crazy part, and this is the real tough part. You know what? It is incremental, and reality says that not everybody can win. Don’t take your failures as a loss. Take them as, “well, maybe we shouldn’t have done that,” and keep on moving forward because there’s a lot of companies out there who got us at this point in tech that don’t exist anymore, but if they didn’t do what they did, we would not be here right now. It’s not [inaudible 00:33:52]. [00:33:53] CC: Why are we talking about failures? [00:33:55] BL: I’m sorry. It’s the ultimate success. [00:34:01] CC: Oh gosh! Let’s not end the show in such a downer. [00:34:04] BL: No. That’s a happy point though. Let me put the bow on the happy point and then I will stop talking. The thing is, is it’s not the glass is half empty. It is glass is half full. The path to success is littered with failure and it’s not a bad thing. It’s a good thing, because it’s good that we can continue making those failures because we know they lead to successes. That is actually a happy thing. [00:34:29] CC: I wonder if Andrew and Tim want to do a little bit of interrogating of us. I think that would be fair. [00:34:36] AR: I wouldn’t know what to interrogate you guys about. [00:34:40] CC: Well, we are coming up at the top of the hour. So it’s time to say goodbye. It was great having you, Andrew, and you, Tim, on the call. Jed, thank you for participating as well. I think it was very informative. With that, I will say, until next week. Bye everybody. [00:34:59] TG: Bye. Thanks for having us. [00:35:00] CC: My pleasure. [00:35:00] AR: Bye-bye. Thank you. [00:35:02] JS: Thank you. Bye. [END OF INTERVIEW] [0:35:05.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener [https://omnystudio.com/listener] for privacy information.

09 Mar 2020 - 35 min
episode Application Transformation with Chris Umbel and Shaun Anderson (Ep 19) artwork
Application Transformation with Chris Umbel and Shaun Anderson (Ep 19)

Today on the show we are very lucky to be joined by Chris Umbel and Shaun Anderson from Pivotal to talk about app transformation and modernization! Our guests help companies to update their systems and move into more up-to-date setups through the Swift methodology and our conversation focuses on this journey from legacy code to a more manageable solution. We lay the groundwork for the conversation, defining a few of the key terms and concerns that arise for typical clients and then Shaun and Chris share a bit about their approach to moving things forward. From there, we move into the Swift methodology and how it plays out on a project before considering the benefits of further modernization that can occur after the initial project. Chris and Shaun share their thoughts on measuring success, advantages of their system and how to avoid rolling back towards legacy code. For all this and more, join us on The Podlets Podcast, today! Follow us: https://twitter.com/thepodlets [https://twitter.com/thepodlets] Website: https://thepodlets.io [https://thepodlets.io] Feedback: info@thepodlets.io [info@thepodlets.io] https://github.com/vmware-tanzu/thepodlets/issues [https://github.com/vmware-tanzu/thepodlets/issues] Hosts: * Carlisia Campos [https://twitter.com/carlisia] * Josh Rosso [https://twitter.com/joshrosso] * Duffie Cooley [https://twitter.com/mauilion] * Olive Power [https://twitter.com/opowero] Key Points From This Episode: * A quick introduction to our two guests and their roles at Pivotal. * Differentiating between application modernization and application transformation. * Defining legacy and the important characteristics of technical debt and pain. * The two-pronged approach at Pivotal; focusing on apps and the platform. * The process of helping companies through app transformation and what it looks like. * Overlap between the Java and .NET worlds; lessons to be applied to both. * Breaking down the Swift methodology and how it is being used in app transformation. * Incremental releases and slow modernization to avoid rolling back to legacy systems. * The advantages that the Swift methodology offers a new team. * Possibilities of further modernization and transformation after a successful implementation. * Measuring success in modernization projects in an organization using initial objectives. Quotes: “App transformation, to me, is the bucket of things that you need to do to move your product down the line.” — Shaun Anderson [0:04:54] “The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work and it just keeps rolling down the track that way.” — Shaun Anderson [0:17:26] “Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application.” — Chris Umbel [0:24:16] “I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. 
This goes from software to even process changes in their organizations.” — Chris Umbel [0:30:58] Links Mentioned in Today’s Episode: Chris Umbel — https://github.com/chrisumbel [https://github.com/chrisumbel] Shaun Anderson — https://www.crunchbase.com/person/shaun-anderson [https://www.crunchbase.com/person/shaun-anderson] Pivotal — https://pivotal.io/ [https://pivotal.io/] VMware — https://www.vmware.com/ [https://www.vmware.com/] Michael Feathers — https://michaelfeathers.silvrback.com/ [https://michaelfeathers.silvrback.com/] Steeltoe — https://steeltoe.io/ [https://steeltoe.io/] Alberto Brandolini — https://leanpub.com/u/ziobrando [https://leanpub.com/u/ziobrando] Swiftbird — https://www.swiftbird.us/ [https://www.swiftbird.us/] EventStorming — https://www.eventstorming.com/book/ [https://www.eventstorming.com/book/] Stephen Hawking — http://www.hawking.org.uk/ [http://www.hawking.org.uk/] Istio — https://istio.io/ [https://istio.io/] Stateful and Stateless Workload Episode — https://thepodlets.io/episodes/009-stateful-and-stateless/ [https://thepodlets.io/episodes/009-stateful-and-stateless/] Pivotal Presentation on Application Transformation: https://content.pivotal.io/slides/application-transformation-workshop [https://content.pivotal.io/slides/application-transformation-workshop] Transcript: EPISODE 19 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.0] CC: Hi, everybody. Welcome back to The Podlets. Today, we have an exciting show. It's myself on, Carlisia Campos. We have our usual guest hosts, Duffie Cooley, Olive Power and Josh Rosso. We also have two special guests, Chris Umbel. Did I say that right, Chris? [0:01:03.3] CU: Close enough. [0:01:03.9] CC: I should have checked before. [0:01:05.7] CU: Umbel is good. [0:01:07.1] CC: Umbel. Yeah. I'm not even the native English speaker, so you have to bear with me. Shaun Anderson. Hi. [0:01:15.6] SA: You said my name perfectly. Thank you. [0:01:18.5] CC: Yours is more standard American. Let's see, the topic of today is application modernization. Oh, I just found out word I cannot pronounce. That's my non-pronounceable words list. Also known as application transformation, I think those two terms correctly used alternatively? The experts in the house should say something. [0:01:43.8] CU: Yeah. I don't know that I would necessarily say that they're interchangeable. They're used interchangeably, I think by the general population though. [0:01:53.0] CC: Okay. We're going to definitely dig into that, how it does it not make sense to use them interchangeably, because just by the meaning, I would think so, but I'm also not in that world day-to-day and that Shaun and Chris are. By the way, please give us a brief introduction the two of you. Why don't you go first, Chris? [0:02:14.1] CU: Sure. I am Chris Umbel. I believe it was probably actually pronounced Umbel in Germany, but I go with Umbel. My title this week is the – I think .NET App Transformation Journey Lead. Even though I focus on .NET modernization, it doesn't end there. Touch a little bit of everything with Pivotal. 
[0:02:34.2] SA: I'm Shaun Anderson and I share the same title of the week as Chris, except for where you say .NET, I would say Java. In general, we play the same role and have slightly different focuses, but there's a lot of overlap. [0:02:48.5] CU: We get along, despite the .NET and Java thing. [0:02:50.9] SA: Usually. [0:02:51.8] CC: You both are coming from Pivotal, yeah? As most people should know, but I'm sure now everybody knows, Pivotal was just recently, as of this date, which is, what are we? End of January. This episode is going to take a while to release, but Pivotal was just acquired by VMware. Here we are. [0:03:10.2] SA: It's good to be here. [0:03:11.4] CC: All right. Somebody, one of you, maybe let's say Chris, because you brought this up: how does application modernization differ from application transformation? Because I think we need to lay the ground and lay the definitions before we can go off and talk about things and sound like experts and make sure that everybody can follow us. [0:03:33.9] CU: Sure. I think you might even get different definitions, even from within our own practice. I'll at least lay it out as I see it. I think it's probably consistent with how Shaun's going to see it as well, but it's what we tell customers anyway. At the end of the day, there are – app transformation is the larger [inaudible] bucket. That's going to include, say, just the re-hosting of applications, taking applications from point A to some new point B, without necessarily improving the state of the application itself. We'd say that that's not necessarily an exercise in paying down technical debt, it's just making some change to an application or its environment. Then on the modernization side, that's when things start to get potentially a little more architectural. That's when the focus becomes paying down technical debt and really improving the application itself, usually from an architectural point of view, and things start to look maybe a little bit more like rewrites at that point. [0:04:31.8] DC: Would you say that the transformation is more in line with re-platforming, for those of us that might think about it that way? [0:04:36.8] CU: We'd say that app transformation might include re-platforming and also the modernization. What do you think of that, Shaun? [0:04:43.0] SA: I would say transformation is not just the re-platforming, re-hosting and modernization, but also the practice to figure out which should happen as well. There's a little bit more meta in there. Typically, app transformation to me is the bucket of things that you need to do to move your product down the line. [0:05:04.2] CC: Very cool. I have two questions before we start really digging into the show, just to lay the ground for everyone. My next question will be are we talking about modernizing and transforming apps so they go to the cloud? Or is there a certain cut-off that we start thinking, “Oh, we need to – things get done differently for them to be called native.” Is there a differentiation, or is this one the same as the other, like the process will be the same either way? [0:05:38.6] CU: Yeah, there's definitely a distinction. The re-platforming bucket, that re-hosting bucket of things is where your target state, at least for us coming out of Pivotal, we had definitely a product focus, where we're probably only going to be doing work if it intersects with our product, right? 
We're going to be doing both re-platforming targeted, say typically at a cloud environment, usually Cloud Foundry or something to that effect. Then modernization, while we're usually doing that with customers who have been running our platform, there's nothing to say that you necessarily need a cloud, or any cloud to do modernization. We tend to based on who we work for, but you could say that those disciplines and practices really are agnostic to where things run. [0:06:26.7] CC: Sorry, I was muted. I wanted to ask Shaun if you wanted to add to that. Do you have the same view? [0:06:33.1] SA: Yeah. I have the same view. I think part of what makes our process unique that way is we're not necessarily trying to target a platform for deployment, when we're going through the modernization part anyway. We're really looking at how can we design this application to be the best application it can be. It just so happens that that tends to be more 12-factor compliant that is very cloud compatible, but it's not necessarily the way that we start trying to aim for a particular platform. [0:07:02.8] CC: All right. If everybody allows me, after this next question, I'll let other hosts speak too. Sorry for monopolizing, but I'm so excited about this topic. Again, in the spirit of understanding what we're talking about, what do you define as legacy? Because that's what we're talking about, right? We’re definitely talking about a move up, move forwards. We're not talking about regression and we're not talking about scaling down. We're talking about moving up to a modern technology stack. That means, that implies we're talking about something that's legacy. What is legacy? Is it contextual? Do we have a hard definition? Is there a best practice to follow? Is there something public people can look at? Okay, if my app, or system fits this recipe then it’s considered legacy, like a diagnosis that has a consensus. [0:07:58.0] CU: I can certainly tell you how you can't necessarily define legacy. One of the ways is by the year that it was written. You can certainly say that there are certainly shops who are writing legacy code today. They're still writing legacy code. As soon as they're done with a project, it's instantly legacy. There's people that are trying to define, like another Michael Feathers definition, which is I think any application that doesn't have tests, I don't know that that fits what – our practice necessarily sees legacy as. Basically, anything that's occurred a significant amount of technical debt, regardless of when the application was written or conceived fits into that legacy bucket. Really, our work isn't necessarily as concerned about whether something's legacy or not as much as is there pain that we can solve with our practice? Like I said, we've modernized things that were in for all intents and purposes, quite modern in terms of the year they were written. [0:08:53.3] SA: Yeah. I would double down on the pain. Legacy to us often is something that was written as a prototype a year ago. Now it's ready to prove itself. It's going to be scaled up, but it wasn't built with scale in mind, or something like that. Even though it may be the latest technology, it just wasn't built for the load, for example. Sometimes legacy can be – the pain is we have applications on a mainframe and we can't find Cobol developers and we're leasing a giant mainframe and it's costing a lot of money, right? There's different flavors of pain. It also could be something as simple as a data center move. 
Something like that, where we've got all of our applications running on Iron and we need to go to a virtual data center somewhere, whether it's cloud or on-prem. Each one of those to us is legacy. It's all about the pain. [0:09:47.4] CU: I think is miserable as that might sound, that's really where it starts and is listening to that pain and hearing directly from customers what that pain is. Sounds terrible when you think about it that you're always in search of pain, but that isn't indeed what we do and try to alleviate that in some way. That pain is what dictates the solution that you come up with, because there are certain kinds of pain that aren't going to be solved with say, modernization approach, a more a platformed approach even. You have to listen and make sure that you're applying the right medicine to the right pain. [0:10:24.7] OP: Seems like an interesting thing bringing what you said, Chris, and then what you said earlier, Shaun. Shaun you had mentioned the target platform doesn't necessarily matter, at least upfront. Then Chris, you had implied bringing the right thing in to solve the pain, or to help remedy the pain to some degree. I think what's interesting may be about the perspectives for those on this call and you too is a lot of times our entry points are a lot more focused with infrastructure and platform teams, where they have these objectives to solve, like cost and ability to scale and so on and so forth. It seems like your entry point, at least historically is maybe a little bit more focused on finding pain points on more of the app side of the house. I'm wondering if that's a fair assessment, or if you could speak to how you find opportunities and what you're really targeting. [0:11:10.6] SA: I would say that's a fair assessment from the perspective of our services team. We're mainly app-focused, but it's almost there's a two-pronged approach, where there's platform pain and application pain. What we've seen is often solving one without the other is not a great solution, right? I think that's where it's challenging, because there's so much to know, right? It's hard to find one team or one person who can point out the pain on both sides. It just depends on often, how the customer approaches us. If they are saying something like, “We’re a credit card company and we're getting our butts kicked by this other company, because they can do biometrics and we can't yet, because of the limitations of our application.” Then we would approach it from the app-first perspective. If it's another pain point, where our operations, day two operations is really suffering, we can't scale, where we have issues that the platform is really good at solving, then we may start there. It always tends to merge together in the end. [0:12:16.4] CU: You might be surprised how much variety there is in terms of the drivers for people coming to us. There are a lot of cases where the work came to us by way of the platform work that we've done. It started with our sister team who focuses on the platform side of things. They solve the infrastructure problems ahead of us and then we close things out on the application side. We if our account teams and our organization is really listening to each individual customer that you'll find that there – that the pain is drastically different, right? There are some cases where the driver is cost and that's an easy one to understand. 
There are also drivers that are usually like a date, such as, this data center goes dark on this date and I have to do something about it. If I'm not out of that data center, then my apps no longer run. The solution to that is very different than the solution you would have to, "Look, my application is difficult for me to maintain. It takes me forever to ship features. Help me with that." There's two very different solutions to those problems, but each of which are things that come our way. It's just that the former probably comes in by way of our platform team.

[0:13:31.1] DC: Yeah, that's an interesting space to operate in, the application transformation stuff. I've seen entities within some of the larger companies that represent this field as well. Sometimes that's called production engineering, or there are a few other examples of this that I'm aware of. I'm curious how you see that happening within larger companies. Do you find that there is a particular size entity that is actually striving to do this work with the tools that they have internally, or do you find that typically, most companies just need something like application transformation, so you can come in and help them figure this part of it out?

[0:14:09.9] SA: We've seen a wide variety, I think. One of them is maybe a company really has a commitment to get to the cloud and they get a platform, and then they start putting some simple apps up, just to learn how to do it. Then they get stuck with, "Okay. Now how do we, with trust, get some workloads that are running our business onto it?" They will often bring us in at that point, because they haven't done it before. Experimenting with something that valuable to them usually means that they slow down. There's other times where we've come in to modernize applications, whether it's a particular business unit, for example, that may have been trying to get off the mainframe for the last two years. They're smart people, but they get stuck again, because they haven't figured out how to do it. What often happens, and Chris can talk about some examples of this, is once we help them figure out how to modernize, or the recipes to follow to start getting their systems systematically onto the platform and modernized, they tend to form a competency area around it, where they'll start to staff it with the people who are really interested and they take over where we started from.

[0:15:27.9] CU: There might be a little bit of bias to that response, in that typically, in order to even get in the door with us, you're probably a Fortune 100, or at least a 500, or government, or something to that effect. We're going to be seeing people that, one, have a mainframe to begin with. Two, would have, say, the capacity to fund a dedicated transformation team, or to build a unit around that. You could say that the smaller an organization gets, maybe the easier it is to just have the entire organization write software the modern way to begin with. At least on the large side, we do tend to see people try to build, and they'll use different names for it, a dedicated center of excellence or practice around modernization. Our hope is to help them build that and hopefully put them in a position that that can eventually disappear, because eventually, you should no longer need that as a separate discipline.

[0:16:26.0] JR: I think that's an interesting point.
For me, I argue that you do need it going forward, because of the cognitive overhead between understanding how your application is going to thrive on today's complex infrastructure models and understanding how to write code that works. I think that asking one person to hold all of that in their head all the time is a little too much, a little too far to go sometimes.

[0:16:52.0] CU: That's probably true. When you consider the size of the portfolios and the size of the backlog for modernization that people have, I mean, people are going to be busy on that for a very long time anyway. Even if it is finite, it still has a very long lifespan at a minimum.

[0:17:10.7] SA: At a certain point, it becomes like painting the Golden Gate Bridge. As soon as you finish, you have to start again, because of technology changes, or business needs, and that sort of thing. It's probably a very dynamic organization, but there's a lot of overlap. The pioneering teams set a lot of the guidelines for how the following teams can be doing their modernization work and it just keeps rolling down the track that way. It may be that people are busy modernizing applications off of WebLogic, or WebSphere, and it takes two years or more to get that completed for this enterprise. It was 20, 50 different projects. To them, it was brand-new each time, which is cool actually to come into that.

[0:17:56.3] JR: I'm curious, and I'd definitely love to hear from Olive. I have one more question before I pass it on, and I think we'd love to hear your thoughts on all of this. The question I have is, when you're going through your day-to-day working on .NET and Java applications and helping people figure out how to go about modernizing them, what we've talked about so far represents some of the deeper architectural issues and stuff. You've already mentioned 12-factor and being able to move, or thinking about the way that you frame the application as far as inputs, the things that it takes to configure it, or thinking about the lifecycle of those things. Are there some other common patterns that you see across the two practices, Java and .NET, that you think are just concrete examples of stuff that people should take away maybe from this episode, that they could look at in their app if they're trying to get ahead of the game a little bit?

[0:18:46.3] SA: I would say a big part of the commonality that Chris and I both work on a lot is we have a methodology called the SWIFT methodology that we use to help discover how the applications really want to behave, and define a notional architecture that is, again, agnostic of the implementation details. We'll often come in with the same process, and I don't need to be a .NET expert in a .NET shop to figure out how the system really wants to be designed, how you want to break things into microservices; then the implementation becomes where those details are. Chris and I both collaborate on a lot of that work. It makes you feel a little bit better about the output when you know that the technology isn't as important. You get to actually pick which technology fits the solution best, as opposed to starting with the technology and letting a solution form around it, if that makes sense.

[0:19:42.4] CU: Yeah. I'd say the interesting thing is just how difficult it is, while we're going through the SWIFT process with customers, to get them to not get terribly attached to the nouns of the technology and the solution.
They've usually gone in where it's not just a matter of the language, but they have something picked in their head already for data storage, for messaging, etc., and they're deeply attached to some of these decisions, deeply and emotionally attached to them. Fundamentally, when we're designing a notional architecture, as we call it, really you should be making decisions on what nouns you're going to pick based on that architecture, so you use the tools that fit it. That's generally a bit of a process the customers have to go through. It's difficult for them to do that, because the more technical their stakeholders tend to be, often the more attached they are to the individual technology choices, and breaking that is the principal role for us.

[0:20:37.4] OP: Is there any help, or any investment, or any coordination with those vendors, the purveyors of the technologies that legacy applications are perhaps built on, or indeed the platforms they're running on? Is there any help on that side from those vendors to help with application transformation, or making those applications better? Or do organizations have to rely on a completely independent team like you guys to come in and help them with that? Do you understand my point? Is there anything internal? Like you mentioned WebLogic, WebSphere; do the purveyors of those platforms try and drive the transformation from within there? Or is it that organizations who are running those apps have to rely on independent companies like you, or like us, to help them with that?

[0:21:26.2] SA: I think some of it depends on what the goal of the modernization is. If it's something like, we no longer want to pay Oracle licensing fees, then of course, obviously the WebLogic teams aren't going to be happy to help. That's not always the case. Sometimes it's a case where we may have a lot of WebLogic. It's working fine, but we just don't like where it's deployed and we'd like to containerize it, move it to Kubernetes, or something like that. In that case, they're more willing to help. At least in my experience, I've found that the technology vendors are rightfully focused just on upgrading things from their perspective, and they want to own the world, right? WebLogic will say, "Hey, we can do everything. We have clustering. We have messaging. We've got good access to data stores." It's hard to find a technology vendor that has that broader vision, or the discipline to not try to fit their solutions into the problem, when maybe they're not the best fit.

[0:22:30.8] CU: I think it's a broad generalization, but specifically on the Java side it seems that, at least with app server vendors, the status quo is usually serving them quite well. Quite often, we have a bit of an adversarial relationship with them on occasion. I could certainly say that within the .NET space, we've worked relatively collaboratively with Microsoft on things like Steeltoe, which is, I wouldn't say a Spring Boot analog, but at least a microservice library that helps people achieve 12-factor cloud nativeness. That's something where I guess Microsoft represents both the legacy side, but also the future side, and we're part of a solution together there.

[0:23:19.4] SA: Actually, that's a good point, because the other way that we're seeing vendors be involved is in creating operators on the Kubernetes side, or Cloud Foundry tiles, something that makes it easy for their system to still be used in the new world. That's definitely helpful as well.

[0:23:38.1] CC: Yeah, that's interesting.
[0:23:39.7] JR: Recently, myself and people on my team went through a training from both Shaun and Chris, interestingly enough in Colorado, about this thing called the SWIFT methodology. I know it's a really important methodology to how you approach some of the application transformation-like engagements. Could you two give us a high-level overview of what that methodology is?

[0:24:02.3] SA: I want to hear Chris go through it, since I always answer that question first.

[0:24:09.0] CU: Sure. I figured since you were the inventor, you might want to go with it, Shaun, but I'll give it a stab anyway. Swift is a series of exercises that we use to go from a business problem into what we call a notional architecture for an application. The one thing that you'll hear Shaun say all the time, which I think is pretty apt, is that we're trying to understand how the application wants to behave. This is a very analog process, especially at the beginning. It's one where we get people who can speak about the business problem behind an application and the business processes behind an application. We get them into a room, a relatively large room typically, with a bunch of wall space, and we go through a series of exercises with them where we tease that business process apart. We start with a relatively lightweight version of Alberto Brandolini's EventStorming method, where we map out with the subject matter experts what all of the business events that occur in a system are. That is a non-technical exercise, a completely non-technical exercise. As a matter of fact, all of this uses sticky notes and arts and crafts. After we've gone through that process, we transition into the Boris diagram, which is an exercise of Shaun's design, where we take the domains, or at least the service candidates, that we've extrapolated from that EventStorming and start to draw out a notional architecture. Like an 80% idea of what we think the architecture is going to look like. We're going to do this for slices, thin slices, of that business problem. At that point, it starts to become something that a software developer might be interested in. We have an exercise called Snappy that generally occurs concurrently, which translates that message flow, Boris diagram thing into something that's at least a little bit closer to what a developer could act upon. Again, these are sticky note and analog exercises that generally go on for about a week or so, things that we do interactively with customers, in a purely non-technical way, at least at first, so that we can understand that problem and tell you what an architecture is that you can then act on. We try to position this as: as a customer, you already have all of the answers here. What we're going to do as facilitators of these is try to pull those out of your head. You just don't know how to get to the truth, but you already know that truth, and we're going to design this architecture together. How did I do, Shaun?

[0:26:44.7] SA: I couldn't have said it better myself. I would say one of the interesting things about this process is the reason why it was developed the way it was. In the world of technology, and especially among engineers, I've definitely seen that you have two modes of thought when you come from the business world to the technical world. Often, engineers will approach a problem in a very different, very focused, blindered way compared to business folks.
Ultimately, what we try to think of is that the purpose of the software is to enable the business to run well. In order to do that, you really need to understand, at least at a high level, what the heck is the business doing? Surprisingly and almost consistently, the engineering team doing the work is separated from the business team enough that it's like playing the telephone game, right? Where the business folks say, "Well, I told them to do this." The technical team is like, "Oh, awesome. Well then, we're going to use all this amazing technology and build something that really doesn't support you." This process really brings everybody together to discover how the system really wants to behave. Also as a side effect, you get everybody agreeing that yes, that is the way it's supposed to be. It's exciting to see teams come together that actually never even worked together before. You see the light bulbs go on and say, "Oh, that's why you do that." The end result is, in a week, we can go from nobody really knows each other, or quite understands the system as a whole, to we have a backlog of work that we can prioritize based on the learnings that we have, and feel pretty comfortable that the end result is going to be pretty close to how we want to get there. Then the biggest challenge is defining how we get from point A to point B. Part of that layering of the Swift method is knowing when to ask those questions.

[0:28:43.0] JR: A micro follow-up and then I'll keep my mouth shut for a little bit. Is there a place that people could go online to read about this methodology, or just get some ideas of what you just described?

[0:28:52.7] SA: Yeah. You can go to swiftbird.us. That has a high-level overview, more the public-facing side of how the methodology works. There are also internal resources that are constantly being developed as well. That's where I would start.

[0:29:10.9] CC: That sounds really neat. As always, we are going to have links in the show notes for all of this. I checked out the website for the EventStorming book. There is a resources page there and it has a list of a bunch of presentations. Sounds very interesting. I wanted to ask Chris and Shaun, have you ever seen, or heard of, a case where a company went through the transformation, or modernization process, and then rolled back to their legacy system for any reason?

[0:29:49.2] SA: That's actually a really good question. It implies that often, the way people think about modernization would be more of a big bang approach, right? Where at a certain point in time, we switch to the new system. If it doesn't work, then we roll back. Part of what we try to do is have incremental releases, where we're actually putting small slices into production, where you're not rolling back a whole system from modern back to legacy. It's more that you have a week's worth of work that's going into production for one of the thin slices, like Chris mentioned. If that doesn't work, or there's something that is unexpected about it, then you're rolling back just a small chunk. You're not really jumping off a cliff for modernization. You're really taking baby steps. If it's two steps forward and one step back, you're still making a lot of really good progress. You're also gaining confidence as you go that in the end, in two years, you're going to have a completely shiny new modern system and you're comfortable with it, because you're getting there an inch at a time, as opposed to taking a big leap.
[0:30:58.8] CU: I think what's interesting about a lot of large organizations is that they've been so used to doing big bang releases in general. This goes from software to even process changes in their organizations. They've become so used to that that it often doesn't even cross their mind that it's possible to do something incrementally. We really do oftentimes have to spend time getting buy-in from them on that approach. You'd be surprised that even in industries that you'd think would be fantastic at managing risk, when you look at how they actually deal with deployment of software and the rolling out of software, they're oftentimes taking approaches that maximize their risk. There's no surer way to make something riskier than doing a big bang. Yeah, as Shaun mentioned, the specifics of Swift are to find a way, so that you can understand where, and get a roadmap for how, to carve out incremental slices, so that you can strangle a large monolithic system slowly over time. That's something that's pretty powerful. Once someone gets bought in on that, they absolutely see the value, because they're minimizing risk. They're making small changes. They're easy to roll back one at a time. You might see people who would stop somewhere along the way, and we wouldn't necessarily say that that's a problem, right? Just like not every app needs to be modernized, maybe there's portions of systems that could stay where they are. Is that a bad thing? I wouldn't necessarily say that it is. Maybe that's the best way for that organization.

[0:32:35.9] DC: We've bumped into this idea now a couple of different times and I think that both Chris and Shaun have brought this up. It's a little prelude to a show that we are planning on doing. One of the operable quotes from that show is, the greatest enemy of knowledge is not ignorance, it is the illusion of knowledge. It's a quote by Stephen Hawking. It speaks exactly to that, right? When you come to a problem with a solution in your mind, it is frequently difficult to understand the problem on its merits, right? It's really interesting seeing that crop up again in this show.

[0:33:08.6] CU: I think oftentimes, the advantage of a very discovery-oriented method, such as Swift, is that it allows you to start from scratch with a problem set, with people maybe that you aren't familiar with, and you don't have some of that baggage and can ask the dumb questions to get to some of the real answers. Another phrase that I know Shaun likes to use is that our roles as facilitators of this method are to ask dumb questions. I mean, you just can't put enough value on that, right? The only way that you're going to break that established thinking is by asking questions at the root.

[0:33:43.7] OP: One question, actually there was something recently that happened in the Kubernetes community, which I thought was pretty interesting and I'd like to get your thoughts on it, which is that Istio, which is a project that operates as a service mesh, I'm sure you all are familiar with it, has recently decided to unmodernize itself in a way. It was originally developed as a set of microservices. They have had no end of difficulty in optimizing the different interactions between those services and the nodes. Then recently, they decided this might be a good example of when to monolith, versus when to microservice. I'm curious what your thoughts are on that, or if you have familiarity with it.
[0:34:23.0] CU: I'm not going to necessarily speak too much to this. Time will tell as to whether the monolithing that they're doing at the moment is appropriate or not. Quite often, the starting point for us isn't necessarily a monolith. What it is is a proposed architecture coming from a customer that they're proud of: this is my microservice design. You'll see a simple system with maybe hundreds of nano-services. The surprise that they have is that the recommendation from us, coming out of our Swift sessions, is that actually, you're overthinking this. We're going to take that idea that you have anyway and maybe shrink that down to, say, tens of services, or just a handful of services. I think one of the mistakes that people make within enterprises, or on microservices at the moment, is to say, "Well, that's not a microservice. It's too big." Well, how big or how small makes something a microservice, right? Oftentimes, we are, at least conceptually, taking and combining services based on the customer's architecture; it's very common.

[0:35:28.3] SA: Monoliths aren't necessarily bad. I mean, people use it almost as a pejorative, "Oh, you have a monolith." In our world it's like, well, monoliths are bad when they're bad. If they're not bad, then that's great. The corollary to that is micro-servicing for the sake of micro-servicing isn't necessarily a good thing either. When we go through the Boris exercise, really what we're doing is showing how domains, or capabilities, relate to each other. That happens to map really well, in our opinion, to first-cut microservices, right? You may have an order service, or a customer service that manages some of that. Just because we map capabilities and how they relate to each other, it doesn't mean the implementation can't even be a single monolith, componentized inside, right? Part of what we try really hard to do is avoid the religion of monolith versus microservices, or even having to spend a lot of time trying to define what a microservice is to you. It's really more of, well, a system wants to behave this way. Now, surprise, you just did domain-driven design and mapped out some good 12-factor compliant microservices, should you choose to build it that way, but there are other constraints that always apply at that point.

[0:36:47.1] OP: Is there more traction in organizations implementing this methodology on net new business, rather than currently running businesses or applications? Are there situations, more so, that you have seen where a new project, or new functionality within a business, starts to drive and implement this methodology, and then it creeps through the other lines of business within the organization, because that first one was successful?

[0:37:14.8] CU: I'd say that based on the nature of who our customers are as an app transformation practice, based on who those customers are and what their problems are, we're generally used to having a starting point of a process or software that exists already. There's nothing at all to mandate that it has to be that way. As a matter of fact, with folks from our labs organization, we've used these methods in what you could probably call greener fields. At the end of the day, when you have a process, or even a candidate process, something that doesn't exist yet, as long as you can get those ideas onto sticky notes and onto a wall, this is a very valid way of turning ideas into an architecture and an architecture into software.
[0:37:59.4] SA: We've seen that happen in practice a couple times, where maybe a piece of the methodology was used, like EventStorming, just to get a feel for how the business wants to behave. Then to rapidly try something out in maybe more of an evolutionary architecture approach, an MVP approach: let's just build something from a user perspective just to solve this problem and then try it out. If it starts to catch hold, then iterate back and drill into it a little bit more and say, "All right. Now we know this is going to work." We're modernizing something that may be two weeks old just because, hooray, we proved it's valuable. We didn't necessarily have to spend as much upfront time on designing that as we would on a system that's already proven itself to be of business value.

[0:38:49.2] OP: This might be a bit of a broad question, but what defines success of projects like this? I mean, we mentioned earlier about cost, and maybe some of the drivers are to move off certain mainframes and things like that. If you're undergoing an application transformation, it seems to me like it's an ongoing thing. How do enterprises try to evaluate that return on investment? How does it relate to success criteria? I mean, faster release times, etc., potentially might be one, but how is that typically evaluated, with somebody internally saying, "Look, we are running a successful project"?

[0:39:24.4] SA: I think part of what we try to do upfront is identify what the objectives are for a particular engagement. Often, those objectives start out with one thing, right? It's too costly to keep paying IBM or Oracle for WebLogic, or WebSphere. As we go through and talk through what types of things we can solve, those objectives get added to, right? It may be the first thing, our primary objective, is we need to start moving workloads off of the mainframe, or workloads off of WebLogic, or WebSphere, or something like that. There's other objectives that are part of this too, which can include things as interesting as developer happiness, right? They have a large team of 150 developers who are really just getting sick of doing the same old thing and want to work with new technology. That's actually a success criteria maybe down the road a little bit, but it's more of a nice-to-have. It's a long-winded way of saying, when we start and incept these projects, we usually start out with: let's talk through what our objectives are and how we measure success, the key results for those objectives. As we're iterating through, we keep measuring ourselves against those. Sometimes the objectives change over time, which is fine, because you learn more as you're going through it. Part of that incremental, iterative process is measuring yourself along the way, as opposed to waiting until the end.

[0:40:52.0] CC: Yeah, makes sense. I guess these projects, as you say, are continuous and constantly self-adjusting and self-analyzing, re-evaluating success criteria as they go along. Yeah, so that's interesting.

[0:41:05.1] SA: One other interesting note, though: personally, we like to measure ourselves by whether, when we see one project moving along, the customer starts to form other projects that are similar. Then we know, "Okay, great. It's taking hold." Now other teams are starting to do the same thing. We've become the cool kids and people want to be like us. The only reason that happens is when you're able to show success, right? Then other teams want to be able to replicate that.
[0:41:32.9] CU: The customer's OKRs oftentimes can be a little bit easier to understand. Sometimes they're not. Typically, they involve time or money, where I'm trying to take release times from X to Y, or decrease my spend from X to Y. The way that we, I think, measure ourselves as a team is around how clean we leave the campsite when we're done. We want the customers to be able to run with this and to continue to do this work and to be experts. As much as we'd love to take money from someone forever, we have a lot of people to help, right? Our goal is to help build that practice and center of excellence and expertise within an organization, so that as their goals or ideas change, they have a team to help them with that, and we can ride off into the sunset and go help other customers.

[0:42:21.1] CC: We are coming up to the end of the episode, unfortunately, because this has been such a great conversation. It turned out to be more of an interview style, which was great. It was great getting the chance to pick your brains, Chris and Shaun. Going along with the interview format, I'd like to ask you: is there any question that wasn't asked, but you wish had been asked? The intent here is to illuminate what this process is, for us and for people who are listening, especially people who might be in pain but might be thinking this is just normal.

[0:42:58.4] CU: That's an interesting one. I guess to some degree, that pain is unfortunately normal. That's just unfortunate. Our role is to help solve that. I think complacency is the absolute worst thing in an organization. If there is pain, rather than saying that the solution won't work here, let's start to talk about solutions to that. We've seen customers of all shapes and sizes. No matter how large or cumbersome they might be, we've seen a lot of big organizations make great progress. If your organization's in pain, you can use them as an example. There is light at the end of the tunnel.

[0:43:34.3] SA: It's usually not a train.

[0:43:35.8] CU: Right. Usually not.

[0:43:39.2] SA: Other than that, I think you asked all the questions about what we always try to convey to customers: how we do things, what modernization is. There's probably a little bit about re-platforming, doing the bare minimum to get something onto the cloud. We didn't talk a lot about that, but it's a little bit less meta, anyway. It's more technical and more recipe-driven as you discover what the workload looks like. It's more about, is it something where we can easily do a cf push, or just create a container and move it up to the cloud with minimal changes? Conceptually, there's not a lot of complexity. Implementation-wise, there's still a lot of challenges there too. They're just not as fun to talk about, for me anyway.

[0:44:27.7] CC: Maybe that's a good excuse to have some of our colleagues back on here with you.

[0:44:30.7] SA: Absolutely.

[0:44:32.0] DC: Yeah, in a previous episode we talked about persistence and state and those sorts of things, how they relate to your applications, and how they come up when you're thinking about re-platforming, and even just where you're planning on putting those applications. For us, that question comes up quite a lot. That's almost step zero, trying to figure out the state model and those sorts of things.

[0:44:48.3] CC: That episode was named States in Stateless Apps, I think. We are at the end, unfortunately. It was so great having you both here. Thank you Duffie, Shaun, Chris, and I'm going by the order I'm seeing people on my video.
Josh and Olive. Until next time. Please make sure to let us know your feedback. Subscribe. Give us a thumbs up. Give us a like. You know the drill. Thank you so much. Glad to be here. Bye, everybody.

[0:45:16.0] JR: Bye all.

[0:45:16.5] CU: Bye.

[END OF EPISODE]

[0:45:17.8] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.

[END]

02 mar 2020 - 45 min
episode Should I Kubernetes? (Ep 18) artwork
Should I Kubernetes? (Ep 18)

The question of diving into Kubernetes is something that faces us all in different ways. Whether you are already on the platform, are considering transitioning, or are thinking about what is best for your team moving forward, the possibilities and the learning-curve make it a somewhat difficult question to answer. In this episode, we discuss the topic and ultimately believe that an individual is the only one who can answer that question well. That being said, the capabilities of Kubernetes can be quite persuasive and if you are tempted then it is most definitely worth considering very seriously, at least. In our discussion, we cover some of the problems that Kubernetes solves, as well as some of the issues that might arise when moving into the Kubernetes space. The panel shares their thoughts on learning a new platform and compare it with other tricky installations and adoption periods. From there, we look at platforms and how Kubernetes fits and does not fit into a traditional definition of what a platform constitutes. The last part of this episode is spent considering the future of Kubernetes and how fast that future just might arrive. So for all this and a bunch more, join us on The Podlets Podcast, today! Follow us: https://twitter.com/thepodlets [https://twitter.com/thepodlets] Website: https://thepodlets.io [https://thepodlets.io] Feeback: info@thepodlets.io [info@thepodlets.io] https://github.com/vmware-tanzu/thepodlets/issues [https://github.com/vmware-tanzu/thepodlets/issues] Hosts: * Carlisia Campos [https://twitter.com/carlisia] * Josh Rosso [https://twitter.com/joshrosso] * Duffie Cooley [https://twitter.com/mauilion] * Bryan Liles [https://twitter.com/bryanl] Key Points From This Episode: * The main problems that Kubernetes solves and poses. * Why you do not need to understand distributed systems in order to use Kubernetes. * How to get around some of the concerns about installing and learning a new platform. * The work that goes into readying a Kubernetes production cluster. * What constitutes a platform and can we consider Kubernetes to be one? * The two ways to approach the apparent value of employing Kubernetes. * Making the leap to Kubernetes is a personal question that only you can answer. * Looking to the future of Kubernetes and its possible trajectories. * The possibility of more visual tools in the UI of Kubernetes. * Understanding the concept of conditions in Kubernetes and its objects. * Considering appropriate times to introduce a team to Kubernetes. Quotes: “I can use different tools and it might look different and they will have different commands but what I’m actually doing, it doesn’t change and my understanding of what I’m doing doesn’t change.” — @carlisia [https://twitter.com/carlisia] [0:04:31] “Kubernetes is a distributed system, we need people with expertise across that field, across that whole grouping of technologies.” — @mauilion [https://twitter.com/mauilion] [0:10:09] “Kubernetes is not just a platform. 
Kubernetes is a platform for building platforms.” — @bryanl [https://twitter.com/bryanl] [0:18:12] Links Mentioned in Today’s Episode: Weave — https://www.weave.works/docs/net/latest/overview/ [https://www.weave.works/docs/net/latest/overview/] AWS — https://aws.amazon.com/ [https://aws.amazon.com/] DigitalOcean — https://www.digitalocean.com/ [https://www.digitalocean.com/] Heroku — https://www.heroku.com/ [https://www.heroku.com/] Red Hat — https://www.redhat.com/en [https://www.redhat.com/en] Debian — https://www.debian.org/ [https://www.debian.org/] Canonical — https://canonical.com/ [https://canonical.com/] Kelsey Hightower — https://github.com/kelseyhightower [https://github.com/kelseyhightower] Joe Beda — https://www.vmware.com/latam/company/leadership/joe-beda.html [https://www.vmware.com/latam/company/leadership/joe-beda.html] Azure — https://azure.microsoft.com/en-us/ [https://azure.microsoft.com/en-us/] CloudFoundry — https://www.cloudfoundry.org/ [https://www.cloudfoundry.org/] JAY Z — https://lifeandtimes.com/ [https://lifeandtimes.com/] OpenStack — https://www.openstack.org/ [https://www.openstack.org/] OpenShift — https://www.openshift.com/ [https://www.openshift.com/] KubeVirt — https://kubevirt.io/ [https://kubevirt.io/] VMware — https://www.vmware.com/ [https://www.vmware.com/] Chef and Puppet — https://www.chef.io/puppet/ [https://www.chef.io/puppet/] tgik.io — https://www.youtube.com/playlist?list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa [https://www.youtube.com/playlist?list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa] Matthias Endler: Maybe You Don't Need Kubernetes - https://endler.dev/2019/maybe-you-dont-need-kubernetes [https://endler.dev/2019/maybe-you-dont-need-kubernetes] Martin Tournoij: You (probably) don’t need Kubernetes - https://www.arp242.net/dont-need-k8s.html [https://www.arp242.net/dont-need-k8s.html] Scalar Software: Why most companies don't need Kubernetes - https://scalarsoftware.com/blog/why-most-companies-dont-need-kubernetes [https://scalarsoftware.com/blog/why-most-companies-dont-need-kubernetes] GitHub: Kubernetes at GitHub - https://github.blog/2017-08-16-kubernetes-at-github [https://github.blog/2017-08-16-kubernetes-at-github] Debugging network stalls on Kubernetes - https://github.blog/2019-11-21-debugging-network-stalls-on-kubernetes/ [https://github.blog/2019-11-21-debugging-network-stalls-on-kubernetes/] One year using Kubernetes in production: Lessons learned - https://techbeacon.com/devops/one-year-using-kubernetes-production-lessons-learned [https://techbeacon.com/devops/one-year-using-kubernetes-production-lessons-learned] Kelsey Hightower Tweet: Kubernetes is a platform for building platforms. It's a better place to start; not the endgame - https://twitter.com/kelseyhightower/status/935252923721793536?s=2 [https://twitter.com/kelseyhightower/status/935252923721793536?s=2] Transcript: EPISODE 18 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.9] JR: Hello everyone and welcome to The Podlets Podcast where we are going to be talking about should I Kubernetes? 
My name is Josh Rosso and I am very pleased to be joined by Carlisia Campos.

[0:00:55.3] CC: Hi everybody.

[0:00:56.3] JR: Duffie Cooley.

[0:00:57.6] DC: Hey folks.

[0:00:58.5] JR: And Bryan Liles.

[0:01:00.2] BL: Hi.

[0:01:03.1] JR: All right everyone. I'm really excited about this episode because I feel like as Kubernetes has been gaining popularity over time, it's been getting its fair share of promoters and detractors. That's fair for any piece of software, right? I've pulled up some articles and we put them in the show notes about some of the different perspectives on both successes and perhaps failures with Kube. But before we dissect some of those, I was thinking we could open it up more generically and think about, based on our experience with Kubernetes, what are some of the most important things that we think Kubernetes solves for?

[0:01:44.4] DC: All right, my list is very short. What Kubernetes solves, from my point of view, is that it presents an interface that knows how to run software, and the best part about it is that it's a standard interface. I can target Kubernetes rather than targeting the underlying hardware. I know certain things are going to be there, I know certain networking is going to be there. I know how to control memory. And actually, that's the only reason that I really would give, say, for Kubernetes: we need that standardization, and you don't want to set up VMs, I mean, assuming you already have a cluster. This simplifies so much.

[0:02:29.7] BL: For my part, I think it's lifecycle stuff that's really the biggest driver for my use of it and for my particular fascination with it. I've been in roles in the past where I was responsible for ensuring that some magical mold of applications on a thousand machines would magically work, and I would have all the dependencies necessary, and they would all agree on what those dependencies were, and it would actually just work, and that was really hard. I mean, getting to a known state in that situation is very difficult. Having something with both the abstraction of containers and the abstraction of container orchestration, the ability to deploy those applications and all those dependencies together, and the ability to change that application and its dependencies using an API, that's the killer part for me.

[0:03:17.9] CC: For me, from the perspective of a developer, it's very much what Duffie just said, but more so the uniformity that comes with all those bells and whistles that we get by having that API and all of the features of Kubernetes. We get such uniformity across such a really large surface, and so if I'm going to deploy apps, if I'm going to run containers, what I have to do for one application is the same for another application. If I go work for another company that uses Kubernetes, it is the same, and if that Kubernetes is a hosted Kubernetes or if it's self-managed, it will be the same. I love that consistency and that uniformity. There are many tools that help, and they are customized; there's help for installing and composing specific things for your needs. But the understanding of what you're doing is the same, right? I can use different tools and it might look different and they will have different commands, but what I'm actually doing doesn't change, and my understanding of what I'm doing doesn't change. I love that. Being able to do my work in the same way, you know, that alone for me makes it worthwhile.
[0:04:56.0] JR: Yeah, I think my perspective is pretty much the same as what you all said, and I think one way that I kind of look at it too is Kubernetes does a better job of solving the concerns you just listed than I would probably be able to build myself, or my team would be able to solve for ourselves, in a lot of cases. I'm not trying to say that specialization around your business case or your teams isn't appropriate at times, it's just, at least for me, to your point Carlisia, I love that abstraction that's consistent across environments. It handles a lot of the things, like Bryan was saying, about CPU, memory, resources, and thinking through all those different pieces. I wanted to take what we just said and maybe turn it a bit toward some of the common things that people run into with Kubernetes, and just to maybe hit on a piece of low-hanging fruit that I think is oftentimes a really fair perspective: Kubernetes is really hard to operate. Sure, it gives you all the benefits we just talked about, but managing a Kubernetes cluster? That is not a trivial task. And I just wanted to kind of open that perspective up to all of us, you know? What are your thoughts on that?

[0:06:01.8] DC: Well, the first thought is it doesn't have to be that way. I think that's a fallacy that a lot of people fall into: it's hard. Guess what? That's fine, we're in the sixth year of Kubernetes, we're not in the sixth year of a stable release. It's hard to get started with Kubernetes, and what happens is we use that as an excuse to say, well, you know what? It's hard to get started with, so it's a failure. You know something else that was hard to get started with, whenever I started with it in the 90s? Linux. You downloaded it, and downloading it meant 30 floppy disks. There was download corruption, real things: Zmodem, Xmodem, Ymodem. This is real, a lot of people don't know about this. And then, you had to find 30 working floppy disks and you had to transfer 30, you know, one-and-a-half-megabyte images, and it still took a long time per floppy disk, and then you had to run the installer. And then most likely, you had to build a kernel. Downloading, transferring, installing, building a kernel: there were four places where it could fail, and this was just to get you to a login prompt, before you even had a windowing system. With Kubernetes, we had this issue. People were installing Kubernetes, there's cloud vendors who are installing it, and then there's people who were installing it on who knows what hardware. Guess what? That's hard, and it's not even just the physical servers now, it's networking. Well, how are you going to create a network that works across all your servers? Well, you're going to need an overlay; which one are you going to use, Calico? Use Weave? You're going to need something else that you created, or something else that works. Yeah, we're still just figuring out where we need to be, but these problems are getting solved. This will go away.

[0:07:43.7] BL: I'm living that life right now, I just got a new laptop and I'm a Linux desktop kind of guy, and so I'm doing it right now. What does it take to actually get a recent enough kernel that the hardware that is shipped with this laptop is supported, you know? It's like, those problems continue. Even though Linux has been around and considered stable, and it's the underpinning of much of what we do on the internet today, we still run into these things; it's still very much a thing.
[0:08:08.1] CC: I think also, there's a factor of experience, for example. This is not the first time you've had to deal with this problem, right, Duffie? You've been using Linux on a desktop, so this is not the first piece of hardware that you've had to set up Linux on. You know where to go to find that information. Yeah, it's sort of a pain, but it's manageable. I think a lot of us are suffering from, gosh, I've never seen Kubernetes before, where do I even start? Or, I learned Kubernetes, but it's quite burdensome to keep up with everything. As opposed to, let's say, if 10 years from now we are still doing Kubernetes, you'll be like, yeah, okay, whatever. This is no big deal. Once we have done these things for a few years, we would not possibly say that it's hard. I don't think we would describe it that way.

[0:09:05.7] DC: I think there will still be some difficulty to it, but to your point, it's interesting. If I look back like five years ago, I was telling all of my friends: look, if you're a systems administrator, go learn how to do other things, go learn an API-centric model, go play with AWS, go play with tools like this, right? If you're a network administrator, learn to be a systems administrator, but you've got to branch out. You've got to figure out how to ensure that you're relevant in the coming time, with all the things that are changing, right? This is true, I was telling my friends this five years ago, 10 years ago, and I continue to tell my friends that today. If I look at the Kubernetes platform, the complexity it represents in operating it is almost tailor-made for those people who did do that, who decided to actually branch out and to understand why APIs are interesting, and to understand, you know, can they have enough of an understanding in a generalist way to become a reasonable systems administrator and a network administrator and, you know, start actually understanding the paradigms around distributed systems. Because those people are what we need to operate this stuff right now. I mean, Kubernetes is a distributed system; we need people with expertise across that field, across that whole grouping of technologies.

[0:10:17.0] BL: Or, don't. Don't do any of that.

[0:10:19.8] CC: Bryan, let me follow up on that, because I think it's great that you pointed that out, Duffie. I was thinking precisely in terms of being a generalist and understanding how Kubernetes works and being able to do most of it, but it is so true that some parts of it will always be very complex and will require expertise. For example, security. Dealing with certificates and making sure that that's working, or if you have particular needs for networking. But understanding the whole idea of these systems as they sit on top of Kubernetes, grasping that, I think, is going to become relatively simple for people with years of experience under their belt. Sorry, Bryan, that I cut you off.

[0:11:10.3] BL: That's fine, but now you gave me something else to say in addition to what I was going to say before. Here's the killer: you don't need to know distributed systems to use Kubernetes. Not at all. You can use a deployment, you can use a [inaudible] set, you can run a job; you can get workloads up on Kubernetes without having to understand that.
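To ground that point a little, here is a minimal sketch, not taken from the episode, of what "just use a deployment" can look like against the API being described. It assumes the official Kubernetes Python client, a reachable cluster with a valid kubeconfig, and placeholder names and image.

```python
# A minimal sketch of "just use a Deployment": no knowledge of schedulers, etcd,
# or any other distributed-systems internals is needed to get a workload running.
# Assumptions: `pip install kubernetes`, a reachable cluster, a valid kubeconfig,
# and placeholder names/image.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

labels = {"app": "hello-web"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the platform keeps three copies running and replaces failed ones
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Everything about scheduling, health checking, and restarting failed copies happens behind that one API call; none of the machinery underneath has to be understood to use it.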
But Kubernetes also gives you some good constructs, either in the Kubernetes API itself or in its client libraries, where you can build distributed systems in an easier way. But what I was going to say before that, though, is: I can't build a cluster. Well, don't. You know what you should do? Use a cloud vendor, use AWS, use Google, use Microsoft or, no, I mean, did I say Microsoft? Google and Microsoft. Use DigitalOcean. There's other people out there that do it as well; they can take care of all the hard things for you, and in three, four minutes, or 10 minutes if you're on certain clouds, you can have Kubernetes up and running, and you don't even have to think about a lot of these networking concerns to get started. I think that's a little bit of the FUD that we hear: "It's hard to install." Well, don't install it; you install it whenever you have to manage your own data centers. Guess what? When you have to manage your own data centers and you're managing networking and storage, there's a set of expertise that you already have on staff, and maybe they don't want to learn a new thing; that's a personal problem, that's not really a Kubernetes problem. Let's separate those concerns and not use our lack of ability, or our not wanting to, to stop us from actually moving forward.

[0:12:39.2] DC: Yeah. Maybe even taking that example a step forward. I think where this problem compounds, or this perspective sometimes compounds, about Kubernetes being hard to operate, is coming from some shops who have the perspective of: our operational concerns today aren't that complex. Why are we introducing this overhead, this thing that we maybe don't need? And you know, to your point Bryan, I wonder if we'd all entertain the idea, I'm sure we would, that maybe even, speaking to the cloud vendors, maybe even just a Heroku or something. Something that doesn't even concern itself with Kube, but can get your workload up and running and successful as quickly as possible. Especially if you're, like, maybe a small startup type persona, even that's adequate, right? It could have been not a failure of Kubernetes, but more so choosing the wrong tool for the job. Does that resonate with you all as well, does that make sense?

[0:13:32.9] DC: Yeah, you know, you can't build a house with a screwdriver. I mean, you probably could; it would hurt and it would take a long time. That's what we're running into. What you're really feeling is that operationally, you cannot bridge the gap between running your application and running your application in Kubernetes, and I think that's fair. That's actually a great thing: we've proven that the foundations are stable enough that now we can actually do research and figure out the best ways to run things. Because guess what? You have RPMs from Red Hat, and then you have debs from the Debian project, different ways of getting things; you have Snap from Canonical. It works and sometimes it doesn't. We need to actually figure out those constructs in Kubernetes; they're not free. These things did not come about just because someone said, "Hey, I think we should do this." They took many years. I was using RPM in the 90s and we need to remember that.

[0:14:25.8] JR: On that front, I want to maybe point a question to you, Duffie, if you don't mind. Another big concern that I know you deal with a lot is that Kubernetes is great, maybe I can get it up no problem, but to make it a viable deployment target at my organization, there's a lot of work that goes into making a Kubernetes cluster production ready, right?
That could involve how you integrate storage and networking and security and on and on. I feel like we end up at this tradeoff of: it's so great that Kubernetes is super extensible and customizable, but there is a certain amount of work that that kind of comes with, right? I'm curious, Duff, what's your perspective on that?

[0:15:07.3] DC: I want to make a point that brings back to something Bryan mentioned earlier, real quick, before I go on to that one. The point is that I completely agree that you do not have to actually be a distributed systems person to understand how to use Kubernetes, and if that were a bar, we would have set that bar in an incredibly inappropriate place. But from the operational perspective, that's what we were referring to. I also completely agree that, especially when we think about productionalizing clusters, if you're just getting into this Kubernetes thing, it may be that you want to actually farm that out to another entity to create and productionalize those clusters, right? You have a choice to make, just like you had a choice to make when AWS came along, just like you had a choice to make when thinking about virtual machines, right? You have a choice, and you continue to have a choice, about how far down that rabbit hole your company, as an engineering team or an engineering effort, wants to go, right? Do you want to farm everything out to the cloud and not have to deal with the operations, the day-to-day operations, of those virtual machines, and take the constraints that have been defined by that platform, or do you want to operate that stuff locally? Are you required by law to operate locally? What does production really mean to you and, like, what are the constraints that you actually have to satisfy, right? I think that given that choice, when we think about how to productionalize Kubernetes, it comes down to exactly that same set of things, right? I've seen a number of different takes on productionalizing, and it's interesting, because I think it's actually going to move on to our next topic in line here. Frequently I see that productionizing or productionalizing Kubernetes means providing some set of constraints around the consumption of the platform, such that your developers, or the folks that are consuming that platform, have to operate within those rails, right? They can only define deployments, and they can only define deployments that look like this. We're going to ask them a varied subset of questions and then fill out all the rest of it for them on top of Kubernetes. The entry point might be CI/CD, it might be a repository, it might be a code repository, very similar to a Heroku, right? The entry point could be anywhere along that thing, and I've seen a number of different enterprises explore different ways to implement that.

[0:17:17.8] JR: Cool. Another concept that I wanted to maybe have us define and think about, because I've heard the term platform quite a bit, right? I was thinking a little bit about, you know, what the term platform means exactly, and then eventually, whether Kubernetes itself should be considered a platform. Backing up, maybe we could just start with a simple question for all of us: what makes something a platform exactly?

[0:17:46.8] BL: Well, a platform is something that provides something. That is a Bryan Liles exclusive.
But really, what is a platform? A platform provides some kind of service that can be used to accomplish some task, and Kubernetes is a platform: it provides constructs through its API to allow you to perform tasks. But Kubernetes is not just a platform. Kubernetes is a platform for building platforms. The things that Kubernetes provides, the workload API, the networking API, the configuration and storage APIs, what they provide is a facility for you to build higher-level constructs that control how you want to run the code and then how you want to connect the applications. Yeah, Kubernetes is actually a platform for platforms.

[0:18:42.4] CC: Wait, just to make sure, Bryan. You're saying, because Kelsey Hightower for example is someone who says Kubernetes is a platform of platforms. Now, is Kubernetes both a platform of platforms, at the same time that it's also a platform to run apps on?

[0:18:59.4] BL: It's both. Kelsey tweeted it; there is some controversy on who said that first, it could have been Joe Beda, it could have been Kelsey. I think it was one of those two, so I want to give a shout out to both of those for thinking in the same line and really thinking about this problem. But to go back to what you said, Carlisia, is it a platform for providing platforms and a platform? Yes, and I will explain how. If you have Kubernetes running, what you can do is actually talk to the API and create a deployment. That is a platform for running a workload. But also, what you can do is create, through Kubernetes API mechanisms, i.e. CRDs, custom resource definitions, your own custom resources; say I want to have something called an application. You can basically extend the Kubernetes API. Not only is Kubernetes allowing you to run your workloads, it's allowing you to extend the API, which in turn can be served by another controller that's running on your platform, which then gives you this thing when you create an application. Now, it creates a deployment, which creates a replica set, which creates a pod, which creates containers, which downloads images from a container registry. It actually is both.

[0:20:17.8] DC: Yeah, I agree with that. Another quote that I remember being fascinated by, which I think kind of also helps define what a platform is: Kelsey put out a quote that said, "Everybody wants platform as a service, with the only requirement being that they've built it themselves." Which I think is awesome, and it also kind of speaks, in my opinion, to what I think the definition of a platform is, right? It's an interface through which we can define services or applications, and that interface typically will have some set of constraints, or some set of workflows, or some defined user experience on top of it. To Bryan's point, I think that Kubernetes is a platform because it provides you a bunch of primitives on the back end that you can use to express what that user experience might be. As we were talking about earlier, you might move the entry point into this platform from the API, the Kubernetes API server, back down into CI/CD, right? Perhaps you're not actually defining a thing called a deployment, you're just saying, I want so many instances of this, I don't want it to be able to communicate with this other thing, right? So in my opinion, the definition of a platform is that user experience interface.
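As a rough sketch of the custom-resource flow described above, and not something walked through verbatim in the episode: assuming a hypothetical Application CustomResourceDefinition has already been installed (the API group, version, kind, and spec fields below are invented for illustration), a client could create one through the Kubernetes Python client like this. The controller that would expand it into deployments is not shown.

```python
# Sketch of the "platform for building platforms" idea: a hypothetical Application
# custom resource is created; a separate controller (not shown) watching this kind
# would expand each object into Deployments, ReplicaSets, and Pods.
# Assumes `pip install kubernetes`, a valid kubeconfig, and an installed CRD for
# the invented group/version/kind used below.
from kubernetes import client, config

config.load_kube_config()

application = {
    "apiVersion": "platform.example.com/v1alpha1",  # hypothetical API group/version
    "kind": "Application",
    "metadata": {"name": "orders"},
    "spec": {
        "image": "registry.example.com/orders:2.3.1",  # placeholder image
        "replicas": 2,
        "expose": {"port": 8080},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="platform.example.com",
    version="v1alpha1",
    namespace="default",
    plural="applications",
    body=application,
)
```

The layering is the point: a platform team defines the higher-level Application abstraction and its controller, while application teams interact only with that object rather than with deployments, replica sets, and pods directly.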
[0:21:33.9] BL: I like that. I want to throw out a disclaimer right here, because we're talking about platforms: Kubernetes is not a platform as a service. That is actually different. A platform as a service, the way we look at it, is basically a platform that can run your code, make your code available to external users, scale it up, scale it down, and manage all the nuances required for that operation to happen. Kubernetes does not do that out of the box, but you can build a platform as a service on Kubernetes. That's actually, I think, where we'll be going next: people stepping out of the onesy-twosy "I can deploy a workload" stage and actually thinking at this level. And I'll tell you what, Deis, who got bought a few years ago, actually did that: they built a PaaS that looks like Heroku. Microsoft and Azure thought that was a good idea, so they purchased them, and they're still over there thinking about great ideas. But I think as we move forward, we will definitely see different types of PaaS on Kubernetes. The best thing is that I don't think we'll see them in the conventional sense of what we think of now. We have Heroku, which is the git push heroku master model, where we share code through git. And then we have the Cloud Foundry idea of a PaaS, where you can run cf push, which is more of an extension of our old-school Java applications, where we could just push [inaudible] here. But I think, at least I am hoping – and this is something that I am actually working on, not to toot my own horn too much – we should be thinking about whether we can build a platform-as-a-service toolkit. Can I actually just build something that's tailored to my operation? That is something that I think we'll see a lot more of in the next 18 months. At least you will see it from me and the people that I am influencing.

[0:23:24.4] CC: One thing I wanted to mention before we move on to anything else, in answering "Is Kubernetes right for me?" – we are so biased, we need to play devil's advocate at some point – but answering that question is the same as answering "Is technology X right for me?", and I think, at a higher level, there are two camps. One camp is very much of the thinking, "I need to deliver value, I need to deliver my software, and if the tools I have are solving my problem, I don't need to use something else. I don't need to use the fancy, shiny thing that's the hype, the new thing." And that is so right; you definitely shouldn't be doing that. I am divided on this way of thinking, because as right as that is – you do have to be conscious of how much money you're spending on things, you have to be efficient with your resources – at the same time, I think a lot of people don't fully understand what Kubernetes really can do. If you are listening to this, maybe rewind and listen to what Brian and Duffy were just saying in terms of workflows and the Kubernetes primitives. Because those things are so powerful. They allow you to be so creative with what you can do, right? With your development process, with your rollout process. And maybe you don't need it now,
because you are not using those things yet. But once you understand what it is and what it can do for your use case, you might start having ideas like, "Wow, that could actually make X, Y, and Z better," or, "I could create something else that could use these things and therefore add value to my enterprise, and I didn't even think about this before." So, you know, two ways of looking at things.

[0:25:40.0] BL: Actually, so the topic of this session was "Should I Kubernetes?", and my answer to that is: I don't know. That is something for you to figure out. If you have to ask somebody else, I would probably say no. But on the other side, if you are looking for great networking across a lot of servers, if you are looking for service discovery, if you are looking for a system that can restart workloads when they fail, well, now you should probably start thinking about Kubernetes, because Kubernetes provides all of these things out of the box. Are they easy to get started with, though? Some of them are harder. Service discovery is really easy, but some of these things are a little bit harder. What Kubernetes does is – here comes my hip-hop quote, Jay-Z said this – basically, difficult things take a little bit of time, and impossible things, or things we thought were impossible, take a week. So basically, making difficult things easy and making things that you could not even imagine doing attainable. I think that is what Kubernetes brings to the table. Then I'll go back and say this one more time: should you use Kubernetes? I don't know, that is a personal problem, that is something you need to answer. But if you're looking for what Kubernetes provides, yes, definitely you should use it.

[0:26:58.0] DC: Yeah, I agree with that, I think it is a good summary. But coming back to the whether-you-should-Kubernetes part, from my perspective, the reason that I Kubernetes, if you will – I love that as a verb – is that when I look around at the different projects in the infrastructure space, as an operations person, one of the first things I look for is that API, that pattern around consumption: what's actually out there, and who's developing that API? Is it a business that is interested in selling me a new thing, or is it an API that's being developed by people who are actually trying to solve real problems? Is there a reasonable way to go about this? I mean, when I look at OpenStack, OpenStack was exactly the same sort of model, right? OpenStack existed as an API to help you consume infrastructure, and I look at Kubernetes and I realize, "Wow, okay, well now we are developing an API that allows us to think about the lifecycle and management of applications," which moves us up the stack, right? So for my part, the reason I am in this community, the reason I am interested in this product, the reason I am totally Kubernetes-ing, is because of that. I realized that fundamentally, infrastructure has to change to be able to support the kind of load that we are seeing. So whether you should Kubernetes: is the API valuable to you? Do you see the value in that, or is there more value in continuing whatever paradigm you're in currently, right? And judging that fairly, I think, is important.

[0:28:21.2] JR: Two schools of thought that I run into a lot on the API side of things: one is whether, over time, Kubernetes will become this implementation detail, where 99% of users aren't even aware of the API to any extent.
And then another one that talks about the API as a consistent abstraction with tons of flexibility. I think companies are going in both directions – OpenShift from Red Hat is perhaps a good example. Maybe that is one of those layer-two platforms, Brian, that you were talking about, right? Where Kubernetes is the platform that was used to build it, but the average person that interacts with it might not actually be aware of some of the Kubernetes primitives and things like that. So if we could all get out our crystal balls for a second here, what do you all think the future holds? Do you see the Kubernetes API becoming a more prevalent industry standard, or do you see it fading away in favor of some other abstraction that makes it easier?

[0:29:18.3] BL: Oh wow, well, I already see it – I don't have to look too far into the future, right? I can see the Kubernetes API being used in ways that we could not imagine. The example that I think of is KubeVirt. KubeVirt allows you to boot, basically, pods on whatever implements something that looks like a kubelet – so, something that could run pods. But the neat thing is that you can use something like KubeVirt with a virtual kubelet, and now you can boot them on other things. So there are ideas in that space – VMware is actually going down that path: "Wow, what if we can make virtual machines look like pods inside of Kubernetes? Pretty neat." Azure has definitely done work on this as well; now we can bring up containers, we can bring up VMs, and you don't actually need a Kubernetes server anymore. And I'll throw out one more. There are also projects like the AWS operators and [inaudible], and what they allow you to do is use the Kubernetes API – or actually Cluster API, I'll use all three – to boot things that aren't even in the cluster, and this could be AWS services, or databases across multiple clouds, or, guess what, more Kubernetes clusters. Yeah, so we are on that path, and I just can't wait to see what people are going to do with that. The power of Kubernetes is this API; it is just so amazing.

[0:30:50.8] DC: For my part, I agree that the API itself is being extended in all kinds of amazing ways, but as I look into the crystal ball, I think the API will continue to be foundational to what is happening. If I look at the level-two or level-three platforms that are coming, I think those will continue to be a thing for enterprises, because they will continue to innovate in that space, and they will continue to consume the underlying API structure and the portability Kubernetes exposes to define what that platform might look like for their own purpose, right? Giving them the ability to effectively have a platform as a service that they define themselves, but using, you know, a foundational layer that is consistent and extensible and extensive. I think that's where things are headed.

[0:31:38.2] CC: And also more visual tools, I think, are in our future. Better, actual visual UIs that people can use – I think that's definitely going to be in our future.

[0:31:54.0] BL: So can I talk about that for a second?

[0:31:55.9] CC: Please, Brian.
[0:31:56.8] BL: I am wearing my Octant hoodie today – Octant is a visual tool for Kubernetes – and I will talk now as someone who has gone down this path to actually figure this problem out. As a prediction for the future, I think we'll start creating better APIs in Kubernetes to allow for more visual things. The reason I say this is going to happen, and can't really happen now, is that inside of Octant, whenever we're creating new views, we pretty much have to know what that object is. But what is going to happen – and I see the rumblings from the community, I see the rumblings from the Knative community as well – is that we are going to start standardizing on conditions, and using conditions as the way we actually say what's going on.

So let me back up for a second so I can explain to people what conditions are. We think of Kubernetes as YAML, and in a typical object in Kubernetes, you are going to have your type metadata – what is this? You are going to have your object metadata – what is this named? Then you are going to have a spec – how is this thing configured? And then you are going to have a status, and the status generally will say, "Well, what is the status of this object? If it is a deployment, how many replicas are out? If it is a pod, am I ready to go?" But there is also this concept in status called conditions, which are a list of things that say how your object is working. Right now, Kubernetes uses them in two ways: the negative way and the positive way. I think we are actually going to figure out which one we want to use, and we are going to see more APIs just using conditions. And now, from a UI developer's point of view, I can just say, "I don't really care what your object is. You are going to give me conditions in a format that I know, and I can basically report on those in the status and tell you if the thing is working or not." That is going to come too. And that will be neat, because it means we can basically start building UIs for free, because we just have to learn the pattern.

[0:33:52.2] CC: Can you talk a little bit more about conditions? Because this is not something I hear about frequently; it might be something I know but don't recognize by this name.

[0:34:01.1] BL: Oh yeah, I will give you the most popular one. Everything in Kubernetes is an object, and that even means that the nodes your workloads run on are objects. If you run kube-control, kube-cuddle, kube-whatever – get nodes, it will show you all the nodes in your cluster, if you have permission to see them, and if you do kubectl get node, node name, with the YAML output, what you will see at the bottom is an object called 'conditions'. Inside of there it will be things like: is there sufficient memory, is the node – I actually don't remember all of them, but really what they are is line items that say how this particular object is working. Do I have enough memory? Do I have enough storage? Am I out of actual pods that can be launched on me? That's what conditions are. It is basically saying, "Hey Brian, what is the weather outside?" I could say it's nice, or I could be like, "Well, it's 75 degrees, the wind is light but variable, it is not humid, and these are what the conditions are." They allow the object to specify things about itself that might be useful to someone who is consuming it.

[0:35:11.1] CC: All right, that was useful.
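For anyone following along at a terminal, here is roughly what those node conditions look like. This is an abridged sketch of the output of `kubectl get node <node-name> -o yaml`; the node name and values below are illustrative, and real output contains many more fields:

```yaml
# Abridged, illustrative sketch of a Node object as returned by
# `kubectl get node <node-name> -o yaml`; real output has many more fields.
apiVersion: v1
kind: Node
metadata:
  name: worker-1              # hypothetical node name
status:
  conditions:
    - type: MemoryPressure    # a "negative" condition type: healthy when False
      status: "False"
      reason: KubeletHasSufficientMemory
      message: kubelet has sufficient memory available
    - type: DiskPressure      # also negative: False means there is enough disk
      status: "False"
      reason: KubeletHasNoDiskPressure
      message: kubelet has no disk pressure
    - type: Ready             # a "positive" condition type: healthy when True
      status: "True"
      reason: KubeletReady
      message: kubelet is posting ready status
```

This mix of positive and negative condition types is exactly the inconsistency Brian describes next: automation has to know that Ready should be True while MemoryPressure should be False, which is why standardizing conditions is so attractive.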
I am actually trying to bring one up here. I never paid attention to that.

[0:35:18.6] BL: Yeah, and you will see it. The two that are most common right now – there is some competition going on in Kubernetes architecture, trying to figure out how they are going to standardize on this – but with pods and nodes you will see conditions on there, and those are just telling you what is going on. The problem is that a condition is a type, a message, a status – oh, and a reason. The status can be true or false, but sometimes the type is a negative type, where it would be something like "node not ready", and then it will say false because the node actually is ready. Whenever you're inspecting that with automated code, you really want the positive condition to be true and the negative condition to be false, and this is something that the Knative community is really working on now. They have this whole facility called duck typing, which they can use to pattern-match inside of objects to find all of these neat things. It is actually pretty intriguing.

[0:36:19.5] CC: All right. It is interesting because, for me, status is everything for objects, and that is very much a part of my workflow, but I never noticed that some of the objects had conditions. And just a plug: we are very much going to have the Knative folks here to talk about duck typing. I am really excited about that.

[0:36:39.9] BL: Yeah, they're on my team. They'll be happy to come.

[0:36:42.2] CC: Oh yes, they are awesome.

[0:36:44.5] JR: So I was thinking maybe we could wrap this conversation up. I think we have acknowledged that "Should I Kubernetes?" is a ridiculously hard question for us to answer for you, and we should clearly not be the ones answering it for you, but I was wondering if we could give some thoughts around it – for the Podlets listener who is sitting at their desk right now thinking, "Is now the right time for my organization to bring this in?" I will start with some thoughts and then open it up to you all. One common thing I run into a lot is: you know your current state and you know your desired state, to steal a Kubernetes concept for a moment. The desired state might be more decoupled services that are more scalable, and so on, and I think oftentimes at orgs we get so obsessed with the desired state that we forget how far the gap is between the current state and the desired state. So as an example, maybe your shop's biggest issue is that the primary revenue-generating application is a massive .NET Framework monolith, which isn't exactly easy to just port over into Kubernetes, right? So if a lot of your friction right now is teams collaborating on this tool, updating this tool, scaling this tool, then maybe before even thinking about Kubernetes, be honest with the fact that a lot of value can be derived right now from some amount of application architecture changes, or even – sorry to use a buzzword – some amount of modernization of aspects of that application, before you even get to the part of introducing Kubernetes. So that is one common one that I run into with orgs.
What are some other suggestions you have for people who are thinking about, "Is it the right time to introduce Kube?"

[0:38:28.0] BL: So here is my thought: if you work for a small startup, and you're working on shipping value, and you have no Kubernetes experience on staff, and for some reason you don't want to use the cloud, you know, go figure out your other problems, then come back. But if you are an enterprise, and especially if you work in a central enterprise group and you are thinking about "modernization", I actually do suggest that you look at Kubernetes, and here is the reason why. My guess is that if you're a business of a certain size, you run VMware in your data center. I am just guessing that because I haven't been to a company that doesn't, because we learned a long time ago that using virtual machines is in many cases way more efficient than just running hardware, because otherwise we can't fully use our compute capacity. So if you are working for a big company, or even a medium-sized company, I am not telling you to run straight to it, but I am telling you to at least have someone go look at it and investigate whether this could ultimately be something that makes your stack easier to run.

[0:39:31.7] DC: I think I am going to take the operations perspective. If you are in the business of coming up with a way to deploy applications onto servers, and you are looking at trying to handle the lifecycle of that, and you're pretty fed up with the tooling that is out there – things like Puppet and Chef and tooling like that – and you are looking to understand: is there something in Kubernetes for me? Is there some model that could help me improve the way that I actually handle the lifecycle of those applications, be they databases or monoliths or composable services? Any way you want to look at it, are there tools there? Is the API expressive enough to help me solve some of those problems? In my opinion, the answer is yes. I look at things like DaemonSets and the scheduling [inaudible] that are exposed by Kubernetes, and there is actually quite a lot of power there, quite a lot of capability, in just the traditional model of: how do I get this set of applications onto that set of servers, or some subset therein? So I think it is worth evaluating, if that is the place you're in as an organization, if you are looking at fleets of equipment and trying to handle that magical recipe of multiple applications and dependencies and stuff. See what the water is like on this side; it is not so bad.

[0:40:43.1] CC: Yes. I don't think there is a way to answer the question "Is Kubernetes for me?" without actually trying it, giving it a try yourself, really running something, maybe something low-risk. We can read blog posts until the end of the world, but until you actually do it and explore the boundaries – that is what I would say: try to learn what else you can do, things that maybe you don't need yet but might become useful once you know you can use them. And another thing: maybe if you are a shop that has one or two apps, and you don't need everything that Kubernetes has to offer, and there is a much more scaled-down tool that will help you deploy and run your apps, that's fine. But if you have more than a certain number – I don't know what that number would be, but multiple apps and multiple services – just think about having that uniformity across everything.
Because, for example, I've worked in shops where the QA machines were taken care of by one group of DevOps people and the production machines – oh my god – were taken care of by other groups, and the tools the two groups used were different, and I, as a developer, had to know everything, you know? How to deploy here, how to deploy there, and I had to have my little notes and recipes, because whenever I did it – first of all, I wasn't doing it multiple times a day – I had to read through the notes to know what to do. I mean, just imagine if it was one platform that I was deploying to, with easy-to-use CLI commands like Kubernetes gives us with kubectl. You have to think outside of the box, think about these other operations that people in your company are going to have to do. How is this going to be taught in the future? Can you hire someone who already knows your stack, because your stack is the same one that people in your industry are also using? I think about all of these things; people have to look at it across the entire set of problems.

[0:43:01.3] BL: I wanted to mention one more thing, and it is that we are producing lots of content here with The Podlets and with our coworkers, so I want to give a shout out to TGIK. If you want to know what you can do in Kubernetes and you want to have your imagination expanded a little bit: every Friday we make a new video, and funny enough, three fourths of the people on this call have actually done this, where on Friday we pick a topic and we dig in. It might be something that would be interesting to you or it might not, and we are all over the place. We are not just doing applications – we do applications, low-level pieces, mapping applications onto Kubernetes, new things that just came out. We have been doing this for 101 episodes now. Wow. So you can go look at that if you need some examples of what you could do on Kubernetes.

[0:43:51.4] CC: It is tgik.io – maybe somebody, a native English speaker, should repeat that because of my accent – but let me just say I am so glad you mentioned that, Brian, because I was sitting here as we were talking, thinking there should be a catalog of use cases of what Kubernetes can do, not just the rice and beans, but a lot of different use cases, maybe things that are unique that people don't think to use because they haven't run into that need yet. But they could look at it and think, "Okay, that would enable me to do this thing that I didn't even think about." That is such a great catalog of use cases; it is probably the best resource. Somebody say the website again? Duffy, what is it?

[0:44:38.0] DC: tgik.io, and it is every Friday at 1 PM Pacific.

[0:44:43.2] CC: And it is live. It's live and it's recorded, so it is uploaded to the VMware Cloud Native YouTube channel, and everything is going to be in the show notes too.

[0:44:52.4] DC: It's neat – you can come ask us questions; there is a live chat inside of it, and you can use that live chat to ask us questions or give us ideas, all kinds of crazy things, just like you can with The Podlets. If you have an idea for an episode, or something that you want us to cover, or something that you are interested in, you can go to thepodlets.io, which will link you to our GitHub pages, where you can open an issue about things you'd love to hear more about.
[0:45:15.0] JR: Awesome. Then maybe on that note, Podlets, is there anything else you all would like to add on "Should I Kubernetes?", or do you think we've –

[0:45:22.3] BL: As best as our bias will allow it, I would say.

[0:45:27.5] JR: As best as we can.

[0:45:27.9] CC: We could go another hour.

[0:45:29.9] JR: It's true.

[0:45:30.8] CC: Maybe we'll have "Should I Kubernetes?" Part 2.

[0:45:34.9] JR: All right everyone, well, that wraps it up for at least Part 1 of "Should I Kubernetes?", and we appreciate you listening. Thanks so much. Be sure to check out the show notes, as Duffy mentioned, for some of the articles we read preparing for this episode, and TGIK links and all that good stuff. So again, I am Josh Rosso, signing out, with us also Carlisia Campos.

[0:45:55.8] CC: Bye everybody, it was great to be here.

[0:45:57.7] JR: Duffy Cooley.

[0:45:58.5] DC: Thank you all.

[0:45:59.5] JR: And Brian Lyles.

[0:46:00.6] BL: Until next time.

[0:46:02.1] JR: Bye.

[END OF EPISODE]

[0:46:03.5] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.

[END]

See omnystudio.com/listener [https://omnystudio.com/listener] for privacy information.

24 Feb 2020 - 46 min