GKE Turns Six with Anthony Bushong, Gari Singh, and Kaslin Fields
Kaslin Fields and Mark Mirchandani host this week’s episode of the podcast as we celebrate one of our favorite Google products, Google Kubernetes Engine! Anthony Bushong and Gari Singh join the party to talk about GKE’s life journey, what’s new, and what’s coming up for the service.
Each guest brings their unique perspective to the show, starting with their definitions of Kubernetes and GKE tailored to a six-year-old. How does Kubernetes relate to cookies, Disney World, and Pokemon? Kaslin, Anthony, and Gari break it down.
Next, each guest talks about their favorite GKE features and what makes the service different. Kaslin loves the persistent storage feature, among others. Anthony appreciates how GKE has evolved to support many different workloads and situations for clients, and how easy it is to get started with GKE. GKE's ability to connect easily to other GCP products and features, plus GKE Autopilot, are some of Gari's favorite things about the service.
Best practices and advice for new users are shared by our guests. From practical tips for project planning to encouragement with long-term problem solving, our guests offer listeners resources and ideas for a successful GKE project. Kaslin, Anthony, and Gari share their favorite customer stories with us and talk about the fun, interesting events Google has hosted for GKE customers.
The future of GKE looks bright, with new options for Windows containers, expanded Autopilot uses, and multicluster support. KubeCon is coming up soon, so keep an eye out for more Kubernetes and GKE news!
Anthony Bushong has been working in the field with production GKE users, both large and small, for almost 5 years now. In Kubernetes time, since Kubernetes v1.3!
Gari Singh is an Outbound Product Manager focused on GKE and Anthos. In this role, he has the opportunity to work with many customers and help align their needs with overall product direction.
Kaslin Fields is a Developer Advocate at Google Cloud, a Cloud Native Computing Foundation (CNCF) Ambassador, and a contributor to Open Source Kubernetes. As a Developer Advocate, she engages with Open Source communities both as a member, and as an advocate for their needs as users (or potential users) of Google Cloud’s products. She is passionate about making technology accessible to a broad audience through making the information available in many forms, such as videos, blogs, documentation, and even comics which she illustrates herself!
Cool things of the week
- Try a tutorial in the Google Cloud Console blog
- GCP Podcast Episode 180: Firebase with Jen Person podcast
- Migrate, Manage & Modernize: Windows Workloads Powered by GKE and Anthos site
- GKE site
- Kubernetes site
- Explain Like I’m 5: Containers VS VMs blog
- Introducing GKE Autopilot: a revolution in managed Kubernetes blog
- Best practices for running cost-optimized Kubernetes applications on GKE docs
- Discover and invoke services across clusters with GKE multi-cluster services blog
- The evolution of Kubernetes networking with the GKE Gateway controller blog
- Bringing Pokémon GO to life on Google Cloud blog
- Bayer Crop Science seeds the future with 15000-node GKE clusters blog
- Helping researchers at CERN to analyze powerful data and uncover the secrets of our universe blog
- Optimize costs in GKE with monitoring systems video
- Monitoring for efficient cluster binpacking in GKE video
- Monitoring for app right-sizing in GKE video
- Cloud Next 2021 site
- KubeCon site
- GCP Podcast Episode 210: Kubernetes Config Connector with Emily Cai podcast
- GCP Podcast Episode 234: GKE Turns Five with Alex Zakonov and Drew Bradstock podcast
- GCP Podcast Episode 252: GKE Cost Optimization with Kaslin Fields and Anthony Bushong podcast
- GCP Podcast Episode 257: GKE Autopilot with Yochay Kiriaty and William Denniss podcast
What’s something cool you’re working on?
Transcript
[MUSIC PLAYING] MARK: Hey there, everybody. And welcome to episode number 272 of the weekly Google Cloud Platform Podcast. I'm Mark, and today I'm here with Kaslin Fields. Hey, Kaslin. How you doing?
KASLIN: Hey, everyone. I'm excited to be here.
MARK: Yeah, absolutely. It's been a minute since we've had you on here, too. But we have a super exciting episode today, right?
KASLIN: Yes. We're going to be talking about GKE's sixth birthday.
KASLIN: GKE actually-- yay-- [LAUGHS] was released six years ago in August. So we're doing a sixth birthday thing here, which is pretty exciting. If you look at the history of Kubernetes, that's quite a significant portion of it. So there's a lot to cover.
MARK: Yeah, that covers a huge chunk of basically how long Kubernetes has been around. And it kind of shows you exactly how long, obviously, the community has been working on Kubernetes ever since it got into the spotlight, and then from there all the different companies that have kind of said, hey, let's try and bring Kubernetes to all the folks who need it.
And it's really been cool to see how that community has grown. We had a great podcast last year where we actually talked about the fifth birthday of GKE. So it's super exciting to be able to follow that up and see all of the cool things that have changed.
KASLIN: Yeah. And we're going to have some great guests on. We're going to talk about what's been going on with GKE over the last six years, and share some insights into how it's growing and what's happening with it now.
MARK: Very exciting. But before we get into that, why don't we go ahead and do our Cool Things of the Week?
All right. Well, for my cool thing of the week, I'm going to link to this blog post written by one of the great members of our team, Jen Person-- actually, I think all the members of our team are great. But Jen is awesome. We actually had her on the podcast two, maybe three years ago talking about Firebase.
But today, she's giving us a blog post on a really cool system that you may have seen if you've been in the Google Cloud Console lately. These are actually built-in walkthroughs that are integrated right into the console. You'll see them on the right-hand side. And our team has actually been working a lot with them lately.
And she published a blog post talking a bit more about what they are, how they kind of work, what they walk you through, and gives a bunch of examples for them. I think these tutorials are just really, really cool ways to kind of integrate the learning experience while also getting you a bit more familiar.
Because the console is a cool place, but it's also an enormous tool. And there's tons of stuff going on in there. So I definitely recommend checking out the blog post for a quick preview of what these are. And then if you head into the console, there's just no shortage of tutorials to try-- everything from different products like compute and storage to BigQuery. I've done some work with billing stuff. There's just a lot of resources out there. So definitely recommend checking those out, because they're just really cool resources.
KASLIN: If you've ever been in a situation where you found this really cool tutorial online, and you're like, oh, I'm just going to try this out, then you have to connect that tutorial to whatever technology you're going to run it on-- you kind of have to jump back and forth between things a bit. These tutorials kind of get rid of that, because you're in the Cloud Console and you see something that maybe you want to try out. If you go into, say, the GKE tab in the console, it'll just pop up: do you want to try out GKE? Do you want to get hands-on with it? Here's how you can do it, and we'll walk you through it right here.
So for my cool thing of the week, I actually gave a webinar, if you all want to check that out. I'm going to talk more about it during our show today. But it's about GKE and Anthos and running Windows Server containers on GKE. So I do a little demo of spinning up your first Windows Server container, just on a Windows Server virtual machine, and then how you would move that Windows Server container and run it in GKE.
So it's really cool, and I find that area really exciting. So if you're interested in Windows containers and Windows containers in Kubernetes, check that out. There will be a link.
MARK: Yeah. Actually, that's probably a perfect segue into our interview, because we do talk a lot about not only some of the great content that you've been working on, Kaslin, but also what are the things happening in Kubernetes and GKE that are super exciting. So why don't we just go ahead and jump in our interview now?
Thanks, everyone, for joining. Today we're going to do something a little bit different. Since we're celebrating the sixth anniversary of GKE, we figured we would try to do six questions about GKE. And to get started, I'd love for everyone to just introduce themselves. I guess it's kind of a cheat, but hey. The first question I'll ask is, who are you, and what do you do?
KASLIN: Thanks for having me, Mark. I'm Kaslin Fields. And I'm a developer advocate here at Google Cloud, where I focus on Google Kubernetes Engine.
ANTHONY: My name is Anthony Bushong, and I'm a specialist customer engineer focused on Kubernetes. So I've been working in the field with a lot of customers using GKE since 2016.
GARI: Yeah, Gari Singh. I'm an outbound product manager in the application modernization portfolio, primarily focused on GKE. I get to have fun working with our product managers, customers, developers, everybody. So it's a great crosscutting role.
MARK: Yeah. And I think that's such an important part of having this conversation as well, is having different perspectives on what this looks like. And we're going to dive really deep into some of the cool things going around GKE. And I'll let you, obviously, all talk about your different perspectives and the different folks you've worked with.
So I'll just kick it off with the second question, which is, we've talked a lot about Kubernetes. But knowing that we are on the sixth anniversary of GKE, it seems fitting to ask, how would you explain GKE to a six-year-old? And this is knowing that as a six-year-old myself, I'm sure I wouldn't be able to grasp any of these technical concepts. So what do you got?
KASLIN: I actually made a little comic blog post for my blog site a while ago called "Explain Like I'm Five-- Containers versus VMs." I have another fun comic analogy that I've done in presentations but I haven't actually gotten up as a comic on my site yet, where I explained Kubernetes.
So I tend to explain Kubernetes as being kind of like a cookie factory. But that kind of requires explaining containers first. The idea of the cookie factory analogy is that there are a lot of businesses that need to run a lot of applications to do their work. So if we imagine those applications as cookies, what the cookie factory does for those businesses is handle the logistics of running a business at scale. It helps you run a lot of applications all over the world and run them at scale.
And GKE is a way of running this platform for applications at scale through Google Cloud. So it takes some of the hard parts about running that cookie factory, Kubernetes, and makes it a little bit easier by having Google take on some of the responsibility.
ANTHONY: Yeah, I love that. I mean, you've got to hit the kids where they're most interested. So cookies is great. I think if I were to take a swing at this, I would also-- thinking back to what I was really into as a six-year-old, thinking back to Pokemon.
So I would try to ask, have you heard of Pokemon Go? Do you have a Pokemon Go Kid's account? If you do, you know that there's probably millions of players around the world that are catching Pokemon. And so in order to do that, you use an app on your phone.
And the way that it actually works, how you actually see those Pokemon, how you see the map, we have to connect hundreds, if not thousands, of computers around the world. And so GKE is basically a technology that makes it easier to make sure that those hundreds or thousands of computers are all working, to make sure that all those players can see the Pokemon, can see their maps, can catch Pokemon.
And it's not just Pokemon Go. That's just one example. But GKE and Kubernetes helps you do this with pretty much anything that requires computers. So lots of other apps that kids are using. Hopefully it lands with them-- Pokemon or cookies. You know, we're trying to cover the full gamut here.
GARI: You stole my idea, Anthony. And by the way, I know you still like Pokemon now.
ANTHONY: Oh, I'm a level 42 Pokemon Go player. I'm deep in it.
KASLIN: Me, too.
GARI: I would expect no less. I guess I'll have to go with a different approach. I would explain GKE maybe like Disneyland or Disney World, whichever, depending on which coast you're from. At Disney World, you have a lot of rides and attractions. You've got like Magic Mountain, Epcot. So the park itself organizes all these rides.
And when you come in, you have to come in through a main gate. And then we have to be able to route you to whatever ride you might want to go to. And going back to what Anthony said, or even what Kaslin said, think of every ride as a different app-- a different game, like Pokemon. So when you come in, the park is going to make sure that all those things kind of work together, that they're all up and running. And you'll have a path-- follow a sign-- that routes you to a particular ride.
When you get to that ride, there will actually be a line. And that line is actually designed to control the actual traffic that goes into there, because only a few people can go in at a time. So as that resource is available-- it has 10 seats-- we'll let 10 people in, queue up the next people, and then let 10 in.
Sometimes maybe these apps need to communicate with each other. So you might go from one ride to another ride. And eventually, you circle back and exit the park. GKE and Kubernetes basically do that for computers.
MARK: And we see this just as much when trying to explain this to people who have been in the field for a long time. They may not be familiar with container technology, or they may not be familiar with Kubernetes. And trying to explain to them a very different way of thinking about running these applications or services can be a huge jump for folks. I think that kind of naturally leads us into the next question. There's tons of episodes we have-- and we'll link to some in the show notes-- where we've talked about Kubernetes in varying levels of detail.
But thinking more specifically about GKE, Google Kubernetes Engine, what's special about GKE? What makes this one really, really cool, basically?
KASLIN: I feel like there's a couple of different ways that you could come at that question. One is, what is exciting about Google Kubernetes Engine itself? And there's also kind of the more subjective opinion of, what is exciting you currently about Google Kubernetes Engine?
I kind of think of that perspective first. And I want to say that a couple of things that I've been working on recently with GKE that are really exciting to me are things like persistent storage, running persistent applications on Kubernetes. And I've also been playing around with Windows containers, which I haven't played with for quite a while.
And I love how Kubernetes and GKE, this one platform, like we were talking about, for running applications, can work in so many different ways for so many different businesses. So I'm talking about Windows containers. I usually talk about Linux containers.
So there's businesses that only use one or the other or use a combination. And those applications can all be represented within Kubernetes. There's stateful applications, and there's these newer kinds of architectures for stateless applications, like microservice-style architectures that sometimes don't require data storage.
And there's just this huge array of different use cases that businesses can use Kubernetes for. And that's something that really excites me about it.
ANTHONY: I absolutely love that. I think GKE has really evolved to start to support many different types of workloads. I think early on, it was easy to say, OK, containers, Kubernetes, this is really suitable for stateless types of workloads. Everything else gets kind of difficult.
But if you look at some of the types of workloads that Kaslin mentioned-- running Windows and Linux nodes side by side, very easy to configure in GKE; the ability to run a stateful application. GKE integrates with regional persistent disks, so you have high availability for your stateful application across two zones in a region. It's done an incredible job at fulfilling the promises of Kubernetes-- making workloads portable.
It's still conforming to the Kubernetes API for the most part, with the exception of cloud vendor implementation details that fulfills things like storage or load balancers. You can pick up your application from a Kubernetes provider, run it in GKE, and vice versa, with minimal tweaks.
But I think on top of that, what GKE has done is build in integrations with the differentiators of GCP that make it a really strong platform on top of what base Kubernetes provides. So from cluster lifecycle-- turning what would typically be Bash scripts to manage upgrades of nodes into built-in, turnkey features in GKE, like maintenance windows, surge upgrades, things like that. Or if you're having trouble with DNS infrastructure, you can actually offload that into Cloud DNS within Google so that your DNS infrastructure isn't running inside your cluster.
And the user interface-- I mean, I think just getting started with GKE, it's really easy to understand the different resources, how they interact, and also take a look at your workloads, quickly get to logs within a couple of clicks. I mean, so really across the board, those features for expert users all the way to folks who are just trying to understand how to run their first application on GKE, it's all built into the user interface there.
So I think across the board, it's fulfilling the promise of Kubernetes while adding a ton of goodness that makes for a better user experience for all types of users.
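As a rough sketch of the regional persistent disk setup Anthony mentions: on GKE, a StorageClass using the Compute Engine persistent disk CSI driver can request `regional-pd` replication, so a stateful workload's volume is replicated across two zones in a region. The class name and zones below are illustrative, not from the episode:

```yaml
# Illustrative StorageClass for regional persistent disks on GKE.
# The name and zones are placeholders; the provisioner and parameters
# follow the GKE persistent disk CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-pd-example
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd   # replicate the disk across two zones
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-central1-a
    - us-central1-b
```

A StatefulSet's volumeClaimTemplates can then reference this class, so if one zone has an outage, the pod and its disk can come back up in the other zone.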
GARI: Awesome. As a side thing, when I came to Google, the only thing I wanted to work on in Google was actually GKE, which is odd, I guess, because people interviewing pick different positions. But I really wanted to work on GKE, because I love Kubernetes. And I have been working on Kubernetes, running it myself at another company for the previous four or five years. I got the opportunity to use GKE a few times in our multicloud experiments, and I just loved it. It was great.
Playing off a little bit of what Anthony said, you know, there's a few things that I love. I mean, taking advantage of some of the things in GCP is phenomenal. And some of them seem so simple. Like, GCP actually has flat networks. So within a single VPC, a Virtual Private Cloud, you can actually have a network that spans regions.
So why do I like that? Because some use cases dictate that I may have to have not just a cluster available within a region and multizones, but I may want to have geo-availability or geo-based routing. So we have the ability to easily configure that combining with load balancing. So that's a great feature.
You can piggyback things like multicluster services on top of that. While we tout 15,000 nodes in a cluster, the reality is that at some point, you're always going to break the boundary of a cluster, whether because of team dynamics, just scale, what have you. And then you're like, well, I've got my stuff in a cluster, you've got your stuff in a cluster. I need to expose things via Ingress, and it's a security mess. Who wants that?
But no, there's a spec out there, which we were key contributors to, called multicluster services. So I can now expose services across clusters transparently. It's just a simple service export, and it looks like any other service, across clusters.
It's features like that. And Multicluster Ingress, again. So you can see my theme here, right? I really come from having operated these things. I'm like, I want to have this stuff available, where I want it available. I want to have many clusters look like one, and I want to route traffic in and out of them. And GKE has all those capabilities based on GCP.
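To make the multicluster services point concrete, here's a minimal sketch of a ServiceExport object as defined by GKE's multi-cluster Services feature; the namespace and service name are hypothetical:

```yaml
# Hypothetical ServiceExport: makes the existing "checkout" Service in
# this cluster visible to other clusters registered to the same fleet.
# Consuming clusters then resolve it through a generated ServiceImport,
# at checkout.store.svc.clusterset.local.
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: store   # must match the namespace of the existing Service
  name: checkout     # must match the name of the existing Service
```

That single object is the "simple service export" Gari describes: no extra Ingress objects or firewall plumbing for cross-cluster calls within the fleet.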
And finally, I'd be remiss with the fact of-- I used to like to call it laziness, but it's not really. When you build applications and run and support things, all I ever wanted to do is work on my workload-- like, optimize my workload, my deployment, that type of stuff. I love the infrastructure, don't get me wrong. But if somebody else can run the infrastructure for me, I'm all for that.
But when I found out that GKE had this Autopilot mode-- because I was literally sitting there in November of last year thinking, why can't somebody run my nodes for me? Great that they run my masters, but why am I responsible for running my nodes? I've got to upgrade them.
Come over here and I'm like, oh, yeah, there's Autopilot mode. So when I get out there, I sometimes have to preface this myself to the customers, like, you'll have to excuse me. I truly am a fan of this, because I get so excited about Autopilot. Because I just want to know it's Kubernetes. I believe Google knows how to run it for me. And I take my app, take my deployment. I take an autoscaling policy, deploy it, magic happens.
I'm done. And then I can yell at Google if my app goes down, the regions go down. So those are the things that I really love about GKE, so.
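The workflow Gari describes-- take my app, take my deployment, add an autoscaling policy, deploy it-- might look something like this minimal sketch. The app name, image, and numbers are placeholders; on Autopilot, GKE provisions and manages the nodes behind the scenes as replicas scale:

```yaml
# Minimal Deployment plus HorizontalPodAutoscaler sketch.
# "my-app" and the image path are placeholders, not a real workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: us-docker.pkg.dev/my-project/my-repo/my-app:v1
        resources:
          requests:        # Autopilot schedules and bills by these requests
            cpu: 500m
            memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

With a standard cluster, you would also be sizing and upgrading node pools to fit these replicas; on Autopilot, applying the manifests is essentially the whole job.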
KASLIN: And I want to throw in a comment here that you've heard these different answers from the different folks on the show today. Something else that excites me about GKE that might not be visible from the outside is that these are the people working on and with GKE here at Google. They're really excited about GKE, and what it can do, and what their users are doing.
And I love every meeting I get to be in where I'm with these people who are just so excited about what they're working on, and they're so passionate about building things that are going to work well within GKE for their users.
MARK: Yeah. I mean, that's a really great point. I think a lot of people have seen really, really heavily growing communities around Kubernetes. We've seen a lot of interest in it. And it's really cool to hear about what six years of developing GKE on top of that has been.
But these are features that are trying to make people's lives easier when working with Kubernetes, whether it's the independent developer or the independent technical practitioner, all the way up to these mega enterprises that have to look at multicluster setups across the globe. There's tons of features in here. And so it is definitely super exciting.
I want to get back to talking about some of the stories that you can share from customers and folks you've talked to. I think it's really important to also talk about, what are some of the best practices you all can recommend for folks who really are getting started with what can be a very intimidating technology to start using? I think Autopilot, as you were just saying, Gari, is a very, very cool way to think about that. But what else?
KASLIN: So I think there's a couple of different ways to take this again. One is, what's the advice for new people? And also, what are the resources that people can use? I think those are both important areas to touch on.
In terms of advice for anyone getting started with Kubernetes, maybe you're in a situation where you're hearing some whispers, and it sounds like your team is going to have to start picking up Kubernetes, or maybe you just find it really interesting and want to explore it on your own. Whatever your perspective may be in getting involved with Kubernetes, something I say in the open-source community about getting involved with open-source work is to not give up. Because it can be a little intimidating at first when you start learning it.
Like we said, it's a pretty complex thing. And we've been explaining it with these analogies and in kind of fun ways that are trying to be approachable. And in some ways, they oversimplify things. And sometimes, when you start getting into the deeper depths and into the greater complexities of the topic, it can be a little intimidating, and you can feel like you need to step away.
As long as you come back and take it one piece at a time, learn a little bit at a time, you'll make progress. So don't get scared off from it. It can be a little challenging at first, but it's definitely worth the effort to keep with it.
ANTHONY: I think one of the keys when it comes to not giving up is really just start with your problem statement. What am I trying to solve for my team, for my company, for my organization? By the way, I love the Disneyland analogy, so I'm going to kind of lean in on that a little bit.
But some folks are going to Disneyland for one specific attraction that they're really into, right? Maybe they want to see Star Wars Land. And so you go in, and you start by making sure that you cover what your purpose for attending Disneyland was. And then from there, you can start to explore: what else can I get out of this experience?
And the same goes for GKE. The types of workloads that you already have containerized, you may be trying to solve a challenge with deploying multiple containers at scale that have to communicate with one another and trying to secure some of that. So maybe you deploy a set of services that are specific to what's already been built, and you're trying to build network policies between those and just rely on DNS, and get familiar with how Kubernetes base use case works.
And then from there, you can start to extend out, like, OK, now I can start to push the boundaries of what's within a single cluster. This is where it gets challenging to enforce quotas for teams and things like that-- really starting to build that strong foundation that solves the problem you're looking to rectify by adopting Kubernetes. And then grow and expand, and start to plan out some of the more complex architectures using Multicluster Ingress, multicluster services, defining how you run multiple clusters. It's really important that you don't get bogged down early in your journey by trying to figure all that out from day zero.
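For the "network policies between those services" step Anthony describes, a starting point might be a policy like this minimal sketch, where the labels and port are hypothetical:

```yaml
# Hypothetical NetworkPolicy: only pods labeled app=frontend may reach
# pods labeled app=backend on TCP port 8080 in this namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend       # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # the only pods allowed in
    ports:
    - protocol: TCP
      port: 8080
```

Once a policy selects the backend pods, any ingress traffic not matched by the `from` clause is denied, which is one way to get that early security foundation without leaving the base Kubernetes API.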
GARI: Definitely great points on both, I think. Don't give up. Don't be afraid. And yeah, don't try to solve everything on day one.
From my experience, I think learning this stuff is-- maybe start with the problems you solve, like Anthony said. But find some base reference. I guess throughout my many years of learning, I always refer back to something which I previously understood.
So the community is very open. If you ask questions about Kubernetes, no one's ever going to say don't be a dummy and ask questions about Kubernetes. You bash Kubernetes, that might be a different story. But if you ask questions about it, people are always willing to help.
So if you want to say, I've been running my applications x, y, z, or, I'm a Windows person doing .NET, or, I only know how to run things on VMs, then you kind of have a point of reference, right, that you can work from. And a lot of times, people will explain what the differences are, what the advantages are, what the places are that you should sort of start to look at.
So get kind of a base understanding of the problems that Kubernetes is trying to solve. And of course, I think we got into it. I'm always a hands-on person. GKE, like we said, we have Autopilot. So your cluster is up and running, so that's awesome. And then we have some great resources by our friend Kaslin.
So on Cloud OnAir, Kaslin's onboarding video is great. And there's lots of little training resources out there. But it kind of follows you through building an app: how do you deploy it, what are the things that you should care about.
So that to me is that. But I'll just reiterate. Don't be afraid. And honestly, be happy that you have the opportunity to learn this. Even if you don't use it and it's not the right fit for you, fundamentally, I think, understanding this-- where it's come from and where it's at-- is where the next generation of cloud computing is going. So take the time to at least get a little base level understanding or learning of it.
MARK: Yeah, I think there's a lot to be said there from being able to take something like GKE Autopilot-- again, maybe a great way to take a containerized workload and get it up and running very quickly, without having to do a tremendous amount of research and understanding into all the complexity that is Kubernetes-- but then taking that as a good starting point, getting hands-on, kind of focusing on the ease-of-use factor, while also being able to say, let's learn about it.
And there's tons of content. We'll make sure to include as much as we can in the show notes, because, of course, Kaslin, you've been doing quite a bit in this space. And Anthony, you just released a couple of videos where we look at optimizing, which is really important for Kubernetes. Because just as easy as it is to get up and running with Kubernetes, it's also very easy to consume a tremendous amount of resources with Kubernetes. So we like to look at the optimization side as an important part-- which I guess is another way of looking at Autopilot as a great solution, because it takes care of a lot of the optimization for you.
So with that in mind, you know, I think it's really important to kind of talk about-- you've all been working in this space for a while. You're all excited about different areas. You've also worked with a ton of different people and customers on implementing Kubernetes, on implementing GKE, on helping them learn and figure out if this is the right solution for them.
I would love to hear just some stories of the customers you've worked with. Like, what's your favorite story? And kind of, what about the customer made it such a cool experience for you?
KASLIN: In terms of groups that-- and users that I've actually worked with recently, I did a series of workshops with Anthony recently for a customer. And that was such an awesome experience. It was a bunch of different groups within the same customer, so each experience was a little bit different. And their experience levels with Kubernetes varied within each group. And their goals, and what they wanted to learn about Kubernetes varied within each group, even within the same company. So that was a really cool experience to get to see the different perspectives as people were learning.
And we got to take people through some hands-on exercises with GKE. And it's always really cool to see how people respond to using something for the first time. And they'll ask questions about edge cases and barriers that they start to see as they start to use it.
So that was a really great experience for me. I love to see groups and users as they start to explore these things and figure out how they fit into their use cases.
ANTHONY: Yeah. I think that, along the same theme of trying to approach different types of users, there's always that metaphor of an elephant and you're trying to figure out what it is by touching either its trunk, or its tusks, or its feet. I think it's the same with developers versus operators versus network admins versus security. So I think that experience of trying to show that GKE has features that will be relevant to all of these different individuals and users who will interact with it at some point is definitely an experience.
I think it also goes for understanding-- again, going back to what I was talking about earlier-- some of the different types of workloads that GKE supports. Niantic is one of my favorite stories about users and GKE: when they launched Pokemon Go in the beginning, they hit, what, I think 50x their expected traffic, and 10x their worst-case scenario traffic. Serving backend services to a mobile app like that is in line with the stateless type of workloads that Kubernetes and containers were often focused on, especially in their infancy.
And so I think as time has gone on, we've actually seen whole different profiles of workloads, and the teams who own those workloads, become customer references-- from Bayer Crop Science doing a lot of the processing that's required to help farmers understand which seeds they'll be using for their crops, to CERN, where a lot of the work they'd done with the Large Hadron Collider has been expanded using cloud infrastructure and GKE.
So these aren't your typical serving types of workloads. These are stateful or batch-processing types of workloads that GKE is really starting to enable as well. Back to Gari's point: how do we make sure folks are building what's important to their business or their research, and spending less time on the infrastructure? GKE has really enabled that across all different types of workloads, from Niantic to Bayer Crop Science to CERN. I mean, it's amazing.
GARI: I've only been here, I guess, like three months or so. But I've spoken with a lot of customers. I think that in general what impresses me most is their sort of passion for Kubernetes, the expertise that they've built up in Kubernetes, and for GKE specifically.
And I'm also impressed by the number of forerunners out there who have been with GKE since the beginning who actually take advantage of a lot of the advanced capabilities. So it's kind of cool to actually see.
Because sometimes you engineer for yourself, and you're like, oh, no one will ever really have this problem. And it's kind of interesting to find that, no, there actually are people who are pushing horizontal pod autoscaling, for example-- and when they push it, they find limits that are not necessarily ours but limits from Kubernetes itself, because you just never thought it would scale to that level-- and then trying to solve those problems.
And Anthony just mentioned some of [INAUDIBLE] scaling on that. We have customers in the retail or e-commerce business-- Shopify is a reference. Clearly, looking at how they use autoscaling was really interesting. It's like a perfect use case. They don't want to be overdeployed and overprovisioned, so they're scaling for the regular load, and during the big periods of burst, autoscaling is working-- both from the perspective of things like horizontal pod autoscaling, but also things like our node autoscaling, which basically just brings up new nodes when your workload needs them. So to me, it's exciting to see people use some of those advanced features.
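[The combination Gari describes-- pod-level scaling plus node autoscaling-- looks roughly like this in manifest form. This is a hypothetical sketch: the Deployment name and the numbers are made up, and GKE's cluster autoscaler then adds or removes nodes as the scaled pods require.]

```yaml
# Hypothetical example: scale a "storefront" Deployment between 3 and 50
# replicas based on average CPU utilization. The HPA handles pod-level
# scaling; GKE's node autoscaling provisions nodes to fit the new pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storefront
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```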
And I guess on the other side, there are some interesting use cases. I've spent some time working with our GKE and Anthos customers who are running GKE kind of anywhere. You know, I get the honor of meeting with them on quite a regular basis.
Like, Major League Baseball is an interesting one. Who would have ever thought? They have their main clusters running within GCP on GKE, serving the workloads behind mlb.com, but they also have Anthos clusters running in every Major League Baseball stadium-- which is kind of interesting-- processing the statistics and analytics generated there and pushing them back to the cloud into the rest of their processing.
So kind of looking at that model, too, like how we're expanding just not outside the cloud, but just for a different use case. You never would have thought that we'd be using this for that.
And why do they want that? Because they take advantage of the fact that Kubernetes is configuration-driven. You've got policies. You distribute common policies and common management. So once you have that kind of runtime out on those nodes, it becomes really interesting-- you can think of it as the new app runtime, where you just push out this configuration and drive it. And you're like, oh yeah, I get what you're doing.
Old-school-- way back in my early days, I don't know what it would have been-- we would have been pushing a config file, or somebody would have written some state machine themselves running at a low level.
And now you're just doing this with standard Kubernetes open-source software.
ANTHONY: Yeah, one thing I definitely picked up off of that, Gari, any time I go to a baseball game, now I'm thinking about where Anthos is in the stadium. And I'm just like, I need to make sure that I avoid it or don't cause any issues there.
But one thing that I really loved hearing was talking about how a lot of users will think, oh, maybe this one feature is something that's specific to me or unique to my problem set. What is really fascinating about my role is I get to connect the dots and work with folks like you and the engineering teams, where we actually identify like, oh, this is a feature request across multiple customers.
And that goes for GKE. Like, we can start to push this forward to help GKE users. But oftentimes, there's a lot of requests that actually belong in upstream Kubernetes. So we actually get to connect with a lot of the same leaders who are in our engineering team and actually talk about making improvements in the upstream code for Kubernetes.
So I think that's one of the best parts about being at Google, too: we have a tighter feedback loop for our managed solution, but we're also connecting customers with open source and collaborating there. So I love that.
MARK: It's a really cool place for GKE to exist, right? The goal is bringing Kubernetes to a lot of folks so they can run whatever type of workload they have. And I think it's really important to take a look at what you all were just mentioning: what does it look like to be a company that needs to run Kubernetes?
It could be MLB trying to crunch a bunch of numbers. It could be an e-commerce site like Shopify trying to do a bunch of serving workloads. It could be other groups that are doing actual science and data and analytics on these things that you wouldn't necessarily be aware of.
And I think, just like you were mentioning, Kaslin, these companies, they don't have one development team. They have 10, or 50, or 100, or 100,000-- whatever it is, tons of people that are developing, that are operating, that are keeping these things running. And they're all trying to do different things.
And so it's also a really cool perspective when you get to work at a company and not just talk to, oh, this is our web development team, or this is our operations team. Like, here's 10 different groups. All have different levels of experience. They all have different kinds of applications they're running. But they all want the same thing. They want to be able to focus on their applications, they want to be able to run them in an optimal way, and they want to be able to get them out there scaled, without having probably to worry too much about it, and just make it work.
So it's really, really cool to work with these customers, and hear their stories, and, as you were just mentioning, Anthony, actually help them solve those problems. I love to hear it.
GARI: One interesting thing we get to do-- I think my favorite customer story would actually be multiple customers. Both on the open-source side and specific to GKE, we hold these customer panels monthly. It's really interesting. We bring in some of our top customers, some of the ones you've heard of already.
And some of them could even be competitors with each other, right? But it's great, because it just becomes a discussion about how we make the product better. Sometimes it's, here's what open source needs-- can you guys make Kubernetes do x, y, z? We need this in GKE.
And then you'll hear another customer say, yeah, you know, we're doing that. Or, oh, you're doing it that way? We never thought of that. So it's always nice when you kind of build--
And I think that's the same thing that happens in the open-source community itself, too. You always find people who are doing that, playing off each other. But it's really interesting when you bring some of these more typical closed-door kind of companies together, and they just start collaborating.
And that's actually exciting to see, right? Because a lot of times it's all proprietary: I'm not telling you how I used the technology; we think this knowledge is power. And then it's even better when we actually drive that feedback into the product and get it out. Completing that loop and making everybody happy is always super satisfying.
MARK: Yeah. And a lot of that goes into kind of what makes GKE special. And I think that is a good segue into our next question, which is, we have had GKE being worked on for six years. I'm sure it'll go for plenty more years after this. What's exciting that's coming up in the future of GKE? And what is this going to enable businesses to do that they couldn't before?
KASLIN: Coming back to some of the areas I mentioned when I was talking about what's exciting about GKE-- for me, that was Windows containers, and it was stateful applications, especially Windows containers.
So Windows container support was added to open-source Kubernetes in 2019, so it's still relatively recent, and Windows containers themselves have only been around for a few more years than that. But since they're just getting integrated into Kubernetes, I think there's a lot of room to grow there: to see what Kubernetes adoption might look like for businesses that primarily work with Windows, and how that might evolve how Kubernetes functions-- not just on Windows but also on Linux and in other use cases.
So that's definitely an area where I'm excited to see how it grows. And I know GKE is doing some cool stuff there. Like we mentioned, you can run both Linux and Windows workloads side by side in the same GKE cluster. I've been exploring that recently, and it's super cool. So there's a lot of new feature development going on in the Windows container world.
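[Side-by-side scheduling in a mixed cluster comes down to the well-known `kubernetes.io/os` node label. A hypothetical sketch-- the app name is made up; the IIS image is Microsoft's public sample base image:]

```yaml
# Hypothetical sketch: a nodeSelector on the well-known kubernetes.io/os
# label steers these pods onto Windows nodes. Linux workloads in the same
# cluster select (or default to) linux nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: iis-app
  template:
    metadata:
      labels:
        app: iis-app
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        - name: iis
          image: mcr.microsoft.com/windows/servercore/iis
```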
I've also seen some surveys and stats, not just from Google but from the wider community, about the use of stateful apps in Kubernetes. And I'm hearing that stateful workloads and databases are becoming really popular use cases on Kubernetes, which is a shift-- like we said, in the early days, Kubernetes was very focused on stateless architectures. That shift has kind of happened right under our noses, and we're starting to learn why it's happening and what gaps it's filling.
So I think there's some interesting stuff there. And of course, as Gari mentioned, Autopilot is still relatively new. Came out in March of 2021. So that is kind of a new way of thinking about how you interact with Kubernetes, where Google is taking care of the nodes for you as well as the control plane. And I'm looking forward to seeing what new use cases that opens up.
So there's a few areas that I'm keeping an eye on. Oh, and one more to add in there is multicluster. We've talked about that a couple of times as it's come up here. I'm going to be doing some more exploration in that area and talking about it at some upcoming events.
But the way that businesses, especially large enterprises, are starting to adopt Kubernetes means there's an even more urgent need for work in the area of multicluster services, and multicluster in the open-source Kubernetes project as well. That evolution has been really interesting, because in the earlier days of Kubernetes, multicluster wasn't something the people working on Kubernetes worried about much. They worried about getting one cluster working really well.
And we've gotten to a point where we can make these really big single clusters, and there's all these features in a single cluster that can work with all these different use cases. But as we start talking about enterprise scale, it's going to be more and more multicluster workloads. So there's definitely great work going on there.
ANTHONY: I am totally going to echo what Kaslin's saying. Autopilot is really exciting. I think a fair criticism was that for someone who-- again, as Gari sort of alluded to earlier-- just wants to get their application running, but wants it to be Kubernetes compliant in case they need to migrate into a cluster in the future, it was really difficult. You were provisioning a GKE cluster, owning the nodes, and having to figure out how to tune them and manage their lifecycle yourself.
So Autopilot is really exciting to me, because I'm excited to see how users are able to quickly get up and running on a production-grade Kubernetes cluster with minimal effort. You just write a deployment YAML and a service YAML, then you're paying for whatever compute that you're requesting. And it's a really nice experience that kind of abstracts away a lot of the challenges that managing Kubernetes at scale brings.
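[The "deployment YAML and a service YAML" pair Anthony mentions might look roughly like this-- a hypothetical minimal sketch using Google's public hello-app sample image; on Autopilot you declare the workload and its resource requests, and GKE provisions and bills for the matching compute:]

```yaml
# Hypothetical minimal pair for an Autopilot cluster. The resource
# requests drive what compute you pay for; the names are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello
          image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
---
# Expose the deployment behind an external load balancer.
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: LoadBalancer
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 8080
```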
On the opposite side of that, I'm also very excited for the advanced profile of customers who have a lot of experience with Kubernetes-- many of them actually participate very actively in open source and are well aware of how to run a Kubernetes cluster in production. But they need things made easier for them, removing some of the toil that comes even with using a managed service like GKE. So I'm very excited for multicluster support.
As folks start to understand that they're deploying across multiple environments and multiple clusters, you're not only concerned about service discovery and high availability across those environments-- you also need to make sure those clusters are consistent. Following a model with an external source of truth, like a Git repository, ensures that all of these clusters have the same defaults your organization wants to employ, and that's something available today in Config Sync. It's something you can deploy to your GKE environments to make sure your clusters don't have configuration drift but stay consistent as you scale up, for all the reasons Kaslin mentioned.
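[The Git-as-source-of-truth pattern Anthony describes is configured in Config Sync with a RootSync resource, roughly like this. A hedged sketch-- the repository URL and directory are hypothetical:]

```yaml
# Hypothetical sketch: point every cluster at the same Git repository so
# org-wide defaults are applied consistently and drift is reconciled away.
# The repo URL and dir are made up for illustration.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/cluster-config
    branch: main
    dir: clusters/prod
    auth: none
```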
So I think really, what I'm excited about is seeing both profiles of customers, and everyone in between, starting to be covered. Those advanced users can, again, take advantage of more advanced features. And those users who are just looking to get started, or won't ever need those advanced features, can also get GKE running in production fairly quickly.
GARI: When I think about Autopilot and where we're going with it, it's going to open up Kubernetes-- similar to what Anthony said-- to a bunch of other customers who might not be your big enterprises or big digital natives. They've heard the benefits of containerization-- if that's even a word-- but they're worried they won't have the people to do it, or they don't want to pay some outsourcer to do it for them. So I think we're going to see more growth in workloads that can come to GKE and Kubernetes from that perspective, which excites me, because that's one of the core targets there.
An interesting area, as people build their new applications, is how we can get older applications over to Kubernetes. We've been doing a bunch of work on things like Migrate for GKE, which is kind of fascinating: basically taking VMs, introspecting them, packaging them up into containers, taking those assets, building them, and putting them into a pipeline that you can run day-two operations on if you want to optimize.
At the very least, it pulls those workloads out of the VMs. Knocking down barriers to entry is just going to push adoption and get people using this stuff faster, and then, I think, drive cooler features and more uses. So those are some of the main areas that really excite me.
The other interesting things we've been working on are kind of two areas. We did the 15,000-node thing, right? We can scale a single cluster to 15,000 nodes. I think managing a fleet of 15,000 clusters-- maybe not next year-- but, you know, aiming at stuff like that becomes pretty interesting.
So again, customers can span huge enterprises, can have multiple clusters or workloads [INAUDIBLE] again, breaking down another barrier. So those are the kind of things that I'd like to see-- getting more workloads on there, opening it up to the masses, scaling out across enterprises. With that, we're just going to get more and more users. And all these companies are going to reap all the benefits that they've heard of Kubernetes more quickly.
MARK: Yeah, it's definitely super exciting to hear about the combination of flexibility and power. I think, Anthony, you were talking a lot about it's going to be a good solution for folks to look at GKE, because some areas, they might be coming in and just saying, I need to run an app. Or some more power users want to come in and say, let me get into the nitty gritty of it. Let me get to the configuration level.
And so it's really cool to see GKE having such a great history over the past six years of looking at those cases and working with these customers. And then of course, the fact that it's going to continue to grow.
We're just about out of time, unfortunately. But I'd love to give you all the last word and see if there's anything else we've missed that you all would like to mention.
KASLIN: I'll mention KubeCon is coming up. Always exciting. I'm always really involved with KubeCon. I'm going to have a panel there and also a small keynote, where I will be going over the history of multicluster with Kubernetes. And we'll be talking about what's coming up next there.
So if you want to hear a little bit more about that, check out the KubeCon keynotes. And definitely check out the rest of the stuff going on at KubeCon. There are going to be a lot of really good talks there. I'll be around, and you're always welcome to reach out to me on Twitter @kaslinfields, or through the platform if you're attending virtually or anything like that. I'm happy to help you navigate it.
And also, we mentioned the Cloud OnBoard workshop earlier-- I wanted to say one more word about that. If you want to get hands-on, I did this Cloud OnBoard GKE workshop that was a four-hour training, so it's a pretty considerable commitment.
But it goes over so much that I think is essential when you're getting started with Kubernetes. It goes over why you want to use Kubernetes, what it's like to get started with GKE, how you connect GKE to other components of Google Cloud and make sure that your security needs are being met with GKE and things like that. So it goes over a lot of really great basics there. And I think we'll have a link to that hopefully in the show notes.
ANTHONY: I think all I would say is: definitely check out Kaslin's OnBoard session. It's a great place to start. And, back to what Kaslin and Gari were saying earlier, if you're new to GKE or new to Kubernetes, face it head on. Feel free to reach out to any of us-- I think we'll have our Twitter handles in the show notes.
So feel free to reach out, in addition to many folks at Google and many folks in the Kubernetes community on Slack. These are all great places to start. And people are willing to help, including us here. So feel free to reach out.
GARI: Our goal in being here, and everywhere else-- to do a bad movie quote-- is that we want you to love GKE as much as we do. And Kubernetes as well. The two combined.
So feel free to reach out. You'll have our handles, and you can find us-- it's not hard. We love to talk to people about, well, anything. Don't be afraid to ask.
KASLIN: And one more thing about KubeCon. We have a day-zero event actually happening just before KubeCon, and it's going to cover a lot of the same stuff that the OnBoard went over. So if you want to do that kind of thing live with us, with people ready and there to answer your questions, definitely join the KubeCon day-zero event.
MARK: We're going to have a ton of links in the show notes. We'll try to categorize them as best we can. But obviously, we've gone through a tremendous amount of stuff. So I want to thank you all, Kaslin, Anthony, and Gari, for coming in and chatting about it. I know it's super exciting. And I know that with Next and KubeCon coming up, I'm sure there's going to be tons of great announcements and just a lot of stuff to follow.
So thank you all so much for coming in and talking.
KASLIN: Thanks for having us.
ANTHONY: Thanks, Mark.
GARI: Thanks for having us, Mark.
MARK: It's awesome, obviously, to hear about the sixth anniversary and to hear about what's changed in the past year, what's coming up. But it's also really great to get a perspective from different folks who, while everyone's working with Kubernetes and GKE, you're all working in different areas of that. And so that means you touch different customers, that means you work with different types of teams, that means you just get to experience completely different landscapes, essentially. But it's really cool to see that there is one tool, GKE, that's kind of behind all of these different methods.
KASLIN: Yeah. I learned so much listening to Gari and Anthony. I get to work with them occasionally in various contexts, because, as we said, they're on different teams. They work with different areas of the product. And every time we get to work together, it's always a new learning experience for me.
And you all heard how excited they are about GKE and the different perspectives they bring to the product. So always great to talk to them.
MARK: Yeah, absolutely. We may have mentioned it in passing, but there just wasn't enough time for the interview. I'm sure we could have gone for another hour. But I'll make sure to throw some links into the show notes.
Because one of the topics we've talked about before-- we actually did a previous episode on cost optimization, specifically on GKE, because it's such a complex topic. There are so many different ways to look at resource optimization and how to configure things correctly-- which, as we said in the interview, is a benefit of Autopilot: a lot of things are optimized for you. But if you're in the more advanced segment, or you really need that level of configuration, what are some of the best practices and tips?
So I'll make sure to throw some links in the show notes definitely around, not only that previous conversation we had on optimizing, but also on some of the videos that Anthony has been working on that look at some of those topics in a deep dive method and really look at, hey, here's what you need to think about when you're looking at optimization.
KASLIN: An important point to remember about cost optimization is that we're talking about running your applications efficiently. It's not just about cutting costs. If it was, it would probably be a good bit simpler.
But since it's about running your applications as efficiently and as well as you can, all of the tools and resources we tell you about with cost optimization are just general Kubernetes best practices. And they're things you should learn about when you're running Kubernetes, whether cost is your primary concern or not.
MARK: I could probably do a whole spiel on cost optimization. But I think that's the easiest thing to think about when you think about cost optimization, is like, oh, I'm just trying to reduce how much I spend. And actually, that's not the case.
What you're trying to do is get the most value for what you're spending-- figuring out if there are ways to reduce spending, absolutely, but also improving the unit economics of what you're spending, and figuring all that out. There are a bunch of cool conversations there. I'll save it for another time.
But definitely check out those videos, because I think they're very helpful. And again, they're very specific to what types of resources you want to look at optimizing in GKE.
KASLIN: And if you want to learn more about that and all sorts of other GKE and Google Cloud topics, you'll definitely want to check out Google Cloud Next, which is coming up in October.
MARK: That's right. We'll put a link in there, too. But I think that's, what, October 11th. Google Cloud Next. And then KubeCon right at the same time, right?
KASLIN: Yeah, they are exactly overlapping with one another. So I know it's tough. It's going to be tough for me, too. There's going to be some Cloud Next content that I want to see which is related to Google Cloud stuff. But there's also going to be KubeCon content which is about just kind of the general open-source community and what's going on in Kubernetes. So I'll be involved quite a bit with that.
MARK: Yeah, speaking of which, you've been quite busy lately, huh? What have you been working on since the last time we chatted?
KASLIN: I don't know if we mentioned last time, but there is a YouTube series that I've been working on with some essentials around getting started with Kubernetes, kind of short videos to give you little snippets of how to get started with Kubernetes and the basics you need to know. So that's called "Kubernetes Essentials." And we'll put a link to the playlist for that series.
We also have some new content coming up for YouTube. So keep an eye on the Google Cloud Tech YouTube channel. And watch out for updates there.
And in terms of KubeCon, I'm going to be there giving a panel as well as a keynote. So I'm very excited about this. We mentioned a little bit in the interview. Definitely check it out if you're interested in KubeCon. It's going to be both virtual and in-person. So wherever you are in the world, you should be able to participate in KubeCon if you are interested.
One other thing that I've been working on that I want to tell people about is the Cloud Native Computing Foundation recently spun up a Twitch channel and asked several of their ambassadors to create shows for it so that there is regular content on the Twitch channel. And one of those is mine. And it's called Fields Tested.
And my next episode on August 19 at 4:00 PM Pacific, I'm going to be running through a capture-the-flag that was run at KubeCon 2019. And I'm going to have the creators of the capture-the-flag there as guests. And it's going to be my first ever capture-the-flag.
All virtual, so if you want to try out a capture-the-flag for yourself on Kubernetes, definitely join us, and you'll be able to do that. It's from securekubernetes.com if you just want to do it on your own.
MARK: So how does the capture-the-flag aspect work?
KASLIN: I'm going to find out.
I've literally never done a capture-the-flag in Kubernetes before. Of course, in concept it is a security challenge where you have to accomplish some goal by doing things that you probably shouldn't be doing in a normal system. So I'm going to be breaking Kubernetes security things, is my assumption.
Like I said, it's on securekubernetes.com. So if you go there, all the information about the capture-the-flag is there, and you can do it through that website, which I haven't tried yet, because I kind of want it to be a fresh experience, and also because I'm bad at planning out my work sometimes. But-- [LAUGHS]
MARK: If I had known that there was the possibility to have a show for breaking Kubernetes, oh, I could be a champion there. That's all I ever do.
Anyway, I wanted to thank you so much for coming on, Kaslin. Super exciting talk. Really glad to have you. And of course, we look forward to having you soon.
KASLIN: Yeah. Thanks for having me. See you next time.
MARK: And of course, thanks to all our listeners for listening. We'll see you all next week.
Kaslin Fields and Mark Mirchandani