2022 Year End Wrap Up
Happy Holidays from all of us at Google! This week, hosts Carter Morgan, Stephanie Wong, and Max Saltonstall are sharing their favorite moments from the year! From great partnerships with major companies, to new releases in some of your favorite Google software tools, to 100 trillion digits of pi, we’re breaking down some 2022 highlights and introducing special guest Podcast Producer Kevin McCormack to help with a fun podcast trivia game!
Carter Morgan is a Developer Advocate for Google Cloud, where he creates and hosts content on Google’s YouTube channel, co-hosts several Google Cloud podcasts, and designs courses like the Udacity course “Scalable Microservices with Kubernetes,” which he co-created with Kelsey Hightower. Carter is also an international standup comedian whose approach of creating unique moments with the audience in front of him has seen him perform all over the world, including in Paris, London, at the Melbourne International Comedy Festival, and, with Joe White, at the 2019 Edinburgh Fringe Festival. Previously, he was a programmer for the USAF and Microsoft.
Stephanie Wong is a Developer Advocate focusing on online content across all Google Cloud products. She’s a host of the GCP Podcast and the Where the Internet Lives podcast, along with many GCP YouTube video series. She is the winner of a 2021 Webby Award for her content about data centers. Previously, she was a Customer Engineer at Google and at Oracle. Outside of her tech life, she is a former pageant queen and hip hop dancer and has an unhealthy obsession with dogs.
Max Saltonstall is a Developer Relations Engineer at Google Cloud. He is a father, teacher, storyteller, speaker, educator, nefarious villain, game designer, juggler, and is only part zombie.
Cool things of the week
- Boost medical discoveries with AlphaFold on Vertex AI blog
- 6 common mistakes to avoid in RESTful web API Design blog
- Marketing Analytics With Google Cloud blog
Our Favorite Episodes of 2022
- GCP Podcast Episode 316: Google Cloud for Higher Education with Laurie White and Aaron Yeats podcast
- GCP Podcast Episode 317: Launching Products at Google Cloud with Anita Kibunguchy-Grant and Gabe Weiss podcast
- GCP Podcast Episode 325: Digital Sovereignty with Archana Ramamoorthy and Julien Blanchez podcast
Stephanie’s Honorable Mentions
Carter’s Honorable Mentions
Max’s Honorable Mentions
- GCP Podcast Episode 326: Assured Workloads with Key Access Justifications with Bryce Buffaloe and Seth Denney | Google Cloud Platform Podcast podcast
Transcript
[MUSIC PLAYING] STEPHANIE: Hey, everyone, and welcome to episode number 331 of the weekly Google Cloud Platform Podcast. This is Stephanie Wong. And today, I am so lucky because I'm joined by three different people from the podcast team here. These are the all-stars. It is Max Saltonstall, Carter Morgan, and Kevin McCormack. Kevin, this is new. You're the producer on the show. I can't believe we have you on.
KEVIN: Yes. Really appreciative to be on. I love just sitting behind the scenes and watching what you all do every week, which is just amazing. So thank you.
MAX: I consider it my job to try to make Kevin laugh as much as possible and just hope that he forgets to mute himself so that it messes up our podcast recordings.
CARTER: To the people listening at home, this podcast isn't possible without Kevin. He herds all the cats, puts all the things in the calendar. So it's exciting to get him on this side of the microphone.
STEPHANIE: He makes sure that we actually remember to click record and that we also remember to move our actual cats out of the room.
MAX: What are you talking about? My cat's been on, like, four episodes of this podcast, Stephanie.
KEVIN: No, you all are just too kind. This is absolutely your all show. You guys are amazing. So it's a pleasure, honestly.
MAX: I love that I get to work with you, Kevin, on a regular basis, just brainstorming, how do we make the show better, whether it's a specific interview or the way we highlight the content or just the way we can thread the stories together. I think it's really awesome that we get to collaborate so regularly. And you and I have been doing a lot behind the scenes on expanding the platform. And I'm really grateful for your help and your persistence in some of the hills we've been climbing together.
KEVIN: No, it's been a great year. And I'm just super excited too for what's to come in 2023.
STEPHANIE: So why do we have our special guest, Kevin, here today? Well, it's because it is officially our year-end wrap-up episode.
And, as is tradition, we are going to cover our favorite episodes throughout the year, along with some cool new metrics and facts that we don't know yet. So stay tuned for that. But let's go ahead and first cover the cool things of the week.
So my cool thing of the week is a simple plug for a blog post that I launched this past week. It's about AlphaFold on Google Cloud. So if you don't know AlphaFold, it is a deep learning algorithm by DeepMind, which is our research arm that focuses on solving intelligence problems. And AlphaFold made massive breakthroughs in the scientific community because it was able to predict about 200 million protein folding structures that help form the basis for pharmaceutical development.
And now you can run that on Google Cloud through a few different methods, including Vertex AI, using Pipelines and Workbench. But also, you have access to the public dataset through the public dataset program on BigQuery. So check out that blog post, which includes my video. It's a fun video, and I walk through all the ways that you can get your hands on it.
MAX: And get your hands on folded cool colorful pieces of paper. It's a really cool video.
STEPHANIE: Yes. Origami.
CARTER: Oh, wow. That does seem really cool. Yeah, all right, Stephanie, that's so cool. All right. My cool thing of the week is an article that just came out. And I liked it because it was about RESTful web API design. And so I'm getting into programming a bit more. I took a bit of a break for a while. And that's always one of the things you have to think about when you're trying to make a service for somebody else is, how do they communicate with this service? How can they use it?
And so this article by Varun Krovvidi and Geir Sjurseth goes over, like, six principles that you can think about. There's some diagrams, some pictures. And so some of it, thinking inside out versus outside in, some of it is not having too many API calls or not making your API too complex. And so these are just good ideas to keep in mind. I'm not going to give them all away. But definitely be sure to check that out on the Cloud blog.
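To make one of those principles concrete, here is a hypothetical sketch of the resource-oriented style the article encourages: nouns for resources, with the HTTP verb carrying the action. The paths and route names below are invented for illustration and do not come from the article itself.

```python
# Hypothetical illustration: resource-oriented routes use nouns plus HTTP
# verbs, while RPC-style paths bake the verb into the URL.
RESOURCE_STYLE = {
    ("GET", "/orders"): "list orders",
    ("POST", "/orders"): "create an order",
    ("GET", "/orders/{id}"): "fetch one order",
    ("DELETE", "/orders/{id}"): "cancel an order",
}

def is_resource_oriented(path: str) -> bool:
    # Crude heuristic: resource paths avoid camelCase verb segments
    # like /getOrders or /createNewOrderForCustomer.
    return all(seg == seg.lower() for seg in path.strip("/").split("/"))
```

The heuristic is intentionally crude; the point is only the contrast between `/orders` and something like `/getOrders`.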
MAX: My cool thing of the week is a really neat blog post about marketing analytics by my colleague Bukola Ayodele. And she breaks down how you can bridge Google Analytics data and use BigQuery and get some fantastic information about audience segmentation, about insights into what your audience is doing, and trends to help you improve your marketing work and spend and choices. So check out the blog post in the show notes. And I love seeing these stories that connect multiple products into a real, tangible solution for a problem that people have right now today.
STEPHANIE: Very cool. All right. I know that we are done with our traditional cool things of the week, but we do have a special segment today for the most cool thing of the year, which Kevin is here to share. Let's go ahead and give a rundown of what you're about to do, Kevin.
KEVIN: Yeah. So I just dug into some of our information from the last year of "GCP Podcast" and pulled some factoids to give you guys a little bit of trivia to see who can guess correctly. I just got four questions for you.
STEPHANIE: Ooh, OK.
CARTER: Oh. I'm ready. What do you got?
STEPHANIE: And by the way, for the audience, we don't know the answers. So Kevin has pulled the data from the past year, and we don't know what the answers are. I have an idea from last year, kind of, but let's see what it is.
KEVIN: Yeah. And you don't even know the questions either, so let's just get into it. Number one.
KEVIN: No? That wasn't the answer?
KEVIN: Number one. How many total episodes of the "GCP Podcast" were published in 2022?
MAX: OK, well, it's a weekly podcast, so that's a starting point.
CARTER: Yeah. I would say probably about 10 episodes off, right? So that would be 41 to 45, in that range?
STEPHANIE: So everyone give your guess then.
CARTER: Well, I got to go with 42 in honor of that being the answer to everything.
MAX: Yeah, that's the answer. Yeah. Well, I'll say 43 because I think Carter is on the money.
STEPHANIE: All right. I'm going to go with 45.
KEVIN: Wow. I'm amazed. We have a winner, someone nailing it exactly. That is Max. Yes, we published 43 episodes, including this one. That'll be it.
MAX: All right. So this is clinching it.
CARTER: Good work.
MAX: There we go. Well, I'll expect my prize at home, Kevin.
KEVIN: Well, I'm going to be even more amazed if you guys can get this one exactly. So next question is, how many different guests did we have in 2022, bearing in mind that we did have a few people repeating? So unique guests in 2022.
MAX: Whoa. So usually it's twoish. Sometimes it's one.
STEPHANIE: Yeah. And we had 43 episodes.
MAX: I'm assuming we're not counting guest hosts, right, where we had other people on, but they were a host or co-host. We're really talking about the guests.
CARTER: I'm going to say 100 minus 30. So I'm going to say, like, 72.
MAX: You stole my answer again, Carter. I was going to say 70. All right. I got to go 73 then because that's my lucky shtick.
STEPHANIE: All right. My guess is going to be 67.
CARTER: Yeah, I think it's lower, yeah.
KEVIN: Again, I'm really impressed here. One off. Carter, what did you say?
CARTER: I said 72.
KEVIN: All right. It's 71 unique guests--
KEVIN: --this year.
CARTER: So close.
CARTER: Fermi estimations for the win.
MAX: On your next Google interview, to join the "GCP Podcast."
KEVIN: Number three here is, who hosted the most episodes in 2022 among the three of you?
CARTER: I got to go with Stephanie on this one.
MAX: Yeah, I'm assuming it's Stephanie.
CARTER: Yeah. I don't think this was a surprise. Stephanie is a machine.
MAX: Whenever the question is who did the most something, it's Stephanie.
CARTER: Stephanie, you do the most, but in a good way.
STEPHANIE: I'm doing the most, thank you. OK. I'm going to have to go with Stephanie.
MAX: The hostess with the mostest.
STEPHANIE: There we go.
KEVIN: Well, correct across the board. So yes, Carter and Max actually each had 11 episodes hosted, and then Stephanie doubling that with 22 here--
KEVIN: --in 2022.
KEVIN: And then finally, I just dug into our social media, our Twitter feed. And I'm curious if you guys can guess which episode tweet received the most retweets this year.
CARTER: I think it's the Shopify episode. What's it, like 290?
STEPHANIE: I'm quickly scrolling through all the episodes right now.
MAX: No checking Twitter. No checking Twitter.
STEPHANIE: I know. Don't worry. I'm not checking Twitter. I'm going to have to go with the Shopify episode or GKE turns seven.
MAX: I think my guess is not the Shopify episode. That's all I got. I have no idea.
CARTER: What is it, Kevin? What is it?
KEVIN: So Stephanie, close. It was not Shopify. It was a GKE-themed episode, but not GKE turning seven. It was the GKE release channels with Kobi Magnezi and Abdel. That just got a ton of traction. I know there's a lot of people in that community out there, so it was really cool to see that that did really well for our Twitter feed.
MAX: Very cool.
STEPHANIE: All right. Got to do more of that next year.
CARTER: Oh, and props to the guests and the host on that one.
KEVIN: And that's what I got.
MAX: Nicely done, Kevin. Thank you.
STEPHANIE: Very cool. Thank you, Kevin, for collecting all that data and for being the best producer on this planet, I think.
KEVIN: Thanks for having me on, and yeah, really looking forward to hearing you all share your favorite memories from this year.
STEPHANIE: Thank you. OK. So on to the juicy goodness of the main content here. We are going to cover our top episodes throughout the year. We're going to round robin it. And then we're going to play some clips from each of those episodes to give you a little taste in case you want to go back and listen to the full thing.
All right. So one of my favorite episodes, and this is not in any specific order, is the resiliency episode with Shopify, actually. It's number 290, Resiliency at Shopify, with Camilo Lopez, who is a resiliency lead, and Tai Dickerson, who is a production engineer. I thought it was so fascinating to learn about Shopify's resilience practices. They had this migration to Kubernetes in the past couple of years, from on-prem to cloud, and that begat new resilience needs. And they also had an incident in 2014 that changed their approach.
And I'm really impressed that they are so open about talking about it. And on top of that, Black Friday and Cyber Monday have been a huge traffic surge for all of their clients. So it's a huge test for them every year, always breaking personal records of egress, throughput, and data traffic. But they mentioned that aiming for 100% uptime isn't realistic. So you really have to plan for failover and measure the metrics that take priority. So check out this clip from Camilo.
CAMILO: I don't think we ever tried to get 100% because the amount of effort to get there is unconventional. You will have to essentially make your system completely static, analyze every single line of code and every single interaction within the system. This is not practical. But within the bounds of what the user's experience is going to be, that's what we care about. So did 99.99% of our users get to sell what they needed to sell for the median speed that they needed? Yes.
STEPHANIE: I just loved how they take a truly measured approach to uptime and failover planning. And one other concept that I loved learning about was enforced pacing, where they basically have a code freeze, but they still let developers go out during that period and push out code based on the average deploy rates leading up to Black Friday and Cyber Monday. So check out this quote from Tai.
TAI: So we did have a code freeze for a period around BFCM. But what can sometimes happen with these code freezes is immediately before or immediately after, you have people shipping a ton of features either that they're trying to rush out beforehand or that they're trying to get out now that the freeze is gone. And so by using this sort of enforced pacing-- because any change is risky, so we're able to limit the amount of risk to the baseline and then try and keep the period where we were just leaving things as they were and not touching them unless there was an emergency. We were able to keep that period as short as possible.
STEPHANIE: Yeah. So I actually refer back to this episode a lot. And I referred back to it during the episode about DevOps with Nathan Harvey. And this team is such a great example of a high-performing team for SRE because they also talk about culture as being a big driver of resilient success. So there's a lot of goodness in this episode with Shopify. There are so many takeaways that engineering teams can walk away with, especially since Shopify learns from their own outages, and they become much better as a result.
And fun fact, I just checked Twitter. There's new Black Friday, Cyber Monday stats that just came out for Shopify. They recently, this year in 2022, achieved a 99.999% and above uptime while averaging three terabytes per minute of egress traffic across their infrastructure, which is about 4.3 petabytes of data per day. So a lot still happening there.
CARTER: Oh, wow. All right. That was a great episode. I'm going to be really interested to check out that tweet and see how the scale changed from last year. I remember-- I'm a little bit off on it probably, but they said something like we're getting a football stadium full of people ordering every second or millisecond or something like that.
CARTER: And wow, just the scale that that's operating at. And then also to go from four nines to five nines, the amount of work that takes. They say it's an order of magnitude more work, more resources needed, to just add another nine. So super impressive. And yeah, that was a great episode. What do you think, Max?
MAX: The scale of those online retail operations in the huge crunch of those two major days is just intimidating. You'll see people, especially some of these smaller sellers on these big marketplaces, for whom sometimes half of their revenue comes from that. And so that downtime is such a big, sometimes even existential, threat to them that it's fascinating to me that it's worth it to Shopify to invest so heavily in reliability and resiliency, because it's actually critical to their customers, these sellers, being able to continue to thrive on their platform.
CARTER: Yeah, and it's interesting because another thing they invested in a lot of was training, internal training. And so I remember that episode. They said, well, we have teachings that happen so that we can teach people how to operate at this kind of scale. And it's like, from the outside in, from this episode, it sounds like Shopify is doing just so much right when it comes to delivering service at scale.
MAX: It's a lot like what we do internally at Google where we do have these SRE teams, but they only come in after engineers have already built a service that meets a certain threshold of reliability. And so we have to teach everybody across the engineering and product world at Google how to build reliable systems before they even get to enjoy the benefit of working directly with SRE.
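Two of the numbers in this segment are easy to sanity-check with back-of-the-envelope arithmetic: the downtime budget that separates four nines from five, and the 3 TB per minute of egress Stephanie quoted. A quick sketch (the availability math is standard; the Shopify figures are as stated above):

```python
# Allowed downtime per year for an availability target of n "nines".
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(nines: int) -> float:
    availability = 1 - 10 ** -nines      # e.g. 4 nines -> 0.9999
    return MINUTES_PER_YEAR * (1 - availability)

# Four nines allows ~52.6 minutes of downtime a year; five nines only ~5.3,
# which is why each extra nine costs so much more engineering effort.

# Sanity-checking the egress stat: 3 TB/minute sustained for a full day.
tb_per_day = 3 * 60 * 24                 # 4,320 TB
pb_per_day = tb_per_day / 1000           # ~4.3 PB, matching the quoted figure
```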
CARTER: Well, this is an interesting segue. I want to share one of my favorites. This was episode 308. It was the New Pi World Record episode with Emma Haruko Iwao and Sara Ford. And then it was myself and Brian Dorsey as the host.
And this episode was so fun because, again, we're talking about scale, the scale Shopify operates at with a number of customers and having to keep that going. This was scale at a different level of, what is possible with cloud computing? You know what I mean? And so I think the best way to get into it is just to play this clip now. So let's listen to this clip, and then I'll comment on it in a second.
SARA: I want to geek out a little bit and call attention to the moment when the 100 trillion digit result was final. And Emma opened it up and looked at it. She was the first human to know what the 100 trillionth digit was. And that's just neat. I mean, I get chills when I think about it. We talk about human exploration in the physical world sense, but there's also in the mathematical sense too. So I got to give a shout out to Emma for that particular achievement. Emma, I'll let you say it. What was the digit?
EMMA: It's 0.
SARA: The 100 trillionth digit of pi is 0.
CARTER: 100 trillion digits of pi. So that's what Emma and her team calculated. And they were talking about the amount of resources that went into that, and the cost. And the sheer numbers were amazing. We're talking 600 terabytes of storage were needed, plus memory. But the biggest VMs that were available at the time were 256 terabytes, which is huge, and still wasn't big enough.
And so what they had to do was create a fleet of network-attached disks, where each of the disks held some of the data that they were using to calculate. And so a question that came up after this episode was, well, why does this matter? It's just digits of pi. Who's going to do that ever? But Emma said something really amazing about why this is necessary and why it's cool. Let's listen to that.
EMMA: You need to combine everything, CPU memory storage, sometimes network if you decide to use network attached storage, and reliability because you need to run a program for such a long time and process a lot of data. So it checks everything, every component, every aspect of the system.
So I think calculating more and more digits, it's a good proxy for the overall capabilities of computers. So we didn't calculate 100 trillion digits in 2019. And today, we do 100 trillion digits. And somebody, Timothy Mullican, calculated 50 trillion digits using a server in his house.
CARTER: And so basically what Emma is saying is you can use calculations of pi and other things like this to measure the progress of computing. So in 2019, they couldn't calculate 100 trillion digits. Now they can. And so if you look at how much CPUs are sped up, how much more storage you can have, and all of this being in the cloud, that's actually really amazing.
MAX: Well, and it speaks to the infrastructure around that computing too. It's not just how big can I get my disks or how much memory can I attach to one of these VMs, but how can I string them together? I think what Emma and Sara and the team did was so cool because it went beyond some of the limits to really push the structural design limits into new territory.
STEPHANIE: Yeah. I also found that episode so impressive and fascinating. We also did a video about it with an actual pie in the studio. And we tried to cut it up into-- well, it was supposed to represent a trillion pieces, so we could show the relative size of one digit of pi and just how enormous a trillion digits really is. And so to understand the work that went into it, and the compute processing and memory that was needed to compute all of these digits, is just truly mindbending.
CARTER: Wait, wait, wait. So how many pieces did you get the pie into?
STEPHANIE: Oh, just 999 billion.
MAX: After you ate it, of course.
STEPHANIE: Right. It's just powder.
MAX: Turns out NASA only needs 15 digits to do their stuff, which I think is fascinating.
STEPHANIE: There we go.
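The record run itself is far out of reach at home, but the flavor of it can be sketched with the Chudnovsky formula, the algorithm typically used for these pi records (an assumption here; the clip doesn't name it). A toy version using Python's decimal module, good for a few hundred digits rather than trillions:

```python
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> str:
    """Return pi truncated to `digits` decimal places (toy-scale only)."""
    getcontext().prec = digits + 10          # working precision + guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(13591409)
    for i in range(1, digits // 14 + 2):     # each term adds ~14 digits
        M = M * (K**3 - 16 * K) // i**3
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return str(C / S)[: digits + 2]          # "3." plus `digits` decimals
```

At record scale the same series is evaluated with far more sophisticated arithmetic, which is exactly why the storage, memory, and networking limits Carter described become the bottleneck.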
MAX: We've got more just in case they come looking. I was really excited this summer to hear an episode, Stephanie, that you co-hosted with Kelci Mensah, our summer intern that we all worked with, about Google Cloud for Higher Education. And they had Laurie White and Aaron Yeats on. It was really neat for me to hear because we have two folks who are working really closely with the students and faculty and staff in higher education but coming at it from different angles and yet also working together in complementary ways.
And Laurie, having been in computer science faculty for her whole career before coming to Google and joining our team, has a great understanding of what faculty want and need and how to reach them. And then on the other side, Aaron is really focused on bridging the gap to students and helping equip them with what they need. So I really enjoyed hearing Aaron say this about what he thinks about his job.
AARON: To build and to speak to the future student leaders, we have to start with students who are just entering school, or they're in their second year. And so then how does that content play? I mean, we're providing them with Google Cloud content, and we're like, oh, now go have a career. Well, we like to provide these students with, I would say, a sampler. Laurie was talking about certifications, and that takes time. And we've worked hard to bring a three-year certification to a 40-hour concentrated pathway that goes on LinkedIn.
CARTER: What? I miss Kelci. It's so interesting having an intern. But no, it's just interesting seeing how when you think about cloud computing, you often think of the cutting edge, the AI or the research. But it's a lot of I don't want to say boring, so I'm not going to say that. That word came out. But it's a lot of--
MAX: It's a lot of existing concepts but used in a new way. And I think that's where Aaron and Laurie are noting that there's a gap, that we are teaching current computer science students the same concepts. But then how to use those when your resources are a huge scalable cloud as opposed to the machine that's under my desk, that's a different approach to problem-solving. And if we're not equipping current students with the ways to think about that cloud-based problem-solving, we're not really equipping them to go into the workplace.
STEPHANIE: 100% agree. It's how the next generation of people that are working on the cloud and using the cloud are going to learn. So I really admire the work that they're doing in the higher education space because it's very much needed and doesn't get enough attention, I think. Awesome episode. Completely agree. My next top episode is episode number 315, which is Cloud Functions, 2nd Gen, with PM Jaisen Mathai and our DevRel engineer Sara Ford.
And this was one of my favorite launches this year that I did some content for, including a video as well. It was an evolution of Cloud Functions. And we had a great dynamic in the conversation, from both the product perspective and the developer perspective. And we got to learn about what's new in the second generation, and also what's different about the developer experience.
There was a lot of excitement in the community around the new, longer HTTP processing times. This is something that people asked for, and we delivered on it. And there are also other new features around performance, cost, and control, including larger instance sizes, concurrency, and traffic splitting. So let's go ahead and hear from Jaisen about just how amazing Cloud Functions is for getting started on Google Cloud.
JAISEN: I think that one of the things for Cloud Functions that I really appreciate is that it is very approachable. And as Sara mentioned, the easiest way to start would probably be through the Cloud Console, through the browser, where you can just click and write a little bit of code and hit deploy. You don't have to set up your local environment.
And so I think that approachability is really a big, important factor of this because we want Cloud Functions to be able to be something that you can use. If you just know some Python, you should be able to use Cloud Functions. You don't need to know anything else. And so I think that that is one of the aspects of Cloud Functions that is appealing for many people.
STEPHANIE: Yeah. And it's just such a great overview of second gen differences because it really does highlight some of these beautiful features, like concurrency and traffic splitting. And I really enjoyed how they enumerated when to use Cloud Functions versus other compute products or serverless products, like Cloud Run. It really does boil down to, can your code be discrete and act as a function? And as Jaisen and Sara were saying, it's a really great way to just start using Cloud Functions, like a Start Here sign on a game board and then go from there.
MAX: A lot of folks I think get confused, like, should I do Compute? Kubernetes? Cloud Run? Cloud Functions? And so this breaking down where they're applicable to what kind of problem-solving is so valuable.
CARTER: Yeah. I find that really interesting about what technology is letting you be able to do now. So just there, it's like, if you just write a Python function, you can make a scalable maybe not an application, but a scalable endpoint that can do a lot and have a lot of power. Yeah, I just think that's an amazing thing. And so that idea of maybe democratizing technology and making technology more accessible to non-expert coders or non-expert programmers is a really cool concept.
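As a sketch of just how small that "scalable endpoint" can be: a 2nd-gen HTTP Cloud Function is essentially one Python function that takes a request. The deployment wrapper shown in the comment uses the real functions-framework decorator; the names and greeting logic are illustrative, and the handler body is kept as a plain helper so the example is self-contained.

```python
from typing import Optional

# With the functions-framework package installed, a deployable 2nd-gen
# HTTP Cloud Function would look like:
#
#     import functions_framework
#
#     @functions_framework.http
#     def hello(request):
#         return greet(request.args.get("name"))
#
# The handler logic itself is just an ordinary Python function:
def greet(name: Optional[str]) -> str:
    return f"Hello, {name or 'World'}!"
```

That single function is the whole service; the platform supplies the HTTP server, scaling, and (in 2nd gen) per-instance concurrency and traffic splitting.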
And that's what one of my other favorite episodes was about: episode 327, the ML/AI Data Science episode with Dataiku. And so that was a fun one, where it was Jed Dougherty and Dan Darnell. And Anu Srivastava was the other host with me. And I'm going to start it off with just a quote so you can hear what it was about.
But the idea was that Dataiku is making AI accessible to people that don't have that knowledge. And so what does that mean for technology going forward, these emerging technologies? What does that mean for experts in the field, or just everyday workers that are in the field that want to use these ML technologies?
DAN: Yeah, I think data science is evolving. It's evolving both for experts and for everyone else. And if you're in a subject matter expert role, or you're an engineer on the shop floor, or you're an actuary at an insurance company, or any of these technical roles, these people who have use cases that you're thinking about, it's possible for you to start to act on those and to build something.
And for the experts, freedom is coming, freedom to really hit those moonshot projects. And you shouldn't be worried at all about the future of what you're doing. You should be excited to have the freedom and the capability to really do some amazing work.
CARTER: And so what we just heard was Dan talking about the freedom that comes with this now. So one of the examples he gave throughout that episode was maybe you're a worker on the floor, and you have products that have to-- tools that you have to use. And these tools have to be back in your cart at the end of the day because it causes problems if you can't find your tools when the next person goes on their shift.
He's like, well, now, that line worker, that person, that engineer on the shop floor is able to use ML to identify these tools so that there's a system that goes, hey, that wrench isn't on this board, and get a notification. Before, maybe five years ago, maybe even two, three years ago, that's a very complex problem to solve. And now with tools like Dataiku, anyone, not necessarily the person that's an ML expert or a data scientist, is able to use those learnings and apply them to very specific problems.
MAX: I love that a lot of these small companies and emerging players are broadening the accessibility of AI and ML tools and making it much easier to get in and use these really, really powerful technologies to solve everyday problems. I love seeing that.
STEPHANIE: Yeah. And you see this cascading effect that it has on, like you said, people that are working on manufacturing floors or general business users. But again, I think it's just providing a scale of tools, giving you different levels of abstraction and tooling. And I think that's a theme across not just companies like Dataiku, but also at Google and other cloud providers. We're giving different levels of abstraction depending on the skill level or the needs of your team. So really exciting to see that.
MAX: Yeah. And this is related to one of the other episodes I wanted to highlight, episode 317, Launching Products at Google Cloud, where we're talking about, how do we bring some of these new products to the public, to the market within Google? And what are the steps, what are the processes, what are some of the tricky parts about getting something from an idea to an actual Google Cloud product or service that you can use? So here's a great quote from Gabe on some of the ways that they have to think about it.
GABE: So the launch consideration stops being just about whatever it is we're launching. And now, all of a sudden, it opens this can of worms of all this complexity behind the scenes of how we want to talk about motions the platform is doing as well. So there's a micro story of whatever it is that you're launching and then this awesome macro story of the platform that wraps whatever it is that's happening.
MAX: It was great to talk to Gabe and Anita about these different perspectives on bringing a new product to the public. And what do we need to do to make sure it's ready in terms of talking about it, educating people about it, making sure that it works, it doesn't have major bugs? And they both had very different perspectives on how we approach launch readiness, much of it talking about AlloyDB as an example of a brand new database product.
STEPHANIE: I also love that episode because we heard from the perspective of Gabe, who is in DevRel, but also from the perspective of Anita, who's in product marketing. And you would think that product management is very similar or product launching is similar across tech companies. But at Google, it can vary actually quite a bit.
And so they were talking about how it really depends on the tier or the priority level of the launch. There are different cross-functional people involved each time. There are private preview and beta periods as well, and we get customers involved. So lots of different factors come into play, and it was great learning about that.
Another really fun episode that I loved is episode number 307, FinOps with Joe Daly. I had a complete blast talking to Joe Daly, who joined the FinOps Foundation, and he has been setting up the ambassador program, supporting Meetup groups and producing his own FinOps pod.
I personally didn't know that much about FinOps, but it's a really interesting intersection between finance and operations that emerged as cloud matured. And as Joe puts it, you're probably already doing some form of FinOps, and you don't even know it. But it's more about tuning those skills so you do it better. So check out this clip with Joe.
JOE: My wife listens to my calls sometimes. And after I'm done, she's like, why are you teaching people life skills? It's like, look. You might not have a FinOps team. And you might not be purposely acknowledging FinOps. That does not mean you are not doing FinOps. It's just simply about implementing policies so that you can be financially accountable. And if you're ignoring that, you're just simply doing a bad job at it.
CARTER: Stephanie, I recently started basically turning my credit card, just blocking it. And then I only turn it on when I want to make payments or purchases. Is that FinOps?
STEPHANIE: Yeah, pretty much. I mean, as you heard, Joe is an amusing guy who knows his stuff. He compared it to leaving your car running all night: you'd be wasting energy, so you turn it off when you're not using it. It's the exact same thing when it comes to cloud spending. You remove the risk of overspending by turning things off. So I think your credit card example is correct.
You don't see a lot of former tax accountants like Joe on this podcast, but he proved that there is a really important intersection with cloud. And in that episode, you learn that the FinOps Foundation teaches a lot of philosophy about controlling costs and being financially accountable in the complex world of cloud. So check it out. I highly recommend it.
CARTER: Very cool. I actually really want to listen to that episode now because I'm like, yo, I need to get my money together. What you got for me, Joe?
MAX: I just leave my money running in the garage overnight. Is that OK?
CARTER: You got money that can be in your garage?
STEPHANIE: I mean, hey, you're doing FinOps, just poorly.
MAX: I live in New York City. I don't have a garage.
STEPHANIE: Yeah. Who are we kidding?
CARTER: So OK. This is slightly related, but an episode I liked was 289, the Cloud Security Megatrends episode with Phil Venables. What made me think of it was that Phil had some examples that were really interesting to me, maybe because security at that scale is a space I don't know as much about.
One of the things he talked about was a big question he gets asked: is the cloud really more secure than on-prem or private infrastructure? So listen to Phil's quote here.
PHIL: But there's something deeper at play that we observed. And when we sat back and thought about it, we realized there's a bunch of what people call megatrends in various other fields. And the megatrends for cloud security are really at play here. And these are things like economy of scale. So as Google Cloud and, in fact, all of the hyperscale providers get bigger and bigger and bigger, the unit cost of security goes down.
So trivial things, for example, what we just deployed by default, security chips in our servers that do boot security and firmware validation and many other things, the unit cost of that at the scale we operate is a lot lower than the unit cost that organizations can implement on-premise in a smaller scale environment, just to pick on one example. And so that economy of scale drives a higher level of baseline security in the cloud.
CARTER: What he's talking about is, yes, the cloud is more secure, but some of the reasons why are maybe unintuitive. He says the cloud is operating at such a scale, hyperscale, he calls it, that little things just happen automatically. For example, you heard him talking about security chips just being deployed by default. The cost for you to do that on your own, if you were going to buy one of those chips yourself, would be very high. We talked about FinOps. FinOps. Get it?
But if you're doing it at this kind of scale, you start saving money because that spend is spread out across everyone that uses the cloud service. So that was really cool. And before I forget, I should mention that was a crossover episode, which is another reason I thought it was so cool. We got to work with the "Cloud Security Podcast." So Anton Chuvakin and Timothy Peacock were on, as well as Phil and Mark Mirchandani, which I think is the greatest podcast to ever exist.
I'm always like, Mark, will you teach me how to podcast like you? And he never does. But I really want him to. But yeah, that was a fun episode because it's just really interesting to see the concerns for people that are looking at global security, whether it's data sovereignty and what you own and who can get access to it.
Is that different if you're in the US versus in Europe? Things like that. And then also just thinking about what it means to have shared responsibility, or what it means to have Defense in Depth or layers of defense. I've lost that term now. I know you know it, though, Max. What's that term?
MAX: Defense in Depth?
CARTER: I did have it right. Damn. And so it was just a fun episode hearing about all that.
MAX: Security is like an onion, Carter. The more you dig into it, the more you cry.
STEPHANIE: That's what they mean by Defense in Depth. But seriously, for everyone listening, Phil Venables is a longtime veteran of the security space. He has a ton of experience. He was at Goldman Sachs for over 20 years doing this, and he's on the board of a few companies. So I highly recommend listening to this episode to learn from someone who's very experienced here.
MAX: On the topic of trust, we also had a great discussion with Archana and Julien on Digital Sovereignty in episode 325, where we were talking about some of the same concepts you were mentioning, Carter: how do you build trust for the consumers of a cloud platform? Not just trust that the data or the workloads I put up are going to be secure, but also trust that only the people I think ought to be looking at my data are actually looking at my data.
And this comes up a lot with regulation and with different laws around how governments, especially, can move to the cloud. So there's a great quote from Julien that I think summarizes some of the complexity here.
JULIEN: It's a very public and very necessary discussion. And the good thing is that we see we are making good progress collaboratively on this. Both technology providers, regulators, and lawmakers are progressing in the process.
MAX: Because it's complicated. It's really, really complicated. And it's not just Google does a thing, and then it's fixed. It's much more than that. And it's working with the regulators. It's working with partners, and it's working with other technology providers to make sure that we can meet the needs of any of the digital sovereignty requirements that come to us.
STEPHANIE: Yeah. Digital sovereignty is a complex and difficult space, especially as governments and economies change, and it's only going to become more of a requirement for cloud customers. So it's great to see that we're trying to pave the way here and have those necessary discussions.
MAX: There's a really interesting call out also in that episode about, what do you do when things might need to change rather rapidly? Like, say, I don't know, your country is getting invaded by a neighboring country and you need to get your data out of the sovereign nation to protect it because if you leave it within the boundaries of your country, it could be at more risk.
STEPHANIE: Wow. That's actually interesting. I mean, things can change on a dime. So how do we handle those situations? I have two other honorable mentions that I want to include because I thought they were also really great episodes. The first one is episode number 323, Next '22 with Forrest Brazeal and me. I had such a blast going back and forth with Forrest.
He's super sharp and punny, and we were both at Next in Sunnyvale, running around interviewing folks about their favorite launches and what they would do with the extra day if we had a four-day workweek. There were big mentions and themes around Cloud Workstations, Software Delivery Shield, and the new Innovators Plus subscription. So check that out.
And then my second one is episode number 298, Celebrating Women's History Month with Vidya Raman. She is an amazing woman and tech leader at Google who's been here for over 15 years. She's pivoted from different roles into product management back when Google was mainly a consumer business. And she gives us her critical tips for success, including what she calls her advisory board and how to build that. So I highly recommend those two episodes. I think you're going to walk away with a lot to learn.
CARTER: Yo. OK, OK. An honorable mention on my end is the Prometheus episode, which I never mentioned: episode 312 with Lee Yanco and Ashish Kumar, talking about how Home Depot was one of the early adopters of monitoring the way it's done now, what that means, and how it changed their organization. And again, the scale of Home Depot. I don't know if I've ever been to a city without a Home Depot. So that was a really interesting one.
I loved the Shopify episode, but I said that already. So I'll just honorable-mention one thing I didn't say earlier: the reason I loved the pi episode so much was that it was also one of the first video podcasts we've done, to my knowledge. It went out on the YouTube channel, and I just thought that was really interesting and fun.
MAX: Very cool. My honorable mention is 326, Assured Workloads with Key Access Justifications. It touches on a lot of the same topics as the Digital Sovereignty episode, but it also introduced some really cool technology options for companies to protect their data by choosing when, where, and under what circumstances it can be decrypted or used. I found it a really engaging discussion. I got to actually sit down at a table with Seth and Bryce, which I rarely get to do for these interviews, and I thought it came out really well.
STEPHANIE: Amazing. Well, we basically covered half the year with how much we talked about today, but I just want to thank you both for co-hosting and leading this podcast. I want to thank Kevin. And most of all, I want to thank all the listeners for sticking with us through 2022. We've got more in store for next year. Have a great holiday break.
MAX: Woo-hoo! Enjoy.
Stephanie Wong, Carter Morgan and Max Saltonstall
Continue the conversation
Leave us a comment on Reddit