Google Cloud Next Data, Analytics, and AI Launches with Eric Schmidt and Bruno Aziza
Mark Mirchandani is back this week with cohost Bukola Ayodele. We’re talking with Eric Schmidt and Bruno Aziza about all the awesome new analytics, data, and AI launches from last week’s Google Cloud Next conference.
Our guests start the show outlining the challenges clients face when storing, organizing, and analyzing data in the cloud. These needs have inspired Google solutions that focus on simplifying data management for customers. Next announcements like BigQuery Omni, which helps customers achieve full data visibility with cross-cloud analytics, and Dataplex, which facilitates data management at scale, will change the way companies think about their data. BigQuery integration with Connected Sheets and Looker's LookML lets customers build once and access from anywhere. The new Looker and Tableau integration revolutionizes the use of the semantic model in Tableau, allowing things like company-established data governance and the Looker Blocks ecosystem to pull into Tableau analyses. New Looker Blocks specifically targeted at the healthcare industry were also introduced at Next.
We talk about the ML announcements including Vertex AI Workbench, a fully-managed service used for data exploration aimed at simplifying the workloads of data engineers. Serverless Spark on Google Cloud shares these goals by making performance tuning, scaling, infrastructure provisioning, and other tasks fully-managed. The new PostgreSQL interface for Spanner lets clients use tools already developed in PostgreSQL while leveraging the global scaling and other benefits of Spanner.
Bruno and Eric share some favorite customer stories as we wrap up this week's episode. Albertsons, Renault, and others have interesting data journeys on Google Cloud, and our listeners can learn more in the YouTube series hosted by Bruno.
Eric is the Head of Advocacy for Data Analytics at Google and has been at Google for almost eight years. Previously, he was with Microsoft, where he led Advocacy and Evangelism there, too. He focuses on products like BigQuery, Dataflow, and Dataproc and leads a team of advocates who help customers turn data into value. In his downtime, Eric is a DJ at 90.3 KEXP in Seattle or online at kexp.org, where he focuses on global music culture.
You can find Eric on Twitter. His handle is @NotThatEric, not to be confused with the "other Eric Schmidt" here at Google. In fact, internally, people affectionately call him "Cloud E".
Bruno is the Head of Data & Analytics at Google Cloud. He specializes in everything data, from data analytics to business intelligence, data science, and artificial intelligence. Before working at Google, he worked at companies like Business Objects, when it went through its IPO, and Oracle, where his team led big turnarounds in the business analytics industry. Bruno also had the opportunity to help launch startups like Alpine Data (now part of TIBCO), Sisense, and AtScale, and helped Microsoft grow its data unit into a $1B business. He has been educated in the US, France, the UK, and Germany and has written two books on Data Analytics and Performance Management.
In his spare time, Bruno writes a monthly column on Forbes.com on everything Data, AI and Analytics.
Cool things of the week
- Next Reaction: Security and zero-trust announcements blog
- Next Reaction: New Data Cloud launches blog
- Next Reaction: Making multicloud easier for all blog
- Next Reaction: Features to reduce IT carbon emissions and build climate-related solutions blog
- Next Reaction: Monitor your conversations, get started with CCAI Insights blog
- GCP Podcast Episode 266: Data Analytics Launches with Bruno Aziza and Eric Schmidt podcast
- BigQuery site
- Bringing multi-cloud analytics to your data with BigQuery Omni blog
- Google Cloud Next—Day 1 livestream - Walmart video
- Dataplex site
- AppSheet site
- Cloud Looker LookML site
- Tableau site
- Vertex AI site
- Vertex AI Workbench site
- TensorFlow site
- Apache Spark on Google Cloud site
- New PostgreSQL Interface makes Cloud Spanner’s scalability and availability more open and accessible blog
- PostgreSQL site
- Cloud Spanner site
- Google Earth Engine site
- Google Maps Platform site
- Inside Industry Data Management 4.0 at Renault site
- Chess.com site
- Google Next Opening Keynote site
- Data Journeys with Bruno Aziza videos
- Cloud Next Catalog site
- Bruno’s Cloud Next Playlist videos
- Cloud Next Data Analytics Playlist videos
- Bruno on Linkedin site
- Lak on Twitter site
What’s something cool you’re working on?
Bukola is working on the Click to Deploy video series.
Transcript
[MUSIC PLAYING] MARK: Hey there, everybody, and welcome to episode number 281 of the weekly Google Cloud Platform Podcast. As always, I'm Mark Mirchandani and today I'm here with my colleague, Bukola. Hey, Bukola. How are you doing?
BUKOLA: Hi, Mark. Hey, everyone. I was on the podcast a couple episodes ago and I'm so happy to be here again.
MARK: Absolutely. We are so excited to have you back. It's always a great time. And we haven't had a chance to record together, so this will be a first for both of us.
BUKOLA: Exactly. I'm excited.
MARK: Absolutely. And we have a great conversation today. You know last week we were talking all about Next, which is a pretty big Google Cloud event for those who might be living under a Google Cloud rock, perhaps. But today we're going to be talking all about what?
BUKOLA: Yeah, so we're going to be joined by Bruno and Eric and they'll be talking to us about data analytics and AI announcements from Google Cloud Next this year.
MARK: Yeah. We did a great recap last week of just kind of all the announcements in general, but of course there are so many, we had to dedicate an entire episode to talking about the data and analytics and AI announcements. So we get to do a really good deep dive on that. And we actually had both Eric and Bruno on earlier this year as they were talking about some of the data and analytics launches from earlier because there's always more launches.
BUKOLA: Yeah. I'm excited to see what they have to say. And I'm looking forward to just hearing more about the new data analytics and ML launches announced at Next.
MARK: Absolutely. But before we get into that, why don't we go ahead and do our cool things of the week?
BUKOLA: Yeah, so I recently read this article on the Google Cloud blog by our teammate Max about security and zero trust announcements. He was talking about a couple of things that have come up recently, like automatic data loss prevention in BigQuery, which allows you to protect your sensitive data, such as phone numbers, credit card information, names, and addresses, identifying it and protecting it from leaks across your entire company.
And there was another thing that he mentioned in the article that was also really cool, which was the machine learning-based URL checking to detect phishing sites in real time, plus document detection, which can help do an in-depth scan of sketchy docs that might have malware and malicious attachments in them. So yeah, that was a really cool thing to hear about.
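The automatic DLP feature Bukola describes classifies and protects sensitive fields like phone numbers and credit card numbers. As a rough, stdlib-only sketch of the idea (the real service uses far more robust ML-backed infoType detectors than these toy regexes, and the pattern set here is invented for illustration):

```python
import re

# Toy patterns standing in for the detectors the real service provides.
# Cloud DLP uses much more robust "infoType" detectors than these regexes.
PATTERNS = {
    "PHONE_NUMBER": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[- ]){3}\d{4}\b"),
}

def scan(text):
    """Return a list of (info_type, matched_text) findings."""
    findings = []
    for info_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((info_type, match.group()))
    return findings

def redact(text):
    """Mask every finding, the way an automatic DLP policy might."""
    for info_type, pattern in PATTERNS.items():
        text = pattern.sub(f"[{info_type}]", text)
    return text

row = "Call 555-867-5309 or pay with 4111-1111-1111-1111."
print(scan(row))
print(redact(row))
```

In practice you would configure detection and de-identification through the Cloud DLP API rather than writing matchers yourself; this only illustrates the scan-then-redact flow.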
MARK: Yeah, absolutely. And I'm going to cheat a little bit because my cool thing was also Max's blog, but it was also four other blogs as well. There are five blog posts that we're going to include in the show notes and I definitely recommend checking them out because they're written by our incredibly talented and always fun to work with developer relations team, and they're basically recapping some of the cool announcements from Next.
In the same way that we talk about on the podcast, each of the blog posts goes into a bit more detail about what was launched and why developers and technical practitioners really care about these things. Highly recommend giving them a quick read, just because they are so great and because they do a really good job of summarizing a tremendous amount of information into just a few short blog posts. Well, with those cool things out of the way, why don't we go ahead and jump right into the conversation with Eric and Bruno?
MARK: Thanks for joining us, Eric and Bruno. Before we get started, could you all tell us a little bit more about yourselves?
ERIC: I'm the head of advocacy for data analytics at Google Cloud. And I've been at Google-- let me check my watch-- almost eight years now. Before Google, I also led advocacy and evangelism efforts at Microsoft. Here at Google I focus on products like BigQuery, Dataflow, and Dataproc, and I lead a team of advocates who help customers turn data into value.
In my downtime I'm also a DJ here in Seattle at 90.3 KEXP or online at kexp.org where I focus on global music culture. You can find me on Twitter. My handle is @NotThatEric, not to be confused with the other Eric Schmidt previously at Google. And my LDAP or email is affectionately cloude-- or cloud with an E-- @google.com.
BRUNO: And I'm Bruno Aziza. I'm the head of data analytics at Google. I specialize in everything data: data analytics, business intelligence, data science, artificial intelligence. Before Google, I worked at companies like Business Objects, if you remember them when they went IPO. I was at Oracle, where my team led big turnarounds in the business analytics space. I also had the opportunity to launch a couple of startups: Alpine Data Labs, if you know them-- it's a data science workbench bought by TIBCO-- as well as Sisense and AtScale.
And I worked at Microsoft as well, just like Eric, where I helped them grow their data unit to a billion dollars. You're hearing a little bit of an accent because I'm French. I have been educated in the US, France, the UK, and Germany. I've written two books in the space of data analytics, so I'm very passionate about data analytics. In my spare time, I do something a lot less glamorous than Eric: I write a monthly column on forbes.com on everything data and analytics.
BUKOLA: Awesome. Thank you so much for providing your bios, Eric and Bruno. Some of our avid fans might remember that you both joined us earlier this year to talk about some data analytics launches. Now we're talking about all of the many launches from Next this year. So before we go into the launches themselves, can we talk more about the high-level challenges that come from working with data in the cloud?
BRUNO: You bet. I think the best way to think about the products and the solutions that we bring to market is really to start with the customer needs, their jobs and their goals, and go from there. And if you think about the biggest issues customers have today in their data environment, what is it? In one word, it's complexity. I think everybody in the audience will probably relate to that. Data is everywhere. It's on prem, of course, in multiple silos, it's in the cloud, increasingly across multiple clouds.
So they have a hard time discovering and managing the data at scale. And so what we want to do is do everything we can to make sure they get the value from their data, and many of them struggle. If you think about the average chief data officer today, their tenure on average is less than 1,000 days. So there's a real struggle here. And so everything we announce at this event is around the same theme, which is: how do we simplify the world of data for our customers? For the organizations we work with, we want to be able to help them build an environment where, essentially, insights can be developed rapidly because the data is well governed. So we'll talk a little bit about Dataplex there.
We want to make sure it's discoverable. We want to make sure it's delivered with a high degree of personalization and relevance to the people that need to use the data. And you know, what I'm describing here is essentially how we at Google have solved this for ourselves for over two decades. If you think about the google.com site, it's nothing but the world's largest data and insight system, the way I look at it. We make the world's data simple. And what I mean by that is that behind this beautiful, simple page, we're dealing with a lot of disconnected, rapidly changing data, all types of data.
And we make it accessible to anyone without any training, and we deliver it with a high level of relevance. So when I look at what we're delivering, this is a prime example of what we've been doing ourselves. I joke sometimes and I say data analytics, which sometimes we say DNA here, is in our DNA. It's not a joke. I think we really relate well to the modern problems that customers have by delivering insights to what you could call insight consumers, in a way.
MARK: Yeah, and I think that's a pretty common pattern too, because we do see a tremendous number of challenges at Google that have been solved because of the amount of data that we have to handle. And so it's always really interesting to see how that came to be, how we created it, and then how we kind of turned it into a product to help folks who are dealing with a very similar set of problems. With that in mind, a lot of these launches at Next are upgrades or new features to products that we have been working on for a very long time. Let's dive into it. What did we announce at Next that really helps folks deal with their DNA, basically?
BRUNO: Yeah, yeah. So there are lots and lots of announcements, and I'm going to try and just kind of break them up by themes for you. The first thing on the way to working with data is getting full visibility on your data, right? Say you're a customer and you're building your customer 360 solution or a customer data platform; your data is bound to be spread across multiple silos or multiple clouds, as we talked about earlier. And so what we announced is the upcoming GA of BigQuery Omni.
BigQuery Omni is, if you think about it, a solution that enables customers to do cross-cloud analytics. And they can do that from BigQuery's single pane of glass. So for the first time, customers will be able to analyze data in Google Cloud, of course, in AWS, as well as in Microsoft Azure. And so who cares about this? I tend to say everyone, because from a business user standpoint, if you want to gain insights, well, the best way is to have one single view of all your data.
If you're a data architect today without Omni, you have to move data around; you've got to build these complex environments that cost you a lot more money, of course, but also slow you down as you're trying to deliver value to these people. So what I would say is probably one of the most meaningful announcements at this Google Next is BigQuery Omni. We have a great blog that shows the example of Johnson & Johnson and how they're able to combine all their data without migrating it. So a super important one.
By the way, I'm sure that your audience knows that BigQuery is a key part of our data portfolio-- you know, it's a big part of it. In fact, if you haven't yet, attend TK's keynote-- Thomas Kurian-- who, at about 71 minutes into the keynote, I think, invited Walmart's Chief Technology Officer and Chief Development Officer to talk about how they're using BigQuery and giving their data the BigQuery treatment. It's a really, really great talk. So that's the first one, right? It's everything around BigQuery.
The second announcement is the upcoming GA of Dataplex. So this is, again, the same theme, right? We want to be able to provide an intelligent data fabric to customers. And I think this is going to change the world for a lot of organizations, because it allows them to manage data at scale with a high degree of intelligence. In there we talk about a great customer, Deutsche Bank, and how they delivered a unified data mesh solution for their constituents. This is a big deal as well. Data mesh is a term that I'm sure your audience has heard a lot about.
And as organizations move from building a data ocean, where they put all their data into one place, to a data mesh, where they federate environments so people can self-serve data, all the way to building a data factory and data products, Dataplex is going to be a really important component of their deployment. Executing on a concept like data mesh with a smart solution like Dataplex is an important step, I think, that people are going to want to take a look at.
So Omni and Dataplex, huge, I think, consequential launches for the industry. I'll even tell you one more thing. I know this is probably not expected by the audience, but data really showed up everywhere at Next. For instance, we talk a lot about data architects and business analysts, but if you go and attend the productivity section we talked about deep integration between Sheets and BigQuery.
You know that today customers can use Connected Sheets to query billions of rows in BigQuery-- and yes, I just said billions. You know, Mark, I like to say that a billion is the new million. But the best way to think about it is that we're investing even more to make sure that business users can self-serve to a large amount of data.
So what we're doing now with Connected Sheets is that users will soon be able to take advantage of Looker's LookML, so they can write semantic logic into the platform once and have it accessed everywhere. So in a nutshell, we introduced a lot of innovation to make it simple not just for the architects and the heads of data, but also for the business users to work with their data. So we're really excited about all the innovation we delivered at this Google Next.
ERIC: There's a lot more to the story and you kind of teed me up. Whenever I was reading some of our notes last night I was like, all right, I'm going to drop this joke. So my joke goes something like this. Most enterprises on the planet at the end of the day are run by email, Outlook, Gmail, they run on Excel and Sheets, PowerPoint, et cetera. Yeah, there's data inside of SQL, Oracle, et cetera, but there's so many different tools that you can use.
Now it's not a funny joke, but it's true. Right? There are all different types of solutions and tools that enterprises use and whenever it comes to things like business intelligence and data analytics, this is definitely true. There are hundreds of tools out there. And one of the other announcements that we made was announcing integration between Looker and Tableau. And if you're a Looker and Tableau user, a lot of people were hand clapping and high fiving about this because it's opening up new possibilities for integration.
If you're not familiar with this, I'm going to spend a couple of minutes kind of breaking down why this is valuable. So Tableau: it's a powerful tool for data analytics. It's a mainstay for enterprises, especially if you've built your BI system around a traditional OLAP approach. So maybe you're running on SQL Server and you embraced the whole OLAP idea over the past decade. It's awesome. It's good for large-scale analytics.
At the same time, Looker has been a powerful tool for BI and data analytics, but we took a different approach: it was purposely built around a semantic modeling layer. So instead of driving your analysis through OLAP concepts, you're driving it through the semantic modeling layer, which helps break down complexity and produce benefit, especially whenever you're dealing with multi-org complexity. What does the concept of net revenue mean? What does the concept of lifetime value mean? Et cetera. That can all be codified inside of this semantic model.
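The "define net revenue once" idea Eric describes can be sketched in a few lines. This is not LookML syntax-- just an illustrative Python toy in which every downstream tool evaluates the same governed measure definitions (the data, measure names, and `query` helper are all invented for the example):

```python
# A toy semantic layer: business metrics defined once, in one place,
# so every downstream tool computes "net revenue" the same way.
ORDERS = [
    {"customer": "acme",   "gross": 1200.0, "refunds": 200.0},
    {"customer": "acme",   "gross": 800.0,  "refunds": 0.0},
    {"customer": "zenith", "gross": 500.0,  "refunds": 50.0},
]

# The single, governed definition of each measure.
MEASURES = {
    "net_revenue": lambda rows: sum(r["gross"] - r["refunds"] for r in rows),
    "order_count": lambda rows: len(rows),
}

def query(rows, measure, customer=None):
    """Evaluate a governed measure, optionally filtered by customer."""
    if customer is not None:
        rows = [r for r in rows if r["customer"] == customer]
    return MEASURES[measure](rows)

print(query(ORDERS, "net_revenue"))          # one shared definition
print(query(ORDERS, "net_revenue", "acme"))  # same definition, filtered
```

The point of the integration is that Tableau now reads those shared definitions from LookML instead of each dashboard re-implementing them.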
Now it's true that recently-- I guess it was about a year and a half ago or so-- Tableau also introduced their own semantic model, which is great. But the integration that we announced at Next highlights the fact that Tableau customers can now get direct access to all of LookML's semantic model. So the key piece here is that you'll be able to basically open up and democratize access to all the data that you may be analyzing inside of Looker, which is great. But you'll ultimately benefit from the deeper integration with the semantic model, especially for cross-cutting things like data governance.
So for organizations who have layered a lot of data governance inside of the semantic model, this will now get pulled through into your Tableau environment. So this is nice, right? We're basically opening up democratization while getting additional horsepower for things like data governance. The other thing I want to point out is that this will also bring forth Looker's expansive Looker Blocks ecosystem. So for those of you who don't know Bruno really well like I do, I know he's a big fan of toys. And the analogy that I like to use here is things like LEGOs. So Bruno, I don't know if you have the Millennium Falcon or not.
BRUNO: I do.
ERIC: So that thing is 7,500 pieces, right? Now you can imagine if you were to be able to, say, build that with, say, 10 pre-built blocks of LEGOs, you could build that thing a lot faster. Sure, it'd be less fun, but it would provide you more time to play with it after you were done. So this is the exact concept-- you could use this analogy for Looker Blocks. We have hundreds of existing blocks for things like data sources, pre-built analytic blocks, visualization blocks.
So if you're an IT admin out there running on multicloud, you could use the cloud cost management block to track spend across GCP, Azure, and AWS. And then maybe you could layer another block like Zendesk or Snowplow to blend cost data with event data about your business. So net faster time to insight, leaving you more time to understand what's happening in your business. Now I wish we had like blocks for LEGOs and stuff inside of Looker to make it even more fun.
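Eric's cost-management example boils down to normalizing spend from several clouds and joining it with business event data. Here is a hedged plain-Python sketch of that blending step (all figures and field names are invented; a real Looker Block ships this kind of logic as prebuilt LookML, not Python):

```python
# Multicloud spend records, already normalized to one schema.
SPEND = [
    {"cloud": "gcp",   "month": "2021-10", "usd": 12000},
    {"cloud": "aws",   "month": "2021-10", "usd": 9000},
    {"cloud": "azure", "month": "2021-10", "usd": 4000},
]

# Business event data, e.g. from a support tool.
EVENTS = [{"month": "2021-10", "tickets_resolved": 500}]

def blend(spend, events):
    """Blend total multicloud spend with event counts, per month."""
    totals = {}
    for row in spend:
        totals[row["month"]] = totals.get(row["month"], 0) + row["usd"]
    blended = []
    for ev in events:
        month = ev["month"]
        blended.append({
            "month": month,
            "total_spend_usd": totals.get(month, 0),
            "cost_per_ticket": totals.get(month, 0) / ev["tickets_resolved"],
        })
    return blended

print(blend(SPEND, EVENTS))
```

The value of a prebuilt block is that this aggregation, the per-cloud billing-export quirks, and the dashboards on top all come ready-made.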
BRUNO: Maybe that's coming next, who knows?
ERIC: Yeah. I'm making a joke because we're always releasing new blocks, and at Next we also released a new block specifically for the Healthcare NLP API, which provides simplified access to intelligent insights from unstructured medical text. It's compatible with the Fast Healthcare Interoperability Resources (FHIR) standard. You can extract things like medical concepts, medications, procedures, et cetera, helping providers better understand things like patient family history, known allergies, miscoding opportunities, patient cohorting, and much more.
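Entity extraction of the kind Eric describes can be caricatured with a dictionary lookup. The actual Healthcare NLP API uses trained models and returns FHIR-compatible structures; this toy only conveys the input/output shape, and the term lists are made up:

```python
# A toy stand-in for entity extraction from unstructured medical text.
# The real Healthcare NLP API uses trained models, not keyword lists.
MEDICATIONS = {"metformin", "lisinopril", "atorvastatin"}
PROCEDURES = {"mri", "biopsy", "colonoscopy"}

def extract_entities(note):
    """Return medication and procedure mentions found in a clinical note."""
    words = {w.strip(".,;").lower() for w in note.split()}
    return {
        "medications": sorted(words & MEDICATIONS),
        "procedures": sorted(words & PROCEDURES),
    }

note = "Patient takes metformin daily; MRI ordered, prior biopsy negative."
print(extract_entities(note))
```

The block then exposes structured findings like these to Looker dashboards for cohorting and coding review.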
So again, leaving the user, in this case the medical practitioner, faster time to insight and hopefully less time asking you, Bruno, can you please tell me all the medications that you're taking? Like this is an easy one. I don't take any medications. And make sure that they write that down right. So if you're a Tableau customer out there today, look to Looker for democratized access to data within your organization.
BUKOLA: Wow, that's so many data analytics launches for the audience to check out from Looker all the way to BigQuery. And you also made such a good point about how many tools a company tends to use and the challenges around getting all the different forms of data usable in those tools. To change gears a little bit, there were also some exciting launches in the intelligence and ML spaces. Can you break some of your favorites down?
BRUNO: There's a lot going on here and I'm really excited about the work that we're doing, just like Eric was saying, to democratize access to data but also doing the same for machine learning and data science. Now I know, Eric, you're passionate about serverless Spark, so I'm going to let you touch on that. And so I'm just going to touch on Vertex AI Workbench. First of all, you might remember earlier this year at Google I/O we introduced Vertex AI.
Vertex AI-- think about it as a unified data science and machine learning platform that covers all roles and all levels, from machine learning engineers to data scientists. In fact, at the time we shared an amazing stat around productivity gains from the unification of the platform: Vertex AI requires nearly 80% fewer lines of code to train a model versus competitive platforms. So we're really focused on how we bring assets together to give you productivity gains. And so at Next we introduced Vertex AI Workbench. So what is it?
Well, simply, if you think about it, it's a fully managed notebook IDE used for data exploration and data science workflows. But it matters because, again, it simplifies the work of data people. With Vertex AI Workbench, your data people can now launch multiple kernels using TensorFlow or PySpark from the same instance, schedule notebooks to run ad hoc or on a repeating basis, and change hardware profiles without having to shut down the instance.
So a lot of this simplification, essentially, is targeting better productivity, which in turn will lead to better value from data. That's really what we're focused on. When you look at some of the demos and the outcomes of using Vertex AI Workbench, data scientists can build and train models five times faster than they could with traditional notebook services. So it's a big deal for the data science and data analysis community here.
ERIC: You know what else is a big deal for the data science community? Spark, Spark, Spark. I should add that to my joke, right? The world runs on Outlook, Gmail, and Spark.
ERIC: If you're into data science workflows or you're a data engineer, you know that Spark is integral to most organizations for data engineering and for powering data analytics workloads, especially data science workloads. One of the key reasons is there's this very tight integration between Spark SQL-- basically doing pushdown into your structured or unstructured data-- and then getting connectivity back to the broader pandas ecosystem. So if you're a data scientist and you're focused on Python, you're living in the pandas ecosystem.
And then if you combine that with Spark's ability to horizontally scale, especially for parallelized analysis, you have a very powerful environment to drive your workload. But these benefits also bring a lot of challenges. Traditional Spark-based workloads require a lot of manual infrastructure provisioning and cluster deployment. They demand a lot of deep understanding of the control plane, especially whenever it comes to performance tuning.
These are things that data scientists and data analysts should not be burdened with. In my previous role at Google, I was the product manager for Dataflow. And whenever we released Dataflow, roughly around five, six years ago, it was a very similar idea. We needed to, in essence, unburden data analysts, data engineers, et cetera, from figuring out how to manage and tune large-scale transformation systems.
So the Spark community has to deal with a lot of these types of challenges today, which brings us to our announcement of Spark on Google Cloud, or as I have been calling it, Big Spark. The marketing team tells me to stop using that, but it really helps get the point across, because Spark on Google Cloud is now a fully managed autoscaling environment that, for the most part, is serverless. And what this enables you to do is simply write your PySpark code and submit it to the service. That's it. There's no more deploying of compute and storage infrastructure, including the cluster. It's all done for you and managed by the service.
Now to be clear, we do have an existing product in market, specifically Dataproc, which is a solution that provides similar functionality for [INAUDIBLE] based workloads. But Spark on Google Cloud takes us one step further: not only do we accelerate cluster deployment for you, you don't have to interact with or tune that system at all. And then we provide autotuning and scaling for the jobs that you submit.
So this is much like the value proposition for BigQuery, where you simply run your query. You're not having to worry about the underlying infrastructure. So with Spark on Google Cloud, you simply run your job. So if you have a big investment in PySpark today, especially for batch-based processing, check out Spark on Google Cloud. It will save you a lot of time.
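Eric's "you simply run your job" point is about where your code stops: you write the transformation logic, and the service handles provisioning, scaling, and tuning. As a rough illustration of the map/reduce-style batch aggregation you would normally express in PySpark, here is a plain-Python sketch with a thread pool standing in for Spark's executors (the log data is invented):

```python
# Plain-Python sketch of a map/reduce-style batch job: count request
# paths across partitions of log lines. In PySpark this would be a
# DataFrame groupBy; the thread pool mimics fanning work out to executors.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

LOG_PARTITIONS = [
    ["GET /home", "GET /cart", "POST /checkout"],
    ["GET /home", "GET /home", "POST /checkout"],
]

def count_partition(lines):
    """The 'map' stage: count request paths within one partition."""
    return Counter(line.split()[1] for line in lines)

def aggregate(partitions):
    """Fan partitions out to workers, then reduce the partial counts."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(count_partition, partitions)
    total = Counter()
    for partial in partials:
        total += partial
    return total

print(aggregate(LOG_PARTITIONS))
```

With the real service, you would write the same logic against PySpark DataFrames and submit the file with something like `gcloud dataproc batches submit pyspark job.py`, leaving cluster management to Google Cloud.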
BRUNO: Yeah. Again, you see, Mark, the theme here is how do we make it simpler for people, right? Our tagline should be just add data and then you go from there. So I think that's an important announcement. The serverless Spark is definitely something everyone should take a look at.
MARK: Yeah. I mean, I think it addresses two really key concerns, one of which you brought up already which was complexity. Right? People want to spend time working with the data, they don't want to spend time managing the infrastructure. I think we've heard that a lot of times and it's cool to see announcements like serverless Spark really lean into that. But then the second side is not everybody, especially these large enterprise companies, are going to be willing to have the switching cost of new platforms constantly. Right?
Once they develop a model or a tool or a set of tools that work for them, they want to continue to use that for a long period of time. And so when you bring out announcements like serverless Spark, you really have the ability to say, OK, great we get to keep all of our investment, but still reap the benefits of not having to spend all of our time managing it. It's a really, really cool set of announcements. I'm sure a ton of people are going to be super excited about that. But those aren't the only announcements, of course, that we had. There were no shortage of them. So what are some of the other cool announcements that we haven't had a chance to talk about yet?
ERIC: Another one that kind of dovetails into this concept of openness, especially whenever you're thinking about systems like Postgres. They're not exactly up-and-comers, but there is a really, really strong adoption curve for Postgres now for managing database applications. So this past Next, we announced a preview of a PostgreSQL interface for Spanner.
With this new interface, enterprises can now use skills and tools that they've already developed on top of Postgres while they're starting to look at and take advantage of Spanner's global scale and five-nines availability, while also providing strong consistency. Going back to that joke that I made: there are all different types of options for these solutions, and this is a great example of where enterprises are starting to look.
They're saying, I've outgrown Postgres's ability to serve my business and I need to move into something that's more global and scalable in nature, and this interface provides the ability to do that. So you get support for all of Spanner's rich feature set while pulling through support for Postgres's most popular data types as well as features. So we are enabling your migration to Spanner by reducing some of the barriers to entry.
And you get all this in a familiar Postgres interface, et cetera. So I really like the commitment here, as it gives developers a clear redirection path if the replatforming process goes south, and it also provides some needed longevity for tooling that may be lagging behind the core database migration.
BUKOLA: What about some announcements that further expand Google's tools to help democratize data beyond the traditional data scientists, analysts, and engineers?
BRUNO: Yeah. That's a great question, because up until now we've talked a lot about innovation that's relevant to data people. And I know you're going to cover some of this innovation in your next podcast, but I cannot not talk about the preview release of Google Earth Engine on Google Cloud. Now this launch is going to make Google Earth Engine's 50-plus petabyte catalog of satellite imagery and geospatial datasets available.
So that means customers will be able to integrate Earth Engine with BigQuery, Google Cloud's machine learning technologies, and the Google Maps Platform. I think about all the scenarios and use cases where your team is going to be able to use data to enable sustainable sourcing, save energy, and understand business risks. And we're integrating, frankly, the best of Google and the best of Google Cloud together.
So again, just going back to what we talked about earlier, how do we make it easier for everyone to use data to create a better business and sustainable future for our planet? So I think this one is a big deal. I'm really excited that you've actually dedicated a full episode to it because there's a lot going on there. But there are great demos that people can go and take a look at today. Google Earth Engine is a big launch for us as well.
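As a rough sketch of the kind of integration Bruno is describing: the collection ID below is the real Sentinel-2 surface-reflectance catalog identifier, but the helper function is illustrative only, and the commented-out calls require the `earthengine-api` package and an authenticated Google Cloud project, so none of this should be read as the official pipeline.

```python
# Sketch: the shape of an Earth Engine + BigQuery workflow. The pure helper
# below just assembles catalog-filter parameters; the commented section
# shows roughly where the real earthengine-api calls would go (they need
# credentials and an enabled project, so they won't run as-is).

def s2_filter(start: str, end: str, bbox: tuple) -> dict:
    """Parameters for filtering the Sentinel-2 surface-reflectance catalog."""
    return {"collection": "COPERNICUS/S2_SR",
            "dates": (start, end),
            "bounds": bbox}

params = s2_filter("2021-06-01", "2021-09-01", (-122.6, 37.6, -122.3, 37.9))
print(params["collection"])  # COPERNICUS/S2_SR

# With credentials, the real calls look roughly like:
#   import ee
#   ee.Initialize()
#   imgs = (ee.ImageCollection(params["collection"])
#             .filterDate(*params["dates"])
#             .filterBounds(ee.Geometry.Rectangle(list(params["bounds"]))))
#   # ...then export the derived signal and join it against business
#   # tables in BigQuery for the sourcing/risk analyses Bruno mentions.
```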
MARK: Yeah. I mean, it's going to be really exciting to talk about that. As you just mentioned, Bruno, our episode next week is going to be deep diving on that. So I feel like we wouldn't do it justice if we were to just mention it here. There's so much great content to talk about. So for people who want to learn more about Earth Engine, just stay tuned. Next week we're going to have a full episode on it.
BUKOLA: That's awesome to hear. I'll definitely check it out as well, Mark. So there were no shortages of customers and partners that were involved in Next this year. What are some of your favorite customer stories?
BRUNO: So you got me on my favorite topic here. People know that I like products. Now they know from Eric that I like LEGOs. I've invested a lot in LEGO. That's the way to think about it. But my favorite topic of conversation is customers. I always joke customers, customers, customers. And we certainly had an impressive lineup of data journeys throughout the event.
I think about Danielle, who's the Chief Data Officer at Albertson's, and Jaime, who's the VP of Data at Telus, both of whom I had the opportunity to interview in what I call our power panel. If you didn't catch that, of course, I highly recommend it. It's nonscripted, just a conversation with chief data officers, so hopefully people like it. I also particularly liked the showcase demos that we built based on the journeys of our customers. I'm thinking about, for instance, the Renault Group.
I know you're going to give me a hard time again because I'm French, so I have to talk about French companies. But Renault is a big automotive leader, and they've built this supply chain digital twin: 8,000 factory robots, more than half a billion data points scanned, 2,500-plus data trends aggregated. They're using the data for anomaly detection, proactive maintenance, and assembly line optimization. They're also doing real-time vehicle tracking.
So all of that is run on Google Cloud platform, and we built this great website experience where you can take a look at how they're using the data. We also did the same thing for chess.com. Now this is an example of a company that needed a platform like ours to scale. To give you a sense, their site experienced 10 years of growth in just six months. They went from 20 million to 70 million active users in one year, obviously boosted by the popularity of chess in 2020.
So this is a really great example of a company that needed to scale, and needed to scale quickly. We also featured chess.com in our keynote, along with the journey of Cardinal Health's CTO, who talked about the impact of data and how you can use data to reimagine health care. I think that's a meaningful testimonial as well. And then finally, the one that you do not want to miss is Suresh, who's the Chief Development Officer and Chief Technology Officer at Walmart. I touched a little bit on it earlier, but in TK's keynote he talks about how he's giving his data the BigQuery treatment.
So it's a great example. And then there are many more: Deutsche Bank, Tyson Foods. Everything we do is for customers, so it's amazing to see them come together and celebrate with us throughout the event. And I know that's what the audience wants to hear the most about, right? They can hear about products from Eric and me, but they really should go and attend these customer sessions. They have great lessons to share with us.
MARK: Yeah, I think the best part about these customer stories is that they really help contextualize the announcements. It's easy to talk about the announcements and the features and how great they are, but it's a whole different level when we talk about how folks are actually taking these technologies and implementing real change in their businesses or enterprises.
So I mean, absolutely. Bruno, we know you love talking about customers, and in fact, you have a weekly segment where you talk with these customers about what their data journeys look like. We'll make sure to include a link to that in the show notes. That's the Data Journey series, and it sounds like it drops every Tuesday morning. Is that correct?
BRUNO: Yeah, that's correct. Every Tuesday morning I sit down for 10 minutes with a customer. Again, not really scripted, right? The questions are simple: tell me about yourself, tell me about your use cases, and tell me what you learned. It's kind of a service to the community. I know it's hard not being able to go to events and connect with communities, so the goal of this series is: let's connect you with people like you who have succeeded and who are willing to share their best practices with you.
MARK: So definitely make sure to check that one out. And then we're just about out of time here, but before we wrap up, we've talked about so many announcements today. I'm sure some of the listeners are just completely overwhelmed. What are some of the best resources, Bruno and Eric, for folks to go to and learn more about not only the announcements, but also all these different stories that have been told with the announcements?
BRUNO: Lots of places to go. Eric, if you don't mind, I'll just give a few pointers to people. The most immediate one is to go to the Google Cloud Next site and watch the replays. All the content is available for free, so you can just block out a few hours and power through those sessions. I would start with TK's keynote and the Walmart one. I have a curated playlist as well, across customers and products and partners, and I know Eric made one too.
So we'll make sure to add those so you have them. But don't just take it from us; go and check out what the partners have announced. We made some really significant partnership announcements this past week. Eric talked about Tableau, but we also announced partner work with Collibra for data quality and Informatica for our migration factory. We're deepening our partnership with Fivetran, with whom we share more than 1,000 customers.
And we, of course, announced data engineering accelerators with Trifacta. So to recap: the Next site, of course; you can follow my LinkedIn feed, where I post these customer stories, just like Mark was saying, every Tuesday; and then, again, the partner announcements. And I know, Eric, you've got a few things as well.
ERIC: I was thinking: follow Bruno, follow Lak Lakshmanan, who heads up data analytics and ML solutions at Google Cloud, and follow me-- although sometimes I kind of go off the rails and talk about other things, like music and LEGOs. But I want to underscore that the Next site is probably the best place to go. Sometimes we talk about, yeah, there's other content over here.
I think we've done an awesome job, especially this year, of really homing in on the right narratives, highlighting customers in meaningful ways, and really bringing these product announcements to life at the practitioner level-- how practitioners can embrace these tools to help them out. So go immerse yourself and watch the videos. And also follow Bruno on LinkedIn. Shameless plug.
BUKOLA: Awesome. I'll make sure to follow Bruno and also follow you, Eric. Thank you guys so much for joining us today.
BRUNO: Thanks for having us.
ERIC: Take care.
MARK: Once again, thanks so much to Eric and Bruno for coming in. Always exciting to, of course, hear from them. But also they did such a great job of just talking about all the different-- I mean, there's so many announcements, but they did a great job of recapping the details of some of the cool ones. And it's always exciting to hear more about what those announcements are and why they're cool things to learn more about.
BUKOLA: Exactly. I learned so much through the entire conversation with them. And one of the things that was really fascinating to me was the new integration between Tableau and Looker. I can definitely see how it's going to benefit customers by providing a new level of access while enforcing existing governance. It was also really cool to learn about Looker Blocks and how helpful it's going to be, especially with allowing customers to have access to pre-built analytic blocks and visualization blocks as well.
MARK: Yeah. There was such a great point about all the tools that companies tend to use that may not be what you'd think of as data storage. Whether that's spreadsheets or presentation tools or docs, all of these things make up an entire business and how it handles its data. So it's really more complicated than just saying, oh, you need a good database and that'll solve all your problems. No, you need tools on the actual storage side, on how your business folks use them, on the presentation side, and then on business intelligence tools like Looker and Tableau.
MARK: And of course, one of the announcements that we didn't get to spend too much time on this time was Earth Engine. And that's because next week, we're going to do a full deep dive on it. We're going to bring in some awesome folks to explain to us not only the announcements that came out of Next, but also some of the cool things you can do with these announcements. I don't want to go too much into it because, again, we're going to have all of next week to talk about it. So if you're interested in hearing more about Earth Engine and some of the cool announcements around that and sustainability, definitely check out next week's episode.
BUKOLA: Awesome. I'm definitely looking forward to it.
MARK: So yeah, Bukola, as we said earlier, it's been a little bit since you've been on the podcast. What have you been working on?
BUKOLA: Well, in a month we're going to be launching season two of the "Click to Deploy" series, which shows you how to launch solutions quickly from the Google Cloud Marketplace. So we'll be talking about how to launch solutions like Redis and Databricks easily on Google Cloud Marketplace.
MARK: Oh, that's super cool to hear about. Yeah, the Marketplace, if folks haven't had a chance to check it out, can be a really great tool to find a lot of pre-built solutions and open source software like Redis, like WordPress-- things where you can just kind of click a button and it's all deployed for you. Again, it leans into this idea of being able to take a solution and build it into your cloud environment without having to do all the manual setup.
MARK: Yeah, those are super cool. So we'll definitely keep an eye out for those videos. I'd love to see it. All right, well thank you to all of our listeners for tuning in and we'll see you all next week.
BUKOLA: See you all.
Mark Mirchandani and Bukola Ayodele