Google I/O 2022 - Ambient Computing is getting real
Every announcement at Google I/O 2022 had AI at its heart. No surprise for an “AI-first” company, but what was still a vision in 2021 is finally becoming reality. The 2021 I/O introduced MUM and the idea of multimodality in Search; 2022 focused on actual applications of MUM (without really mentioning it by name) that go beyond a few Search features.
The biggest highlights revolved around how to synthesize text and combine it with visuals:
Auto-generated chapters for videos (driven by DeepMind) and summaries for Google Docs
Glasses that transcribe the world for you in real time
Scene Exploration: insightful information overlays for anything your camera “sees” (a “supercharged control + F for the world around you”)
And, of course, Multisearch.
Multisearch and the localization of the internet
The number of keywords Google shows a Map Pack for has been growing, especially for head terms like “SEO” or “Suits”. The interesting thing about head terms is that they are searched for a lot and their intent is fragmented, meaning people have different intentions when searching for such an ambiguous term. For example, are people looking for an agency or a definition when they search for “SEO” on Google?
How Google interprets that is obviously up to Google. The company mentioned measuring tab clicks (images, news, maps, etc.) to decide what vertical Search features to show (e.g., a Map Pack). But could it also simply decide to show more Maps results? Is Google really showing more Map Packs because users want them (signaled by clicks on the Maps tab), or because Google wants them?
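To make that concrete, here’s a toy sketch in Python of the signal Google described: what share of searchers for a query click over to a vertical tab. Everything in it, from the click log to the threshold and decision rule, is invented for illustration; Google has not published how it actually weighs these signals.

```python
# Toy illustration only: an imagined tab-click log and an assumed threshold.
from collections import Counter

# Hypothetical log of which vertical tab users opened after searching "seo".
tab_clicks = ["maps", "all", "maps", "images", "maps", "news", "maps", "all"]

counts = Counter(tab_clicks)
maps_share = counts["maps"] / sum(counts.values())

# If enough searchers reach for the Maps tab, surface a Map Pack inline.
MAP_PACK_THRESHOLD = 0.3  # assumed cutoff, not a known Google value
show_map_pack = maps_share >= MAP_PACK_THRESHOLD
print(f"Maps tab share: {maps_share:.0%} -> show Map Pack: {show_map_pack}")
```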
Map Packs are the epitome of Local Search and a competitive advantage for Google over Amazon. Google is losing the fight over e-commerce but winning the fight over retail since Amazon doesn’t provide Local Search. That’s an important opportunity.
Over 70% of Alphabet’s total revenue still comes from ads, but how much more can Google squeeze out of Search and YouTube? To keep its insane growth rate up, Google needs to find new revenue drivers. Products like Google Cloud grow fast (45% YoY) but still make up only 15% of what Google Search delivers. Local Search could be a new source of revenue, and showing Map Packs for more keywords is one way to tap it. The intent is already there.
Multisearch, Near Me and Ambient Computing
Between 2013 and 2015, “near me” searches doubled. But between 2016 and 2018, they 5x’ed. [1, 2]
More people search with a location context than ever before, but it’s not just about proximity. Back in 2018, searches for “near me now” queries already grew 2x, and in some cases 9x, year over year. Customers want to know if they can get something immediately, which might be why Google adopted inventory and shipping-time schema and shows that information right in the SERPs.
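For reference, here is a minimal sketch of what that markup can look like, built in Python as JSON-LD using schema.org’s Offer, OfferShippingDetails, and ShippingDeliveryTime types (the ones Google documents for shipping data). The product and all values are invented examples.

```python
import json

# Minimal product offer with availability and shipping time (schema.org types).
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Running Shoe",  # invented product
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",  # inventory signal
        "shippingDetails": {
            "@type": "OfferShippingDetails",
            "deliveryTime": {
                "@type": "ShippingDeliveryTime",
                # Days to pack and ship the order.
                "handlingTime": {"@type": "QuantitativeValue",
                                 "minValue": 0, "maxValue": 1, "unitCode": "DAY"},
                # Days in transit to the customer.
                "transitTime": {"@type": "QuantitativeValue",
                                "minValue": 1, "maxValue": 5, "unitCode": "DAY"},
            },
        },
    },
}

# Embedded in a page as <script type="application/ld+json">…</script>.
print(json.dumps(product_jsonld, indent=2))
```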
In the 2022 I/O keynote, Prabhakar Raghavan shared a new vision for the future of Search:
Search reimagined. Any way and anywhere.
That’s profound! Alphabet’s mission is to “organize the world’s information and make it universally accessible and useful”, but the vision for Search seems to go from text to context.
Users should search from anywhere, meaning Google fully leans into the explosion of near me queries. And they should be able to search any way, meaning through voice, pictures (Lens), and keywords. Enter Multisearch.
Multisearch is Google’s attempt to merge ways of searching with context (location and time). At I/O 2022, Prabhakar Raghavan stated that “near me will work for everything from apparel to home goods and restaurants.”
He then went on to show an impressive demo of Google understanding the dish in a photo and recommending restaurants in close proximity that serve that dish, based on Google reviews and photos.
The combination of text and image search within Multisearch is the most tangible representation of MUM so far. It allows you to upload a picture and add textual context. The example used in the I/O keynote is a picture of a dress combined with the word “green” to search for the same dress in green. There are no more hard walls between formats when it comes to expressing intent, a vision long in the making.
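MUM itself isn’t publicly available, but you can approximate the idea with an open image-text model like CLIP: embed the dress photo and the word “green” into the same vector space, blend them, and rank catalog images by similarity. Here is a rough sketch using the sentence-transformers library; the filenames, the naive averaging, and the two-item catalog are all placeholder assumptions, not how Google does it.

```python
# Rough approximation of image + text search with CLIP, not Google's MUM.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # joint image/text embedding space

# Embed the query photo and the refining word, then blend them.
img_emb = model.encode(Image.open("dress.jpg"), convert_to_tensor=True)
txt_emb = model.encode("green", convert_to_tensor=True)
query_emb = (img_emb + txt_emb) / 2  # crude fusion; MUM surely does more

# Rank a tiny, hypothetical product catalog of pre-embedded images.
files = ["green_dress.jpg", "red_dress.jpg"]  # placeholder images
catalog_embs = model.encode([Image.open(f) for f in files],
                            convert_to_tensor=True)
hits = util.semantic_search(query_emb, catalog_embs, top_k=1)

best = hits[0][0]
print(f"Best match: {files[best['corpus_id']]} (score {best['score']:.2f})")
```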
Ambient Computing and MUM
In 2019, Google’s SVP of Devices and Services, Rick Osterloh, shared his vision for Ambient Computing at Made by Google and laid the groundwork for what we saw at this year’s I/O. The idea behind Ambient Computing is that devices should be all around us, connected, and helpful while fading into the background. But to help, devices need to know what we want, and text is a very inefficient way to express that.
This is where we bridge the gap between the 2022 I/O and the 2019 Made by Google keynote: Ambient Computing needs richer input than text alone, namely voice, pictures, and text combined. Now MUM can tie the three formats together, and Google can execute its vision of Ambient Computing.
MUM is the brain and Search the heart of Ambient Computing. Google’s hardware devices are the eyes and ears. In a smart way, Google built devices that can be present everywhere in people’s lives:
Your home = Nest & Google Home
Your car = Waymo
Your body = Android/Fitbit/Pixel (phone/watch/Buds)
That sounds exciting but also scary. No wonder Google pushes privacy so much. They’re not just scared of antitrust lawsuits but plan to collect more data than customers can imagine today. Privacy needs to be built in for this to work.
If you thought iMessage was a lock-in, wait until you have to decide whether to equip your home with Android or iOS.
This brings me back to a concept I introduced a while ago but have been quieter about lately: Platform Confluence.
Google’s long play
In 2017, Google mentioned that ⅓ of all mobile searches are location-related. Smartphones were the enabler of Google’s Ambient Computing vision, but by now, nearly everyone on earth who can own a smartphone has one. The iPhone came out 15 years ago and was a huge success, but now we’re waiting for the next big thing. People often mention the Metaverse, AR/VR, or self-driving cars stepping into that role, but none of them seems as close to reality as what Google presented at I/O this year. The next big thing is not one thing but a confluence of many different devices and services coming together to create more context and solutions. Think orchestra instead of instrument.
“In the mobile era, smartphones changed the world. It’s super useful to have a powerful computer everywhere you are. But it’s even more useful when computing is anywhere you need it, always available to help. Now you heard me talk about this idea with Baratunde, that helpful computing can be all around you — ambient computing. Your devices work together with services and AI, so help is anywhere you want it, and it’s fluid. The technology just fades into the background when you don’t need it. So the devices aren’t the center of the system, you are. That’s our vision for ambient computing.”
This comment from Rick Osterloh is not from this year’s I/O but from the 2019 Made by Google event. He might as well have said it in 2022, though.
The localization of Google is really just a step toward Ambient Computing. Better search results for proximity and timeliness are just the beginning.