Google Gets Context, Sours Apple

Depending on the day, Apple or Alphabet is the world’s most valuable company as measured by market cap, and together they manage the two dominant computing platforms: iOS/OS X and Android/Chrome OS, respectively. As I write, Alphabet subsidiary Google holds its annual developer conference. Apple’s event starts June 13.

During the opening keynote, Google CEO Sundar Pichai frames the conference and the company’s direction by rightly focusing on two fundamentally forward-looking concepts: voice and context. Google gets it. What Apple will present to its developers, we’ll know next month. But based on product priorities to date, the fruit-logo company is unlikely to match its rival’s commitment to the next user interface.

Contextual Computing
“We live in very, very exciting times,” Pichai tells Google I/O attendees. “It is truly the moment of mobile.” Example: “Over 50 percent of our queries come from mobile phones.” In the United States, one in five queries comes from voice, with context typically the defining characteristic: information sought based on need or location.

I believe it. Last week, sitting in Wendy’s with my 94-year-old father-in-law, an elementary school-age kid behind me spoke “OK, Google” to his phone before several questions. We are on the cusp of Star Trek computing, where information is available at the command of your voice and the machine is a personal assistant that anticipates your needs.

I’ve harped about the importance of context and voice interaction for a long time. In September 2005, I described “Search as the New User Interface”, essentially a command line for the Internet, one that diminishes the relevance of the then-traditional desktop motif. Search’s extension is hands-free interaction based on contextual need.

Apple operates under the misconception that we have entered the post-PC era. There is no such thing. I contend (again) that this is the contextual cloud computing era, where how, when, and where devices and services are used matters more. How people use the tech changes depending on context, but the same underlying cloud services deliver it. Google gets context. Few other companies do.

The cloud is all about context. Content follows users everywhere, independent of device. Your music is available anytime, anywhere, on anything. You watch a movie in one context, say on a smartphone from the man chair at the mall, and resume it in another, on the big-screen TV at home. Content is the same, but context and device change. They say content is king. No, context is.

The surest way to platform success, particularly for platforms on which third parties build and profit, is the killer application. More than one is even better. But I use the term differently than most people do. I don’t refer to the software program but to how the thing is used. The context. The PC offered greater contextual usage than the mainframe. Suddenly a computer could be used inside a small business, home, or school, rather than from fixed terminals connected to a mainframe. Smartphones and tablets provide even greater context, as do panels delivering ads on the street or recipes to your refrigerator. There are more applications, meaning ways the thing is used. But in this computing era, devices don’t stand alone. They are connected, which, via the cloud, is a throwback to the mainframe era, but location-independent.

Life in Sync
In June 2007, writing for the now-defunct Microsoft Watch blog, I started calling out sync as the “killer application” for the connected-device era, warning: “If Google gets synchronization right before Microsoft, it’s game over.” No company does sync better than Google. But sync isn’t just about synchronizing information; it’s about getting your life in sync. That’s the practical benefit the search and information subsidiary put forth today with the new Google Assistant.

Pichai describes the cloud-based tech as an “ongoing, two-way dialog with Google.” He emphasizes the importance of context and of building a personalized Google for each individual. “Every single conversation is different; every single context is different.” He describes an “ambient experience” across devices.

Among those forthcoming gadgets: Google Home, whose capabilities recall Amazon Alexa on Echo; Pichai offhandedly praises the retailer for popularizing the category. Like Echo, Google Home is a contextually relevant, WiFi-connected, cloud-streaming speaker.

As I expressed earlier, synchronization is about more than information; it’s about syncing your life. Google Home works with related platform devices, such as the search company’s Chromecast Audio or Nest, drawing from contextual cloud services. In the demo video, a dad uses his voice to start music streaming in the kitchen and later commands playback in the bedrooms to wake his kids. That’s the connection to Chromecast Audio.
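To make the multi-room idea concrete, here is a rough sketch of what commanding a named speaker looks like in code, using the open-source pychromecast library (not anything Google demoed today). The speaker name and stream URL are placeholders, and the library’s API varies between versions.

```python
import pychromecast

# Discover cast devices on the local network and pick one by its
# friendly name ("Kitchen speaker" is a placeholder).
chromecasts, browser = pychromecast.get_listed_chromecasts(
    friendly_names=["Kitchen speaker"]
)
cast = chromecasts[0]
cast.wait()  # block until the connection is established

# Stream an audio URL to the speaker -- roughly what a voice command
# routed through Google's cloud would trigger on your behalf.
mc = cast.media_controller
mc.play_media("http://example.com/wake-up-song.mp3", "audio/mp3")
mc.block_until_active()

pychromecast.discovery.stop_discovery(browser)
```

Google Home’s trick is doing all of that from a spoken sentence, with the cloud deciding which room, which device, and which content you mean.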

Uh-oh, looks like the preferred prompt changes to “Hey, Google” from “OK, Google.” Is that a dig at Apple, or copycatting? Big A uses “Hey, Siri.”

Near the video’s end, a youngster preparing for school asks Google what the closest star to Earth is. He gets the answer and asks the assistant to show him, and a photo of Alpha Centauri appears on the TV. That’s the Chromecast connection. BTW, Google has sold more than 25 million of the streaming devices.

Machine Learning
Like I said, context is king, and Big G gets it. Apple offers some voice-command contextual capabilities across iOS devices and supporting services but lacks the depth Google brings from search, and from refining the voice and text queries that flow in from apps and extended services.

For example, during today’s keynote, presenters highlighted new search capabilities available in Google Photos, which has more than 200 million monthly active users. The more that subscribers search for, the greater the accuracy becomes over time. It’s machine learning in progress that can be applied to search across the entire Internet, not just the siloed service, for everyone.
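To illustrate that feedback loop, here is a toy sketch, with entirely made-up photos, labels, and numbers, of how confirmed search clicks could strengthen the ties between query terms and machine-generated photo labels:

```python
from collections import defaultdict

# Hypothetical index: photo id -> machine-generated labels.
photos = {
    "IMG_001": {"beach", "dog"},
    "IMG_002": {"mountain", "snow"},
    "IMG_003": {"dog", "park"},
}

# Learned association between query terms and labels. It starts
# with exact matches only and grows as users confirm results.
assoc = defaultdict(float)
for labels in photos.values():
    for label in labels:
        assoc[(label, label)] = 1.0

def search(term):
    """Rank photos by how strongly the term associates with their labels."""
    scores = {pid: sum(assoc[(term, label)] for label in labels)
              for pid, labels in photos.items()}
    return sorted(((score, pid) for pid, score in scores.items() if score > 0),
                  reverse=True)

def click(term, pid):
    """A user picked a result: strengthen the term's tie to its labels."""
    for label in photos[pid]:
        assoc[(term, label)] += 0.1

print(search("puppy"))     # [] -- the index knows nothing about "puppy" yet
click("puppy", "IMG_003")  # one user confirms a dog photo
print(search("puppy"))     # IMG_003 now ranks first
```

Google’s systems are vastly more sophisticated, obviously, but the principle is the same: every query is also a training signal.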

Other examples of context in motion: Android Wear 2.0 auto-detecting fitness activity. Forthcoming Allo messaging and Duo video-calling apps also are contextually aware.

More ambitious context in practice: Android Instant Apps. Google Play downloads function-specific code based on need, rather than requiring the user to install something. For example, if someone sends you a message with a link to a BuzzFeed video, Android Instant Apps plays the clip without installing the full BuzzFeed app, if it isn’t already on the device.
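Android’s actual mechanism splits an app into downloadable feature modules; as a toy analogy only, here is the shape of the idea in Python, with a plain dict of source strings standing in for Google Play:

```python
# Toy analogue of on-demand app delivery. REMOTE_STORE stands in for
# Google Play; nothing here is Google's actual mechanism.
REMOTE_STORE = {
    "video": "def handle(url):\n    return f'playing clip at {url}'\n",
}

_loaded = {}  # handlers cached after first use

def open_link(kind, url):
    """Fetch and load the handler for a link type on first use, then run it."""
    if kind not in _loaded:
        namespace = {}
        exec(REMOTE_STORE[kind], namespace)  # 'download' and load on demand
        _loaded[kind] = namespace["handle"]
    return _loaded[kind](url)

print(open_link("video", "https://example.com/clip"))
# -> playing clip at https://example.com/clip
```

The user-visible payoff is the same: the link just works, and installation becomes an implementation detail.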

Many of the products and services demoed today, including Android N and extended Google Cardboard/VR capabilities, won’t be released until later this year. When isn’t as important as what. Sundar Pichai laid out a clear vision for how Google will fundamentally define the user interface for cloud-connected devices: context, search, and voice.

Photo Credit: Tsahi Levent-Levi

Editor’s Note: A version of this story appears on BetaNews.