Google Lens is smart enough to identify flower species
A new Assistant feature in Google Lens can identify what you’re looking at and suggest actions.
What’s Google got in store for us in 2017 and beyond? We tuned in to its Google I/O developer conference on its 10th anniversary to bring you the lowdown. Unlike the hardware-centric announcements of last October, I/O focuses mostly on software and how to get the most out of those devices.
In the beginning there was the recap — essentially AI and machine learning everywhere — and the numbers. No surprise: there are a lot of Android devices out there, and we spend a lot of time and bandwidth using Google services.
And in an offhand comment, Google announced that Gmail is getting the Smart Reply feature that’s been available in Inbox and other services for a while.
The company announced several new efforts. Google.ai is a spinoff division to encompass learning systems, research tools and applied AI to inform all of its work, including an AI that can build more AIs. Google Lens is a new recognition engine that enables intelligent mixed reality — performing text and object recognition and feeding the results into other apps to act upon, such as pointing the camera at your router's serial number to automatically pull up related links.
And it finally announced Google Jobs, its platform to bring its contextual intelligence power to making it easier to find the jobs you’re looking for.
Google released the developer preview of Android O in March, but a lot can change in a few months. Today we saw the official version of Android O, along with its new Android Go, a slimmed-down, less-demanding version of the operating system to power devices in emerging markets.
We did get demos of picture-in-picture, Notification Dots, smart copy-and-paste, and other features to streamline your phone use and improve speed and security.
Developers can sign up for the beta today.
5 new features on Android O
Google’s Android O has several new features that aim to give you a more fluid phone experience, including picture in picture and faster copy and paste.
Google Home, its smart speaker for Google Assistant, gets tons of new features. It will now have “proactive assistance,” otherwise known as push notifications, hands-free calling (outgoing only, to start), Spotify integration (as well as SoundCloud and Deezer) and Bluetooth support. It can launch HBO Now as well.
Visual Responses will send information to a relevant device for display, such as directions to your phone, or a calendar to your Chromecast for display on your TV; it’s also a way to let you interact with streaming services without having to talk to your TV.
The question remains, though: is what we’ve seen today for Google Home enough to close the gap with the smart-home-hub leader, Amazon Echo? It may get a leg up on that if Android TV gains more traction, since Google Assistant voice control is coming to that platform.
Google’s competitor to Amazon Echo is just a year old; even younger if you start counting from when the first hardware product, Google Home, first became available. And it looks like the best smart-home-assistant plan at the moment is to speak a lot of different tech dialects: to that end, Google Assistant recently added support for Whirlpool and GE appliances.
Search and Google Assistant also get a boost from the company’s machine learning progress. Now you can type into Assistant on your phone and it will be able to converse with you about what you show it through Google Lens’ eye (the camera), such as helping you with translations.
Assistant is now available on iOS, and Google released a developer’s kit to bring it to a whole bunch of other devices. Actions on Google will allow developers to deliver the ability to perform transactions from request to receipt.
Google Assistant will take your lunch order
Google Assistant now lets you make and pay for transactions just by talking to your phone.
Google wants to push you to share more, adding Suggested Sharing to Google Photos to remind you to share and recommend who to share with, plus a feed of shared images. And it works on iOS. Shared Libraries automatically saves photos from a group of people, and you can set intelligent rules based on content for adding to your instance.
And now it will be able to automatically select photos to create photo books.
Google Lens is being integrated into Photos, for example, to help identify places and bring up information about them.
VR and AR
Following its Daydream View phone-based headset from last year, which has garnered new phone partners, Google announced an unnamed Daydream-compatible headset that doesn’t require a phone — a standalone wireless VR headset with built-in positional tracking. We’ll see devices later this year.
It also mentioned its visual positioning service, which uses Project Tango technology to pinpoint your location indoors more precisely than GPS can.
The ability to stream live and prerecorded YouTube 360 videos will be coming soon to the YouTube app on your smart TV. Previously announced Super Chat, which lets you throw money at the person running a livestream, will now let you make stuff happen in real life.
Not everything we saw or heard about today will become a success, though. If you need proof, take a look at these past Google I/O flops.
But that’s just the 30,000-foot view of what the tech behemoth showed us today. For a 100-foot view, read our interview with some of Google’s decisionmakers. And, of course, you’ll find our 3-foot views from our people on the ground in Mountain View as we continue covering the three-day Google I/O conference.