At Google I/O 2022, Google presented a series of new features that will be integrated into its key services.
The features will roll out globally within the year and concern, in particular, the search engine, the online translator, Maps and the virtual assistant.
Combined search with text and images
We start with Multisearch, a feature announced a few weeks ago that will let users perform combined searches through Google Lens using both images and text.
It will therefore be possible to obtain precise information by submitting, for example, a photo of an item of clothing: Google will compare it against a series of similar images found online to tell the user which model it is and whether the item can be bought in a nearby shop.
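Conceptually, a combined image-and-text search can be thought of as ranking candidate products by a blend of visual and textual similarity. The sketch below is a toy illustration of that idea, not Google's actual pipeline: the embeddings, the catalog and the `alpha` weight are all invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def multisearch(image_vec, text_vec, catalog, alpha=0.5):
    """Rank catalog items by a blend of image and text similarity.

    image_vec / text_vec: embeddings of the query photo and query text.
    catalog: {name: (image_embedding, text_embedding)}.
    alpha: weight given to the visual signal vs. the textual one.
    """
    scored = []
    for name, (img_emb, txt_emb) in catalog.items():
        score = (alpha * cosine(image_vec, img_emb)
                 + (1 - alpha) * cosine(text_vec, txt_emb))
        scored.append((score, name))
    return [name for score, name in sorted(scored, reverse=True)]

# Toy 2-D embeddings: the query photo looks like a jacket, and the
# accompanying text asks for a "green" variant.
catalog = {
    "green jacket": ((1.0, 0.1), (0.9, 0.2)),
    "blue jacket":  ((1.0, 0.2), (0.1, 0.9)),
    "green scarf":  ((0.1, 1.0), (0.9, 0.1)),
}
print(multisearch((1.0, 0.0), (1.0, 0.0), catalog)[0])  # -> green jacket
```

Only the item that matches both signals at once ends up first, which is the point of combining the two query modes.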
Multisearch will be available globally in English by the end of the year, while other languages will follow in the coming months.
Google is also hard at work on a feature called Scene Exploration, which will provide real-time information on whatever the smartphone camera is framing.
The example cited by the company is a shelf of chocolate bars: with the phone it will be possible to scan the entire shelf, reading the main ingredients of each bar on the fly.
Google Translate, now with 134 languages
Significant work has also gone into the online translator. To date the service has supported around 100 languages, among the most widely spoken in the world, but Google has pointed out that there are over 1,000 languages spoken around the globe.
Precisely for this reason, the company has worked with human translators from all over the world and with its own artificial intelligence system, developing a technique it calls Zero-Shot translation.
It is a machine learning technique in which the model learns shared structures across languages, building a set of common constructs. In this way the system can produce a translation into another language without any parallel text to compare against, relying instead on the data already present on the platform, with the support of AI.
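The real system is a neural model with a learned shared representation; the toy sketch below only illustrates the core idea, namely that a translation path can exist between two languages (here French and German) even when no direct parallel data links them. The word tables and the explicit English pivot are stand-ins invented for the example.

```python
# Parallel data exists only for English<->French and English<->German;
# there is NO French<->German table anywhere in this program.
EN_FR = {"hello": "bonjour", "world": "monde", "cat": "chat"}
EN_DE = {"hello": "hallo", "world": "welt", "cat": "katze"}

# Invert the French table so French words can be mapped back into
# the shared space (here, English plays the role of the shared
# representation a neural model would learn).
FR_EN = {fr: en for en, fr in EN_FR.items()}

def translate_fr_to_de(sentence):
    """Translate French -> German with no French-German parallel data,
    by routing each word through the shared representation."""
    out = []
    for word in sentence.split():
        shared = FR_EN.get(word)             # French -> shared space
        out.append(EN_DE.get(shared, word))  # shared space -> German
    return " ".join(out)

print(translate_fr_to_de("bonjour monde"))  # -> hallo welt
```

A dictionary pivot is of course far cruder than a trained model, but it makes visible why the unpaired direction works at all.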
An impressive technology, especially useful for less common languages with few available resources, which today manages to produce a good-quality translation in 70% of cases.
Google Translate will also support 24 new languages including Assamese, Mizo, Sanskrit, Krio and Konkani.
Google Maps opens up to Immersive View
There is also room for the new Immersive View feature for Google Maps. It is an immersive mode that lets you explore cities and neighborhoods from above, displaying monuments and places of interest and overlaying real-time information on roads and traffic.
The rendered environment, which is particularly realistic, is obtained by processing satellite imagery together with Google's Street View archives, and will be available on most smartphones, even older ones. However, the first cities to benefit from the new mode will be just a handful: Los Angeles, London, New York, San Francisco and Tokyo.
Maps will also become more intuitive thanks to the Live View AR feature, available free to third-party developers. This is because Google has decided to open up its new ARCore Geospatial API.
This will allow the platform to provide even more information to users, who can interact intuitively with the physical world through augmented reality. For example, it can point out free parking spaces in a specific area, available seats in stadiums or theaters, and much more.
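As an illustration of the kind of query such geospatial data enables, here is a minimal, self-contained sketch that finds free parking spots near a user's coordinates using a haversine distance check. It does not use the ARCore Geospatial API itself; the spot list, the radius and the coordinates are invented for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def free_spots_nearby(user_lat, user_lon, spots, radius_m=200):
    """Return the names of free parking spots within radius_m, nearest first.

    spots: list of (name, lat, lon, is_free) tuples.
    """
    hits = []
    for name, lat, lon, is_free in spots:
        d = haversine_m(user_lat, user_lon, lat, lon)
        if is_free and d <= radius_m:
            hits.append((d, name))
    return [name for d, name in sorted(hits)]

spots = [
    ("Spot A", 51.5074, -0.1278, True),   # right next to the user
    ("Spot B", 51.5080, -0.1278, True),   # roughly 67 m north
    ("Spot C", 51.5074, -0.1290, False),  # nearby but occupied
    ("Spot D", 51.5160, -0.1278, True),   # about 950 m away, out of range
]
print(free_spots_nearby(51.5074, -0.1278, spots))  # -> ['Spot A', 'Spot B']
```

An AR overlay would then anchor each returned spot at its real-world coordinates instead of printing a list.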
To call up the Google assistant it will no longer be necessary to say “Ok Google”
Naturally, there was also a section dedicated to Google’s virtual assistant. The company illustrated new ways to interact with the Assistant that bypass activation via the “Ok Google” command.
The first new feature is called Look and Talk and will allow owners of a Nest Hub device to speak directly to the virtual assistant simply by looking at the screen.
Thanks to Face Match, the device recognizes the user’s face and prepares the Assistant to receive voice commands. Google worked through over 100 different signals, based on sound and images, such as proximity, head orientation, gaze direction and lip movement, before arriving at the current result.
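A hotword-free activation decision of this kind can be pictured as gating on identity and then fusing several confidence signals into a single score. The sketch below is purely illustrative: the signal names, weights and threshold are assumptions for the example, not Google's actual model.

```python
def should_activate(signals, face_matched):
    """Decide whether to open the mic without a hotword.

    signals: dict of per-signal confidences in [0, 1], e.g. proximity,
    head_orientation, gaze, lip_movement. The weights and the 0.7
    threshold are made up for illustration.
    """
    weights = {
        "proximity": 0.2,
        "head_orientation": 0.2,
        "gaze": 0.4,          # looking at the screen matters most here
        "lip_movement": 0.2,
    }
    if not face_matched:      # Face Match gates everything else
        return False
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return score >= 0.7

# An enrolled user standing close by and looking straight at the screen.
looking = {"proximity": 0.9, "head_orientation": 0.8,
           "gaze": 0.95, "lip_movement": 0.7}
print(should_activate(looking, face_matched=True))   # -> True
print(should_activate(looking, face_matched=False))  # -> False
```

Treating identity as a hard gate rather than just another weighted signal is what keeps the device from answering a stranger who happens to glance at it.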
In addition, Google is developing a series of quick phrases for the most common daily tasks, such as “turn the lights on/off”, “set a 10 minute timer” or “start the heating”, for which it will be possible to skip the “Ok Google” command. The new features will initially be available to US users who own a Nest Hub Max.
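Conceptually, quick phrases amount to matching an utterance against a small allow-list of commands that are trusted without the hotword. A minimal sketch, with an invented phrase table and action names:

```python
# Hypothetical allow-list: only these exact phrases bypass the hotword;
# anything else would still require "Ok Google".
QUICK_PHRASES = {
    "turn the lights on": "lights_on",
    "turn the lights off": "lights_off",
    "set a 10 minute timer": "timer_10m",
    "start the heating": "heating_on",
}

def match_quick_phrase(utterance):
    """Return the action for a whitelisted quick phrase, else None."""
    return QUICK_PHRASES.get(utterance.strip().lower())

print(match_quick_phrase("Set a 10 minute timer"))  # -> timer_10m
print(match_quick_phrase("what's the weather"))     # -> None
```

Keeping the hotword-free set small and fixed is a deliberate trade-off: it limits accidental activations while still covering the highest-frequency commands.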
Finally, work continues on getting the Google Assistant to understand commands more precisely. The company is working on a number of language models that will allow the Assistant to better grasp the nuances of human language, such as when someone pauses without having finished speaking, and to ignore filler words in sentences.