Friday, April 19

Google revolutionizes search by applying more artificial intelligence


At its international Search On event, Google has shown how advances in artificial intelligence are helping to transform its information products, creating search experiences more in tune with the way the human mind works and as multidimensional as people themselves. At the event, the company presented three new features for finding exactly what we are looking for by combining images, sounds, text and voice, as humans do naturally.

A more natural visual search, with multisearch, a new way of searching using images and text simultaneously.

Translating the environment. Using advances in artificial intelligence, Google has gone from translating text to translating images. Today this feature is already used more than a billion times a month to translate text in images into more than a hundred languages.

Using immersive vision to explore the world. Through advances in computer vision and predictive models, Google has reinvented what we understand by a "map". Classic two-dimensional maps will evolve into a multidimensional view of the real world, allowing you to experience a place as if you were there.

"We have been working on our mission for more than two decades to organize the world's information and make it accessible and useful to everyone. At first, it was text search, but over time, we have created more natural and intuitive ways to find information. For example, now you can search what you see with the camera or ask questions out loud," the company reported during the event.

From Google's perspective, we glimpse a world in which you can find exactly what you are looking for by combining images, sounds, text and voice, as humans do naturally.


Visual search

Google uses the camera as a tool, the keyboard of the future, for accessing information and better understanding the environment. Lens, born in 2017, allows us to search what we see using the camera or an image. Today, Lens is used to answer eight billion questions every month.

Visual search becomes more natural with multisearch, a new way to search using images and text simultaneously. A few months ago, a beta version of multisearch was rolled out in the United States and, at Search On, Google announced that it will be available in more than seventy languages in the coming months. Multisearch "near me" goes a step further: it lets you take a photo of something unfamiliar, such as a plate of food or a plant, and find it somewhere nearby, such as a restaurant or a garden center. This fall, the tool will launch in English in the United States.

Translating the world around us

One of the greatest strengths of visual perception is its ability to break down language barriers. Through artificial intelligence, Google has gone from translating text to translating images. This feature is already used more than a billion times a month to translate text in images into more than a hundred languages. But often it is the combination of the words and their context (the images in which the text appears) that makes up the meaning. Today, translated text is already being blended with those contextual images, thanks to a machine learning technique called generative adversarial networks (GANs). If, for example, you point the camera at a magazine in another language, you will see the translated text superimposed on the accompanying images on the screen.
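The pipeline described above (detect text in the image, translate it, then composite the translation back over the original region) can be sketched conceptually. This is a toy illustration, not Google's implementation: the lookup-table "translator" and dictionary-based "image" are placeholders for the OCR models, neural translation, and GAN-based background inpainting that a real system would use.

```python
# Conceptual sketch of image translation: detect -> translate -> composite.
# All names and data structures here are illustrative placeholders.

def detect_text(image):
    # Placeholder OCR: return the text regions already annotated on the image.
    return image["text_regions"]

def translate(text, target_lang):
    # Placeholder translation: a tiny lookup table stands in for a neural model.
    table = {("hola", "en"): "hello", ("mundo", "en"): "world"}
    return table.get((text, target_lang), text)

def composite(image, regions):
    # Overlay each translated string where the original text appeared;
    # a GAN would additionally repaint the background behind the text.
    for region in regions:
        region["text"] = translate(region["text"], image["target_lang"])
    return image

magazine = {
    "target_lang": "en",
    "text_regions": [{"box": (0, 0, 40, 10), "text": "hola"},
                     {"box": (0, 20, 40, 30), "text": "mundo"}],
}
result = composite(magazine, detect_text(magazine))
print([r["text"] for r in result["text_regions"]])  # ['hello', 'world']
```

The separation into three steps mirrors the article's description: the context (bounding boxes and surrounding imagery) travels alongside the words so the translation can be placed back where the original text was.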


Immersive vision

Thanks to advances in computer vision and predictive models, Google is reinventing maps. Classic two-dimensional maps will evolve into a multidimensional view that lets you experience a place in a personalized way.


Just as real-time traffic in navigation mode changed Google Maps, making it more useful, another significant advance has come with the immersive view in Google Maps, which adds information such as weather conditions or how crowded a certain place is. With this experience it is possible to get an idea of what a place is like before even setting foot in it, and to decide where you want to go and when.

By fusing an advanced representation of the world with predictive models, it offers insight into what a place will look like tomorrow, next week, or even a month from now. To date, the first version of this feature covers aerial images of two hundred and fifty emblematic landmarks. In the coming months, the immersive view will arrive in five major cities.
