Google Lens to Search for Combined Words and Images

Update: 2021-09-30 12:45 IST


Google is updating its Google Lens visual search tool with new AI-powered language features. The update will let users refine visual searches with text. For example, if you take a photo of a cashmere shirt to find similar items online using Google Lens, you can add the query "socks with this pattern" to specify the item of clothing you are looking for.


Additionally, Google is rolling out a new "Lens mode" option in its Google app for iOS, which lets users run a visual search on any image they come across while browsing the web. It will be available "soon" but will be limited to the US. Google is also bringing Google Lens to the desktop in the Chrome browser, letting users select an image or video while browsing to get visual search results without leaving their tab. That feature will be available globally "soon".

These updates are part of Google's latest push to improve its search tools using AI language understanding. The Lens updates are powered by a machine learning model the company introduced at I/O earlier this year called MUM (Multitask Unified Model). In addition to these new features, Google is also bringing new AI-powered tools to its web and mobile search.

The changes to Google Lens show that the company has not lost interest in the feature, which has always shown promise but has often felt more like a novelty than a practical tool. Machine learning techniques have made basic image and object recognition relatively easy to ship, but as today's updates show, getting useful results still requires a bit of finesse from users. Still, momentum may be building: Snap recently updated its own Scan feature, which works much like Google Lens.

Google wants these Lens updates to turn its all-purpose visual scanning AI into a more useful tool. It gives the example of someone trying to fix their bike who doesn't know what the rear wheel mechanism is called. They take a photo with Lens, add the search text "how to fix this", and Google returns results identifying the mechanism as a derailleur.

As always with these demos, the examples Google offers look simple and useful. But we'll have to test the updated Lens for ourselves to see whether AI language understanding really makes visual search more than a parlour trick.

