The new feature uses machine-learning algorithms that identify songs from a variety of sources, including people singing, whistling, or humming, as well as recordings. Separately, to make it easier to find information about how and where to vote, regardless of your preferred voting method, Google has launched election-related features in Search with information from trusted and authoritative organizations.
The feature, now available in more than 20 languages, is just one of many machine-learning updates announced by the company this week.
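The melody-identification idea described above can be illustrated with a toy sketch (this is not Google's actual method, and the interval-contour representation and catalog here are illustrative assumptions): a hummed tune is reduced to a key-invariant sequence of pitch intervals, which is then compared against a small catalog.

```python
# Toy melody matching, NOT Google's algorithm: represent each tune as a
# sequence of semitone intervals between consecutive notes, so the same
# melody hummed in any key produces the same fingerprint.

def contour(pitches):
    """Convert absolute pitches (MIDI note numbers) to key-invariant intervals."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def distance(a, b):
    """Sum of absolute interval differences (assumes equal-length contours)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def identify(hummed, catalog):
    """Return the catalog entry whose contour is closest to the hummed tune."""
    hummed_contour = contour(hummed)
    return min(catalog, key=lambda song: distance(hummed_contour, contour(catalog[song])))

catalog = {
    "Song A": [60, 62, 64, 65, 67],  # ascending scale fragment
    "Song B": [67, 65, 64, 62, 60],  # descending fragment
}

# Humming the same shape three semitones higher still matches Song A,
# because only the intervals between notes are compared.
print(identify([63, 65, 67, 68, 70], catalog))  # prints "Song A"
```

A production system would of course work from noisy audio rather than clean note numbers, but the key design point survives the simplification: matching on relative pitch movement makes the search robust to the key and register a person happens to hum in.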
You can also use it on your Google smart speaker: "Say 'Hey Google, what's this song?' and then hum the tune," the company added. It will then show you the most likely matches based on the melody. In a separate update, Google is using computer vision and speech recognition to automatically identify key moments in videos.
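The key-moment tagging just described can be sketched as a toy example (again, not Google's pipeline; the cue phrases and transcript format here are hypothetical): scan a timestamped transcript for trigger phrases and emit chapter-like markers a viewer could jump to.

```python
# Toy key-moment tagging, NOT Google's pipeline: match hypothetical cue
# phrases against a timestamped transcript to produce chapter markers.

CUES = {"step", "home run"}  # hypothetical trigger phrases

def key_moments(transcript):
    """transcript: list of (seconds, text) pairs.
    Returns the pairs whose text contains any cue phrase."""
    return [(t, line) for t, line in transcript
            if any(cue in line.lower() for cue in CUES)]

transcript = [
    (0, "Welcome to the cooking show"),
    (42, "Step one: preheat the oven"),
    (95, "While that heats, chop the onions"),
    (180, "Step two: mix the batter"),
]

# Print each detected moment as a clickable-style mm:ss marker.
for t, line in key_moments(transcript):
    print(f"{t // 60}:{t % 60:02d}  {line}")
# prints:
# 0:42  Step one: preheat the oven
# 3:00  Step two: mix the batter
```

The real system infers these moments from video frames and recognized speech rather than from hand-written cues, but the output shape is the same: a list of timestamps you can navigate like chapters in a book.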
"This lets us tag those moments in the video, so you can navigate them like chapters in a book," the company said. "Whether you're looking for that one step in a recipe tutorial, or the game-winning home run in a highlights reel, you can easily find those moments."

Once a song has been identified, users can explore it further with accompanying features, such as listening to the track on various platforms, getting more information on the song and artist, watching music videos, and reading lyrics and analyses.

Datasets that were previously available as part of Open Data Commons can now be accessed through Google Search, which will also surface other relevant data points and context, like comparable stats for other cities.

If you're learning a new language, Lens can translate more than 100 languages, such as Spanish and Arabic, and you can tap to hear words and sentences pronounced out loud. On the shopping side, if you're browsing products online, Lens can find the exact or similar items and suggest ways to style them. Lens can now recognize 15 billion objects, according to Google, up from 1 billion two years ago.