The Pixel 3 experienced a flood of leaks before it launched, but Google still managed to keep a few software features under wraps. One of the most distinctive is Top Shot. If you’re unfamiliar, Top Shot captures up to 90 images across a three-second window spanning the moments before, during, and after you press the shutter button. It then analyzes those images in real time and presents the original shot from the moment you pressed the shutter, plus two recommended alternatives.
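Google hasn’t published its capture pipeline, but the mechanics resemble a rolling buffer that is already recording when the shutter fires. Here’s a minimal Python sketch of that idea; the `ShutterBuffer` class, the 30 fps rate, and the even before/after split are all assumptions for illustration, not Google’s implementation:

```python
from collections import deque

FPS = 30                # assumed capture rate: 90 frames / 3 s = 30 fps
HALF = (3 * FPS) // 2   # assume the window splits evenly around the press

class ShutterBuffer:
    """Toy model of burst capture around a shutter press (hypothetical)."""

    def __init__(self):
        self.pre = deque(maxlen=HALF)   # rolling window of recent frames
        self.post = []                  # frames collected after the press
        self.armed = False

    def on_frame(self, frame):
        if not self.armed:
            self.pre.append(frame)      # keep only the last ~1.5 s
        elif len(self.post) < HALF:
            self.post.append(frame)     # fill the post-shutter half
        # frames arriving after the window fills are simply dropped

    def on_shutter(self):
        self.armed = True               # freeze the pre-shutter window

    def clip(self):
        return list(self.pre) + self.post   # up to 90 frames total
```

The key point is that the pre-shutter frames already exist by the time you press the button, which is how Top Shot can offer moments from “before” the shot.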
Narrowing 90 frames down to a couple of recommendations is a tricky process, and in a post on its AI Blog, Google has broken down how Top Shot pulls it off. When analyzing images, Top Shot relies on signals such as lighting analysis, smile detection, and emotional expressions, covering functional, objective, and subjective qualities of each frame.
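To give a feel for how several signals might fold into a single ranking, here’s a hypothetical scoring sketch in Python. The signal names and weights are invented for illustration; Google’s actual quality models are learned, not a hand-weighted sum:

```python
# Hypothetical per-frame signals, loosely mirroring the categories above:
# functional (lighting), objective (sharpness, eyes open), and
# subjective (smile / expression). Names and weights are illustrative.
WEIGHTS = {
    "exposure":  0.25,   # functional: is the scene well lit?
    "sharpness": 0.25,   # objective: is the subject in focus?
    "eyes_open": 0.20,   # objective: are the subject's eyes open?
    "smile":     0.30,   # subjective: emotional expression
}

def frame_score(signals: dict[str, float]) -> float:
    """Combine normalized [0, 1] signals into one quality score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def recommend(frames: list[dict[str, float]], k: int = 2) -> list[int]:
    """Return the indices of the k highest-scoring frames."""
    ranked = sorted(range(len(frames)),
                    key=lambda i: frame_score(frames[i]),
                    reverse=True)
    return ranked[:k]
```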
For this analysis to be useful, Google needed high accuracy. To get there, it collected data from hundreds of volunteers who ranked which images (out of 90) they liked best, along with feedback on why they preferred them. That data was used to refine the quality metrics in Top Shot so that it consistently picks the images people actually perceive as best.
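One plausible way to use that preference data is pairwise learning to rank: whenever a volunteer preferred frame A over frame B, nudge the model so A outscores B. The sketch below fits weights like those in the previous snippet with a Bradley-Terry-style logistic loss; this is an assumed approach, not Google’s published training code:

```python
import math

def train_weights(weights, pairs, signals, lr=0.1, epochs=50):
    """Fit signal weights so preferred frames outscore rejected ones.

    weights: dict of signal name -> weight (hypothetical, as above)
    pairs:   list of (preferred_idx, other_idx) from volunteer rankings
    signals: per-frame feature dicts, values normalized to [0, 1]
    """
    names = list(weights)
    for _ in range(epochs):
        for win, lose in pairs:
            # score margin between the preferred frame and the other one
            margin = sum(weights[n] * (signals[win].get(n, 0.0)
                                       - signals[lose].get(n, 0.0))
                         for n in names)
            # gradient of the logistic (Bradley-Terry) loss on that margin
            grad = -1.0 / (1.0 + math.exp(margin))
            for n in names:
                diff = signals[win].get(n, 0.0) - signals[lose].get(n, 0.0)
                weights[n] -= lr * grad * diff
    return weights
```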
Processing these photos takes a lot of work, and Google credits the Pixel Visual Core as the powerhouse behind it. All frames are analyzed on-device and in real time, which keeps things both private and fast.
There are plenty more technical details to the process. If you feel like nerding out, follow the source link below to learn about Top Shot straight from Google.
Source: Androidandme