In 2017, Microsoft launched Seeing AI, an app that harnesses the power of artificial intelligence to help people with visual impairments perceive the world around them in greater detail, by having the device describe what it sees through the camera.
Two years later, the app (free, but for now available only on iOS) has incorporated new features that complement its operation.
The most important of these is that users can now explore the content of photos simply by moving a finger across the screen: Seeing AI will read aloud a description of the objects (or people) they point to.
Knowing what is in the picture just by pointing at it
All the user needs to do now is open an image in the viewer, and the machine learning will do its ‘magic’ as soon as they touch any part of the photo. As Saqib Shaikh, one of the developers responsible for the app, explains:
“This new feature allows users to take advantage of the touch properties of their screens to hear a description of objects within an image and the spatial relationship between them. The app can even describe the physical appearance of people and predict their mood.”
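To make the idea concrete, here is a minimal illustrative sketch (not Microsoft's actual code, whose internals are not public): it assumes an image-recognition model has already produced labeled bounding boxes, and shows how a touch position could be mapped to a spoken description, including a simple spatial relationship between objects.

```python
# Hypothetical sketch of touch-to-explore: map a tap position to the
# description of the detected object under it. The detection format
# (label + bounding box in screen coordinates) is an assumption.

detections = [
    {"label": "a smiling person", "box": (40, 60, 200, 400)},
    {"label": "a dog",            "box": (220, 250, 340, 400)},
]

def describe_at(x, y, detections):
    """Return the label of the object under the touch point, if any."""
    for d in detections:
        x0, y0, x1, y1 = d["box"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return d["label"]
    return "no object here"

def spatial_relation(a, b):
    """Describe the horizontal relationship between two detections."""
    ax = (a["box"][0] + a["box"][2]) / 2  # horizontal center of a
    bx = (b["box"][0] + b["box"][2]) / 2  # horizontal center of b
    side = "left of" if ax < bx else "right of"
    return f'{a["label"]} is to the {side} {b["label"]}'

print(describe_at(100, 200, detections))   # touch lands on the person
print(spatial_relation(detections[0], detections[1]))
```

In a real app, the labels would come from a vision model and the description would be passed to the platform's screen reader (VoiceOver on iOS) rather than printed.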
Thanks to the integrated facial recognition, you can take a picture of your friends (or use the accessibility menu from other applications to apply recognition to images you come across on social networks) and have the app describe who is where, whether they are smiling, or whether a passing dog has gotten into the shot, for example.
By being able to point at specific elements of the image, the user gains a much fuller understanding of it, no longer limited to the generic scene description the app has offered until now. Here is an example of the old functionality:
In addition, Seeing AI has added native iPad support for the first time, aimed at users who cannot access an iPhone or who need to use the app in academic contexts.