The Pixel 2 arguably offers the best camera experience on Android, with Google achieving in software what other manufacturers need dedicated hardware to do.
One of these features is the Pixel 2’s Portrait Mode, which identifies a subject and then blurs the surrounding background to draw focus to that subject. Google achieves this with semantic image segmentation, which essentially analyzes each pixel in a picture and categorizes it as, for example, a person or sky. By assigning a label to each pixel, the software knows exactly what the subject is and therefore which pixels to blur out.
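The segmentation mask is what makes the selective blur possible: once every pixel is labelled, blurring the background is just a masked operation. As a rough illustration only (not Google's actual pipeline), the sketch below applies a naive box blur to background pixels while leaving subject pixels untouched; the `portrait_blur` function, the mask convention, and the tiny toy image are all invented for this example:

```python
import numpy as np

def portrait_blur(image, mask, k=3):
    """Keep pixels where mask == 1 (subject) sharp; replace the rest
    with a simple k x k mean-filtered (blurred) version of the image."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    # Naive box blur: sum every k x k neighborhood, then average.
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    # Per-pixel choice driven by the segmentation mask.
    return np.where(mask == 1, image, blurred)

# A 3x3 toy "image" with a bright subject pixel in the centre.
image = np.array([[10, 10, 10],
                  [10, 100, 10],
                  [10, 10, 10]])
mask = np.array([[0, 0, 0],
                 [0, 1, 0],
                 [0, 0, 0]])  # 1 = subject, 0 = background
out = portrait_blur(image, mask)
```

A real implementation would use a proper lens-style blur and a mask predicted by a segmentation model, but the principle is the same: the mask decides, pixel by pixel, what stays sharp.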
Portrait Mode has been exclusive to the Pixel 2, but Google has now open-sourced the technology behind it. This means developers can use the same technology in their own implementations: not only can Portrait Mode be brought to other devices and apps, developers can also take it a step further.
Google describes the release as follows: "This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture [2, 3] for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks."
The move to open-source this technology is great news for Android users who either don’t have a Pixel 2 or prefer other manufacturers. It means that developers can freely enhance their own apps with the technology that helps the Pixel 2 deliver such good results.
Does this lessen the appeal of the Pixel 2 now that the technology could potentially be used in other devices? Let us know in the comments below.