Tuesday, August 20, 2019

Here's how Google made it even better

  • Google has posted a blog about its recent AI and photography advancements – specifically, portrait mode on the Pixel 3.
  • The post looks at how Google has improved its neural network's depth measurements.
  • The result is an improved bokeh effect in portrait mode.

Google has detailed one of the major photography advancements of the Pixel 3 on its AI blog. In yesterday's post, Google discussed how it improved portrait mode between the Pixel 2 and the Pixel 3.

Portrait mode is a popular smartphone shooting mode that blurs the background of a scene while keeping the foreground subject in focus (an effect sometimes called bokeh). The Pixel 3 and the Google Camera app take advantage of neural networks, machine learning, and GPU hardware to improve this effect.
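Once a depth estimate exists, the core of the effect is straightforward: keep foreground pixels sharp and blend in a blurred copy everywhere else. Below is a minimal illustrative sketch in NumPy – not Google's pipeline; the function name, the crude box blur, and the binary foreground mask are all simplifications for demonstration:

```python
import numpy as np

def synthetic_bokeh(image, depth_mask, blur_radius=2):
    """Blend a sharp image with a blurred copy, keeping the foreground sharp.

    image: 2D grayscale array.
    depth_mask: same shape; 1.0 = foreground (keep sharp), 0.0 = background (blur).
    """
    # Box-blur via a sliding-window mean (a stand-in for the nicer
    # disc-shaped blur a real camera pipeline would use).
    padded = np.pad(image, blur_radius, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    k = 2 * blur_radius + 1
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    # Foreground pixels keep the sharp value; background gets the blur.
    return depth_mask * image + (1.0 - depth_mask) * blurred
```

The quality of the result hinges entirely on the depth mask – which is exactly why Google's improvements focus on depth estimation rather than the blur itself.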

In the Pixel 2's portrait mode, the camera captures two versions of the scene at slightly different angles. In these images, the foreground figure – a person, in most portraits – appears to move less than the background imagery (an effect known as parallax). This difference was used as the basis for estimating the depth of the image, and thus which areas should be blurred out.
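Parallax-based depth estimation can be sketched with simple block matching: take a patch from one view, find the horizontal shift that best aligns it with the other view, and treat larger shifts as closer objects. This toy 1D version is illustrative only – the function and the search strategy are simplifications, not Google's method:

```python
import numpy as np

def best_shift(patch, other_view, position, max_shift=5):
    """Find the shift that best aligns `patch` (taken from one view at
    `position`) with the other view. Larger shift = closer object."""
    w = len(patch)
    best, best_err = 0, np.inf
    for s in range(max_shift + 1):
        start = position - s
        if start < 0 or start + w > len(other_view):
            continue
        # Sum of squared differences as the matching cost.
        err = np.sum((other_view[start:start + w] - patch) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best
```

Because the two views on the Pixel come from a single camera's dual-pixel sensor, the baseline between them is tiny – which is why the shifts, and hence the depth information, are so limited, as the article notes next.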

An example of parallax in Google's portrait mode. Google blog

This produced solid results on the Pixel 2, but it wasn't perfect. The two versions of the scene provided only a very small amount of depth information, so problems could occur. Most commonly, the Pixel 2 (and many other phones like it) would fail to accurately separate the foreground from the background.

With the Google Pixel 3 camera, Google included additional depth cues to inform this blur effect for greater accuracy. As well as parallax, Google used sharpness as a depth indicator – more distant objects are less sharp than closer objects – and real-world object identification. For example, the camera can recognize a person's face in a scene and work out how near or far it is based on its pixel count relative to the surrounding objects. Clever.
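The sharpness cue can be illustrated with a classic focus measure: the variance of a Laplacian filter over a patch, which drops as fine detail is blurred away. This is a generic stand-in for whatever measure Google actually uses, purely to show the idea:

```python
import numpy as np

def sharpness(patch):
    """Variance of a discrete Laplacian over a 2D patch: a common focus
    measure. Out-of-focus (typically more distant) regions score lower."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()
```

A patch full of hard edges scores high; a featureless, defocused patch scores near zero – so comparing scores across the frame gives a rough near/far signal to combine with parallax.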

Google then trained its neural network on these new variables to improve its understanding – or rather, its estimation – of image depth.


The Pixel's portrait mode doesn't require a human subject. Google blog

What does it all mean?

The result is better-looking portrait mode shots from the Pixel 3 compared with previous Pixel (and, apparently, many other Android phone) cameras, thanks to more accurate background blurring. And, yes, it means fewer stray hairs lost to the background blur.

There's an interesting kicker to all of this relating to chips. Considerable processing power is required to handle the data needed to create these photos once they're snapped (they're based on full-resolution, multi-megapixel PDAF images); the Pixel 3 handles this fairly well thanks to its combination of TensorFlow Lite and the GPU.

In the future, better processing efficiency and dedicated neural chips will widen not just how quickly these shots are delivered, but which enhancements developers even choose to integrate.

To find out more about the Pixel 3 camera, hit the link, and give us your thoughts in the comments.

Source link