
How AI Is Changing Photography



If you're wondering how good your next phone's camera will be, it's worth paying attention to what the manufacturer has to say about AI. Hype and bluster aside, the technology has enabled remarkable advances in photography over the past couple of years, and there's no reason to think progress will slow down.

To be sure, hardware improvements still play a part. But the most significant recent advances in photography have taken place at the software and silicon level rather than the sensor or lens, and that's largely because AI gives cameras a better understanding of what they're looking at.

Google Photos provided a clear demonstration of how powerful a combination AI and photography can be when the app launched in 2015. Before then, the search giant had been using machine learning to categorize images in Google+ for years, but the launch of its Photos app brought consumer-grade AI features most people had never imagined. Users' previously disorganized libraries of thousands of unlabeled photos were transformed into searchable databases overnight.

Suddenly, or so it seemed, Google knew what your cat looked like.


Photo: James Bareham / The Verge

Google relied on its 2013 acquisition of DNNresearch to build a deep neural network trained on data that had been labeled by humans. This is called supervised learning: the process involves training the network on millions of images so that it learns to look for visual clues at the pixel level that help identify the category. Over time, the algorithm gets better and better at recognizing, say, a panda, because it contains the patterns used to correctly identify pandas in the past. It learns where the black fur and white fur tend to sit in relation to each other, and how that differs from, say, the hide of a Holstein cow. With further training, it becomes possible to search for more abstract terms such as "animal" or "breakfast," which may not have a single common visual indicator but are still immediately obvious to humans.
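
To make the supervised learning idea concrete, here's a minimal sketch in PyTorch: a tiny convolutional network trained on labeled images. The network architecture, the three made-up categories, and the random stand-in data are all illustrative assumptions, not Google's actual model or data.

```python
# A minimal sketch of supervised learning for image classification.
# Illustrative only: tiny CNN, random stand-in "labeled" data.
import torch
import torch.nn as nn

# Stand-in for a human-labeled dataset: 256 fake 64x64 RGB images,
# each assigned one of 3 classes (e.g., "panda", "cow", "cat").
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 3, (256,))

# A small convolutional network that learns pixel-level visual cues.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),               # global average pooling
    nn.Flatten(),
    nn.Linear(32, 3),                      # scores for the 3 categories
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass over the labeled data nudges the weights so the network's
# predictions match the human labels a little better.
for epoch in range(5):
    for i in range(0, len(images), 32):    # mini-batches of 32
        batch, target = images[i:i + 32], labels[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(batch), target)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```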

It takes a lot of time and processing power to train an algorithm like this, but once the data centers have done their thing, it can be run on low-powered mobile devices without much trouble. The heavy lifting has already been done, so once your photos are uploaded to the cloud, Google can use its model to analyze and label your entire library. About a year after Google Photos launched, Apple announced a photo search feature that was similarly trained on a neural network, but as part of the company's commitment to privacy, the actual categorization is performed on each device's own processor, without sending the data off the phone. This usually takes a day or two and happens in the background after setup.
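
As a rough sketch of that division of labor, the snippet below uses an off-the-shelf ImageNet classifier (trained elsewhere, on powerful hardware) to tag a folder of photos on a modest machine. The folder name is a placeholder, and this is only an analogy for what Google and Apple do at far greater scale.

```python
# Sketch of "the heavy lifting is already done": tag a photo library
# by running inference with a model someone else already trained.
from pathlib import Path
from PIL import Image
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()  # pretrained, inference only
preprocess = weights.transforms()                # matching preprocessing
classes = weights.meta["categories"]             # human-readable labels

def label_photo(path: Path) -> str:
    """Return the most likely category for one photo."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(img)
    return classes[scores.argmax().item()]

# Build a searchable index: category -> list of matching photos.
index: dict[str, list[Path]] = {}
for photo in Path("my_photos").glob("*.jpg"):    # placeholder folder
    index.setdefault(label_photo(photo), []).append(photo)
```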

Intelligent photo management software is one thing, but AI and machine learning are arguably having a bigger impact on how images are captured in the first place. Yes, lenses keep getting a little faster and sensors a little bigger, but we're already pushing at the limits of physics when it comes to fitting optical systems into slim mobile devices. Even so, phones these days often take better photos in some situations than a lot of dedicated camera gear, at least before post-processing. That's because traditional cameras can't compete in another hardware category that is now just as vital to photography: the systems-on-chip that contain a CPU, an image signal processor, and, increasingly, a neural processing unit (NPU).


This is the hardware harnessed in what's known as computational photography, a broad term that covers everything from the fake depth-of-field effect in phones' portrait modes to the algorithms behind the Google Pixel's incredible image quality. Not all computational photography involves AI, but AI is certainly a major component of it.

Apple uses this technology to drive its dual-camera phones' portrait mode. The iPhone's image signal processor uses machine learning techniques to recognize people with one camera, while the second camera creates a depth map to help isolate the subject and blur the background. The ability to recognize people through machine learning wasn't new when this feature debuted in 2016; it's what photo organization software was already doing. But managing it in real time, at the speed required for a smartphone camera, was a breakthrough.
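
A toy version of the compositing step can make the idea clearer: given a subject mask (which in a real pipeline would come from a segmentation model or a depth map, and is faked here as a rectangle), keep the subject sharp and blur everything else. This is a minimal illustration of the technique, not Apple's actual pipeline.

```python
# Toy portrait mode: blend a sharp subject over a blurred background
# using a per-pixel mask. The mask here is a fake rectangle; a real
# pipeline would derive it from segmentation and/or a depth map.
import numpy as np
from PIL import Image, ImageFilter

def portrait_blur(photo: Image.Image, subject_mask: np.ndarray,
                  radius: int = 12) -> Image.Image:
    """subject_mask: float array in [0, 1], 1.0 where the subject is."""
    sharp = np.asarray(photo, dtype=np.float32)
    blurred = np.asarray(
        photo.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
    alpha = subject_mask[..., None]          # broadcast over RGB channels
    # Per-pixel blend: subject stays sharp, background goes soft.
    out = alpha * sharp + (1.0 - alpha) * blurred
    return Image.fromarray(out.astype(np.uint8))

# Usage (file name is a placeholder):
photo = Image.open("portrait.jpg").convert("RGB")
mask = np.zeros((photo.height, photo.width), dtype=np.float32)
mask[100:400, 150:350] = 1.0                 # fake rectangular "person"
portrait_blur(photo, mask).save("portrait_blurred.jpg")
```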

Google remains the clear leader in this field, with the excellent results produced by all three generations of Pixel phones the most compelling evidence. The default shooting mode, HDR+, uses a complex algorithm that merges multiple underexposed frames into one, and as Google's computational photography lead Marc Levoy has noted to The Verge, machine learning means the system only gets better with time. Google has trained its AI on a huge dataset of labeled photos, as with the Google Photos software, and this further aids the camera. The Pixel 2, in particular, produced such an impressive level of baseline image quality that some of us at The Verge have felt more than comfortable using it for professional work on this site.
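
The core intuition behind merging underexposed frames can be sketched in a few lines: averaging several noisy frames suppresses random sensor noise (roughly by the square root of the frame count), leaving headroom to brighten the result. Google's real pipeline adds robust per-tile alignment and merging; this simplified sketch assumes the frames are already aligned.

```python
# Greatly simplified burst merge in the spirit of HDR+: average
# aligned underexposed frames to cut noise, then brighten the result.
import numpy as np

def merge_burst(frames: list[np.ndarray], gain: float = 4.0) -> np.ndarray:
    """frames: aligned uint8 images of identical shape, underexposed."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    merged = stack.mean(axis=0)   # averaging N frames cuts noise ~sqrt(N)
    merged *= gain                # lift the deliberate underexposure
    return np.clip(merged, 0, 255).astype(np.uint8)

# Simulated burst: one dark "scene" plus per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.integers(0, 60, size=(480, 640, 3)).astype(np.float32)
burst = [np.clip(scene + rng.normal(0, 10, scene.shape), 0, 255)
         .astype(np.uint8) for _ in range(8)]
result = merge_burst(burst)
```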

But Google's advantage has never seemed as stark as it did a couple of months ago with the launch of Night Sight. The new Pixel feature stitches long exposures together and uses a machine learning algorithm to calculate more accurate white balance and colors, with frankly astonishing results. The feature works best on the Pixel 3, because the algorithms were designed with the latest hardware in mind, but Google made it available to all Pixel phones (even the original, which lacks optical image stabilization), and it's a striking advertisement for how software is now more important than camera hardware when it comes to mobile photography.
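
Night Sight's white balancing is learned from data, but the classical heuristic it improves upon, the "gray-world" assumption, shows what "calculating white balance" means in practice: assume the scene averages out to neutral gray and rescale the color channels accordingly. The snippet below is that classical stand-in, not Google's algorithm.

```python
# Gray-world white balance: a classical baseline for the learned
# approach Night Sight uses. Rescale each channel so its mean
# matches the image's overall mean brightness.
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 uint8 RGB image; returns a white-balanced copy."""
    x = img.astype(np.float32)
    channel_means = x.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    # Scale each channel so its mean matches the overall mean.
    x *= channel_means.mean() / channel_means
    return np.clip(x, 0, 255).astype(np.uint8)
```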


That's not to say there's no room left for hardware to make a difference, especially when it's backed by AI. Honor's new View 20, along with parent company Huawei's Nova 4, is the first phone to use the Sony IMX586 image sensor. It's a larger sensor than most competitors', and at 48 megapixels it represents the highest resolution yet seen on any phone. But that still means cramming a lot of tiny pixels into a tiny space, which tends to be problematic for image quality. In tests of the View 20, however, Honor's AI Ultra Clarity mode makes the most of that resolution, descrambling the sensor's unusual color filter to unlock extra detail. The result is huge photos that you can zoom into for days.
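
The trade-off behind those tiny pixels is easy to sketch. Quad Bayer sensors like the IMX586 group same-color pixels in 2x2 blocks so they can be binned into one larger effective pixel in low light, trading resolution for light gathering; the full-resolution "descrambling" Honor's mode performs is a far more involved process that isn't shown here. The dimensions below are illustrative.

```python
# Simplified 2x2 pixel binning: combine each 2x2 block of raw sensor
# values into one larger effective pixel (quarter resolution, better
# light gathering). Real Quad Bayer processing is more involved.
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """raw: HxW sensor data with even H and W; returns (H/2)x(W/2)."""
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2).astype(np.float32)
    return blocks.mean(axis=(1, 3))   # average each 2x2 block

# 48 MP of fake raw data -> 12 MP binned output.
raw = np.random.randint(0, 1024, size=(6000, 8000), dtype=np.uint16)
binned = bin_2x2(raw)                 # shape (3000, 4000)
```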

Image signal processors have been important to phone cameras for a while, but it looks like NPUs will take on a bigger role as computational photography evolves. Huawei was the first company to announce a chip with dedicated AI hardware, the Kirin 970, although Apple's A11 Bionic ended up reaching consumers first. Qualcomm, the world's biggest supplier of Android processors, hasn't yet made machine learning a major focus, but Google has developed its own chip, the Pixel Visual Core, to help with AI-related imaging tasks. Apple's latest A12 Bionic, meanwhile, has an eight-core neural engine that can run tasks in Core ML, Apple's machine learning framework, up to nine times faster than the A11, and for the first time it's directly linked to the image processor. Apple says this gives the camera a better understanding of the focal plane, for example, helping to generate more realistic depth of field.

This kind of hardware will be increasingly important for running machine learning on-device efficiently and quickly, a task with an extremely high ceiling in terms of processing demands. Remember, the algorithms that power Google Photos were trained on huge, powerful computers with beefy GPUs and tensor cores before being set loose on your photo library. Much of that work can be done "in advance," so to speak, but the ability to carry out machine learning calculations on a mobile device in real time remains cutting-edge.

Google has shown some impressive work that could reduce the processing burden, and neural engines are getting faster by the year. But even at this early stage of computational photography, there are real benefits to be found in phone cameras that have been designed around machine learning. In fact, out of all the possibilities and applications thrown up by the recent wave of AI hype, the area with the most practical use today may well be photography. The camera is an essential feature of any phone, and AI is our best shot at improving it.

