How iPhone Could Beat Google Pixel 2 At Its Own Game

Apple has been busy developing a radical new camera capable of delivering depth-sensing capabilities with a single lens. In a patent granted earlier this month entitled ‘Image Sensor With In-Pixel Depth Sensing’, Apple has described a new camera device able to achieve portrait-mode-style trickery with only a single lens – much like the one built into Google’s much-lauded Pixel 2.

However, Apple’s patent goes beyond merely creating artificial bokeh. The document describes multiple technologies designed to enhance gesture recognition, 3D applications and autofocus, improving both accuracy and speed.

Image: USPTO. A simplified cross-sectional view of an example asymmetrical photodetector pair.

One usage example describes a camera sensor capable of operating in three distinct modes: a charge summing mode, a high dynamic range mode and a depth-of-field mode. These would allow the camera to switch configurations as required: charge summing for better images in low light, high dynamic range for bright sunlight, and depth sensing for portrait-style shots.
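The patent does not spell out how a camera would decide which mode to use, but the idea can be sketched as a simple selection policy. The mode names come from the patent; the lux thresholds and the `select_mode` function are purely illustrative assumptions.

```python
from enum import Enum, auto

class SensorMode(Enum):
    """The three readout modes described in the patent."""
    CHARGE_SUMMING = auto()      # low light: sum a pixel pair's charge for sensitivity
    HIGH_DYNAMIC_RANGE = auto()  # bright scenes: expose the pair members differently
    DEPTH_OF_FIELD = auto()      # portrait: read the pair separately for disparity

def select_mode(scene_lux: float, portrait_requested: bool) -> SensorMode:
    """Pick a sensor mode from scene brightness and user intent.

    The 50-lux threshold is a hypothetical value for illustration;
    the patent does not specify switching criteria.
    """
    if portrait_requested:
        return SensorMode.DEPTH_OF_FIELD
    if scene_lux < 50:  # roughly a dim indoor scene
        return SensorMode.CHARGE_SUMMING
    return SensorMode.HIGH_DYNAMIC_RANGE
```

In practice such a decision would likely also weigh subject distance and motion, but the three-way split captures the trade-off the patent describes.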

In each case, asymmetrical pairs of pixels are used, in which one member of each pair is filtered or otherwise treated differently from the other so as to distinguish between light coming from the left and right sides of the image (or from the top and bottom, depending on the orientation of the pixels).

This enables the camera to create slightly different ‘left’ and ‘right’ versions of the image, from which some degree of depth can be calculated.

This small difference alone isn’t enough to generate a convincing illusion of depth, but it does provide enough information for software to calculate a depth map, which can then be used to simulate realistic depth effects algorithmically, much as the Pixel 2 does.
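The core of that calculation is stereo matching: for each pixel, find how far the ‘left’ sub-image has to be shifted to line up with the ‘right’ one, with larger shifts indicating closer subjects. A minimal sketch of this idea, assuming 1-D scanlines of brightness values and a simple sum-of-absolute-differences match (the function name and parameters are illustrative, not from the patent):

```python
def scanline_disparity(left, right, max_shift=4, window=2):
    """Estimate per-pixel horizontal disparity between two scanlines.

    For each pixel in `left`, find the shift (0..max_shift) that best
    matches `right`, by minimising sum-of-absolute-differences over a
    small window. Larger disparity implies a closer subject.
    """
    n = len(left)
    disparities = []
    for x in range(n):
        best_shift, best_cost = 0, float("inf")
        for s in range(max_shift + 1):
            cost = 0
            for dx in range(-window, window + 1):
                xl = min(max(x + dx, 0), n - 1)      # clamp to image bounds
                xr = min(max(x + dx - s, 0), n - 1)
                cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_shift, best_cost = s, cost
        disparities.append(best_shift)
    return disparities
```

A real implementation would work in 2-D, reject low-texture regions and smooth the result, but the principle, disparity between the two sub-images as a proxy for depth, is the same one a dual-camera system exploits across a much wider baseline.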

Image: Google. The Google Pixel 2 XL in ‘Just Black’.

The patent describes different methods of differentiating the pixel pairs including the use of light shields, colored filters, and multiple photodetectors installed within individual pixels.

Also described is a method of focusing a lens based on the measured difference in output between such asymmetrical pixel pairs. This shows that Apple’s invention can be used not only to create depth effects but also to improve focus.
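This is the principle behind phase-detection autofocus: when the image is out of focus, the ‘left’ and ‘right’ sub-images are displaced relative to one another, and the sign and size of that displacement tell the lens which way to move and how far. A minimal sketch, assuming 1-D sub-images and a hypothetical `read_pair` callback that captures them at a given lens position (none of these names come from the patent):

```python
def phase_offset(left, right, max_shift=3):
    """Signed shift (in pixels) that best aligns the two sub-images.

    Compared over a central region so boundary pixels don't dominate.
    A zero offset means the image is in focus; the sign indicates
    which direction the lens should move.
    """
    n = len(left)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = sum(abs(left[i] - right[i + s])
                   for i in range(max_shift, n - max_shift))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

def autofocus(lens_position, read_pair, gain=0.5, max_steps=20):
    """Hypothetical closed-loop focus: nudge the lens in proportion to
    the measured phase offset until the sub-images align."""
    for _ in range(max_steps):
        left, right = read_pair(lens_position)
        offset = phase_offset(left, right)
        if offset == 0:
            break
        lens_position += gain * offset
    return lens_position
```

Because the offset directly encodes both direction and magnitude, this converges in a handful of steps, which is why phase detection is faster than contrast-based autofocus, where the lens must hunt back and forth to find peak sharpness.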

While Apple is already heavily invested in multiple-camera solutions in its flagship iPhones, adding such capabilities to a single-lens camera would pave the way for depth-based functions like portrait mode on smaller and perhaps less costly devices. It could also improve front-facing ‘selfie’ cameras without the need for a second lens.

