"Deep Fusion" explained: First look at Apple's most innovative camera feature

The new iPhone 11 family comes with new and improved cameras, but one ground-breaking camera feature will not be available on these iPhones at launch; instead, it will arrive later as a software update.
Apple calls this special feature “Deep Fusion,” and it is a brand-new way of taking pictures in which the Neural Engine inside the Apple A13 chip uses machine learning to assemble the output image.
“Deep Fusion” will arrive with a software update for the iPhone 11 series this fall
The result is a photo with a stunning amount of detail, wide dynamic range, and very low noise. The feature works best in low to medium light.
Phil Schiller, Apple’s head of marketing and resident camera enthusiast, demonstrated the feature with a single teaser picture and explained how it works.
How “Deep Fusion” works:
- it shoots a total of 9 images
- even before you press the shutter button, it has already captured 4 short-exposure images and 4 secondary images
- when you press the shutter button, it takes 1 long-exposure photo
- then, in about 1 second, the Neural Engine analyzes the fused combination of the long and short images, picks the best among them, and goes through all 24 million pixels, one by one, optimizing each for detail and low noise (see the illustrative sketch below)
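Apple has not published how Deep Fusion actually combines its frames, so the snippet below is only a minimal sketch of the general idea of per-pixel exposure fusion: averaging the short frames to preserve detail, then blending with the long exposure to keep noise down. Everything here (the `Frame` type, the `fuse` function, the brightness-based weighting) is a hypothetical illustration, not an Apple API.

```swift
// Illustrative sketch only: Apple has not disclosed Deep Fusion's algorithm.
// `Frame`, `fuse`, and the brightness-based weighting are hypothetical.
struct Frame {
    var pixels: [Double]   // grayscale intensities in [0, 1], row-major
}

/// Fuses several short (sharp but noisy) frames with one long (clean but
/// softer) exposure, deciding pixel by pixel how much of each to keep.
func fuse(shortFrames: [Frame], longFrame: Frame) -> Frame {
    var output = [Double](repeating: 0, count: longFrame.pixels.count)

    for i in output.indices {
        // Average the short frames: random noise cancels out, detail survives.
        let shortAvg = shortFrames.reduce(0.0) { $0 + $1.pixels[i] }
                       / Double(shortFrames.count)

        // Lean on the long exposure in dark regions (where short frames are
        // noisiest) and on the short-frame detail in brighter regions.
        let weight = min(max(longFrame.pixels[i], 0), 1)

        output[i] = weight * shortAvg + (1 - weight) * longFrame.pixels[i]
    }
    return Frame(pixels: output)
}
```

Per Schiller, Apple's version runs this kind of per-pixel analysis across 24 million pixels in about a second, which is the workload he credits to the A13's Neural Engine.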