Marques Brownlee's decade-spanning phone camera test reveals computational photography's evolution
Marques Brownlee’s side-by-side phone tests show how far smartphone cameras have moved beyond optics alone, and why the latest HDR processing so visibly changes the final look.

Why the same-scene test matters
Marques Brownlee’s latest repeat test turns a simple photo challenge into a decade-long lesson in computational photography. By shooting the same scene across generations of Samsung Galaxy phones and then doing the same with Google phones, he created a visual timeline that shows how much of modern phone imaging now happens after the shutter fires, not just through the lens and sensor.
That is what makes the comparison so useful to photographers. It is not a spec-sheet exercise. It shows how current phones push dynamic range, balance highlights and shadows, and reshape the final image through software choices that older devices simply could not make.
A better test than the earlier iPhone version
Brownlee’s earlier iPhone version of the experiment was considered a little tame because the outdoor daytime scene was too easy for even older hardware. This time, he chose more demanding conditions, which is exactly why the results land harder. Instead of giving every phone a straightforward exposure, he looked for scenes that expose the limits of each generation’s processing.
That shift matters because the hardest comparisons are the most revealing. When the scene is too forgiving, the differences between phones blur together. When the lighting gets tricky, the strengths and compromises of each generation become obvious.
Samsung from the original Galaxy S to the Galaxy S26
On the Samsung side, Brownlee posted a sequence that ran from the original Galaxy S to the Galaxy S26. The progression makes one thing very clear: Samsung’s latest phones are producing a striking amount of HDR, while the older devices often hold on to more natural contrast in some frames.
The images also get progressively rougher the further back the lineup goes. That is not just a reminder of aging sensors and weaker image pipelines. It is a visible demonstration of how much the entire camera stack has improved, from capture to processing to tone mapping. The newer phones do more to lift detail out of shadows and keep bright areas usable, but they also push the image toward a more heavily processed look.
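Lifting shadows without blowing out highlights is, at its simplest, a tone-curve operation. A minimal sketch of the idea, using a generic gamma-style curve in Python with NumPy (this is an illustration of the general technique, not Samsung's actual pipeline, and the `strength` parameter is an arbitrary assumption):

```python
import numpy as np

def shadow_lift(x, strength=2.2):
    # A gamma-style tone curve: raising linear values to 1/strength
    # brightens dark pixels far more than it brightens bright ones,
    # so shadows come up while highlights stay roughly in place.
    return np.clip(x, 0.0, 1.0) ** (1.0 / strength)

shadow, highlight = 0.05, 0.9
lifted_shadow = shadow_lift(shadow)        # dark pixel gains several stops
lifted_highlight = shadow_lift(highlight)  # bright pixel barely moves
```

With these toy values, the shadow pixel brightens roughly fivefold while the highlight shifts only slightly, which is the compressed, "HDR look" tradeoff the comparison makes visible.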
What the Google test reveals in backlit light
Brownlee repeated the experiment on Google phones while wearing a black Daft Punk hoodie and standing in front of a window. That setup created exactly the kind of backlit challenge that separates modern computational photography from older mobile imaging. A dark subject in front of a bright window forces the camera to decide what matters most, and today’s phones often choose to save both.
PetaPixel notes that the blue in the sky gradually disappears as the camera systems improve. That is a vivid visual cue for what computational photography is doing under the hood. The phones are combining multiple frames, applying noise reduction, and using machine learning to preserve both highlight and shadow detail, even when the scene would have once blown out the background or buried the subject in darkness.
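The multi-frame merging described above can be sketched in a few lines. This is a toy exposure-fusion example in Python with NumPy, not any phone's real pipeline: each frame's pixels are weighted by how well exposed they are (close to mid-gray), so the short exposure contributes the window and the long exposure contributes the subject. The exposure multipliers and the `sigma` weighting width are illustrative assumptions.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Weight each pixel by its distance from mid-gray (0.5);
    # pixels near clipping (0.0 or 1.0) get very low weight.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(frames):
    # Per-pixel weighted average of the frames: a crude version
    # of exposure fusion, without the alignment, denoising, or
    # learned components a real phone pipeline adds.
    stack = np.stack(frames)                          # (N, H, W)
    weights = well_exposedness(stack)
    weights /= weights.sum(axis=0, keepdims=True)     # normalize per pixel
    return (weights * stack).sum(axis=0)

# Toy backlit scene: a dark subject (0.02) beside a bright window (0.98).
scene = np.array([[0.02, 0.98]])
short = np.clip(scene * 0.8, 0.0, 1.0)   # short exposure keeps the window
long_ = np.clip(scene * 4.0, 0.0, 1.0)   # long exposure lifts the subject
fused = fuse_exposures([short, long_])
```

In the fused result, the subject pixel ends up brighter than in the short exposure while the window pixel stays below clipping, which is exactly the "save both" behavior the backlit test exposes.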
What changed under the hood
The most important takeaway is that smartphone cameras are no longer simple image-capture devices. They are highly processed computational systems, and the final image is shaped as much by software decisions as by optics or sensor size. Brownlee’s comparison makes that easy to see without needing a technical breakdown on screen.
For photographers, that matters because “better” does not always mean more natural. The newer phones deliver more dynamic range and more consistency, but they can also smooth away texture and flatten some shadow depth that older phones, or even older processing, rendered differently. The evolution is real, but so is the tradeoff.
What hobby photographers can learn from the comparison
This is where the video becomes more than nostalgia. It gives you a practical read on the current state of phone photography, especially if you use a smartphone as a second camera or your main camera for everyday work.
- Newer phones are much better at rescuing highlights and shadows in difficult light.
- Older phones can sometimes look more contrasty and less processed, which some viewers may prefer.
- The stronger the HDR, the more the camera is making aesthetic decisions for you.
- Backlit scenes still reveal the biggest differences between generations.
- Software now matters enough that two phones with similar hardware can produce very different-looking files.
That balance between progress and personality is why these comparisons keep resonating. A clean, well-lit scene can make almost any phone look decent. A backlit window, a dark hoodie, and a challenging sky reveal how much the device is really doing.
Why the debate is still alive
Brownlee’s test is also a reminder that the old debate between natural rendering and processed rendering has not gone away. It has just moved into a more advanced phase. Today’s phones can deliver more detail, more usable range, and fewer blown highlights than ever before, but that improvement often comes with a look that feels more engineered.
For anyone who shoots on smartphones, that is the real story behind the specs. The biggest gains are no longer just about sharper sensors or bigger numbers on a chart. They are about how aggressively the phone interprets the scene, and how much of the final photo is now built by software after the exposure is made.
The bigger picture for phone photography
Brownlee’s decade-spanning comparison works because it is easy to understand at a glance and hard to ignore once you notice the pattern. The newer the phone, the more the image feels assembled rather than simply captured. The older the phone, the more its rough edges show through, sometimes with a contrast and texture that newer devices intentionally soften.
That is the clearest lesson here: smartphone camera progress has been enormous, but it has not been neutral. The hardware has improved, the software has become far more capable, and the image itself has become a product of computation as much as photography. For readers who care about how cameras actually render the world, that is the real upgrade story.