This week Matt explains dynamic range – what it is, how it works, and how to play with it
Let’s imagine you take two identical images of a landscape, one with a compact camera and one with a DSLR. When you compare images from the two, one of the things you may notice is the difference in detail between bright and dark areas; while the DSLR may be able to record the intricate tones in darker and lighter parts of the scene, the compact struggles to show detail in both.
Similarly, when taking images indoors, you may notice your camera recording all the details of the room accurately, but rendering details through a window as a blown-out bright highlight.
When we talk about dynamic range, we refer to a camera’s ability to record shadow details and highlights in a scene at the same time. Cameras known to have a wide dynamic range, such as most DSLRs and Compact System Cameras, are able to do both to a greater degree than compacts and the cameras found inside smartphones.
Much of this is down to the type of sensor used and the size of each pixel. If you imagine each pixel as a bucket collecting rainwater, you’ll appreciate that a smaller one will fill faster; this is the upper limit, known as the saturation level. The lower limit is the amount of noise generated at each pixel when there is no light reaching it, also known as the noise floor. The ratio between the two is the dynamic range of the sensor.
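The bucket analogy above can be sketched in a few lines of code. This is purely illustrative: the electron counts are made-up assumptions, not figures from any real sensor, but they show how a bigger "bucket" (full-well capacity) over the same noise floor yields more stops of dynamic range.

```python
# Dynamic range from the "bucket" analogy: the ratio of the saturation
# level (full-well capacity) to the noise floor, expressed in stops
# (powers of two). The electron counts below are illustrative only.
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    """Dynamic range as the base-2 log of saturation over noise floor."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# A large DSLR-style pixel: a big bucket over a small noise floor.
print(round(dynamic_range_stops(60000, 5), 1))  # -> 13.6 stops

# A small compact-camera pixel: the bucket saturates much sooner.
print(round(dynamic_range_stops(6000, 5), 1))   # -> 10.2 stops
```

Halving the pixel's capacity costs exactly one stop of range, which is why sensor and pixel size matter so much here.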
The actual dynamic range in the final image is subject to all subsequent image processing. The process of converting the scene into digital information, the choices the photographer makes with regard to capture settings, and any processing you do later on will all have an effect.
Areas in which details have failed to record accurately – instead showing plain areas of black or white where one might expect to see fine detail – are said to have been clipped in the case of highlights or blocked when referring to shadows. Most cameras allow you to check this upon reviewing images by flashing any areas which have lost detail.
You can also spot this when examining the image’s histogram; details that fail to be contained within the scale at either end show as a line rising right to the top of it. With a Raw file you may be able to regain some of this detail in post-processing, although this depends on the information within the Raw file and the skill of the person editing the image.
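The check a camera performs for its flashing-highlights warning amounts to counting pixels piled up at either end of the histogram. Here's a minimal sketch of that idea using NumPy; the threshold values are assumptions for an 8-bit image, not any camera's actual logic.

```python
# Count pixels at the extremes of an 8-bit histogram: values at 0 are
# blocked shadows, values at 255 are clipped highlights.
import numpy as np

def clipping_report(image, low=0, high=255):
    """Return the fraction of pixels blocked (shadows) and clipped (highlights)."""
    pixels = image.size
    blocked = np.count_nonzero(image <= low) / pixels
    clipped = np.count_nonzero(image >= high) / pixels
    return blocked, clipped

# A toy six-pixel "image" with blocked shadows and blown highlights.
img = np.array([[0, 0, 128], [200, 255, 255]], dtype=np.uint8)
blocked, clipped = clipping_report(img)
print(f"blocked: {blocked:.0%}, clipped: {clipped:.0%}")  # blocked: 33%, clipped: 33%
```

In a real histogram these show up as the tall line rising against either edge of the scale mentioned above.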
As with many similar aspects of a camera’s performance, manufacturers have long attempted to better this through clever image processing.
Almost all cameras today come equipped with some kind of dynamic-range-optimisation option, which attempts to regain details in highlights and shadows that would otherwise be lost.
Using flash can also help in situations where the optimum exposure for one area is vastly different to that of another, as this helps to lift darker areas to a level closer to that of better-illuminated areas.
You may have also heard of High Dynamic Range imaging, commonly known as HDR, which refers to a process whereby multiple exposures are combined into one. This allows you to expose separately for shadows, midtones and highlights and combine all correctly exposed parts into a single image.
You can do this yourself in post-production, although many recent cameras have a setting that will capture and combine two or more images in an instant. This process allows you to achieve a result that’s closer to what you can see with your own eyes, although the results aren’t to everyone’s taste as they can appear quite unnatural.
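To make the HDR process above concrete, here is a heavily simplified sketch of merging two bracketed exposures. It assumes two aligned 8-bit frames and weights each pixel by how well-exposed it is (closest to mid-grey), so shadows come from the brighter frame and highlights from the darker one. This is an illustration of the principle, not any camera's in-built HDR algorithm.

```python
# Blend a dark and a bright exposure of the same scene: each frame's
# pixels are weighted by their distance from mid-grey, so each region
# of the merged image is drawn mainly from the better-exposed frame.
import numpy as np

def blend_exposures(dark, bright):
    """Merge two aligned 8-bit frames by well-exposedness weighting."""
    d = dark.astype(float) / 255.0
    b = bright.astype(float) / 255.0
    # Pixels near mid-grey (0.5) get the highest weight in each frame.
    w_d = 1.0 - np.abs(d - 0.5) * 2.0
    w_b = 1.0 - np.abs(b - 0.5) * 2.0
    total = w_d + w_b + 1e-6  # avoid division by zero where both frames clip
    merged = (w_d * d + w_b * b) / total
    return np.rint(merged * 255).astype(np.uint8)

# Darker frame holds the window detail; brighter frame holds the room.
dark = np.array([[10, 180]], dtype=np.uint8)
bright = np.array([[90, 255]], dtype=np.uint8)
print(blend_exposures(dark, bright))
```

Note the second pixel: the bright frame has clipped to 255, so its weight drops to zero and the merged value comes entirely from the darker exposure, which is exactly the behaviour that lets HDR hold both room and window detail.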