14-bit Raw Versus 12-bit Raw: Further Analysis and Comparison
Both Nikon and Canon now have cameras that can shoot in 14-bit raw mode rather than the traditional 12-bit raw. Last week I wrote at length about whether the larger 14-bit files are actually any better than 12-bit raw, looking mainly at resolution as it relates to shadow and highlight recovery. This week I want to continue with the topic in order to look at differences in gradient rendering, and also to respond to and clarify a few points about my initial article.
First off, some clarification.
No, the 12-bit shadow image was not out of focus, nor was it blurred by camera or subject motion. Both can affect resolution, but clearly they are not the only things that can. All images were shot on a tripod with a cable release, with the lens pre-focused. The tripod and camera never moved, and the camera was untouched, with the change between 12-bit and 14-bit made via a USB connection to a laptop running Nikon's Camera Control Pro 2 software. If one image were out of focus or blurred, they all would have been. And they aren't.
What led some to conclude otherwise is simply that the 12-bit crop looks so different from the matching 14-bit image. But that was my point in posting these particular crops in the first place. There's no way I could have posted the entire test chart at the scale of these crops. My server's bandwidth would object even if your internet connection wouldn't. I had hoped to make it clear that the images were in focus by choosing a section with a frame of reference in the form of the dark bar across the top. Also, while still somewhat fuzzy looking, if you look closely the left-hand end of the image does seem sharper than the right-hand end, where the vertical bars are closer together. To give an even greater sense of perspective, though, here's a wider crop covering the underexposed section from last week plus more to the left, where the 12-bit version still looks much like the 14-bit version. What you see is the 14-bit crop, but if you hold your mouse over the image you should see the 12-bit version. They're pretty close to the same on the left, but increasingly different as you move toward the right. It's the rendering of finer details that suffers in underexposed 12-bit images, not so much coarser ones.
Section of test chart comparing 14-bit to 12-bit detail
(Move your mouse cursor over the image to toggle between 14-bit and 12-bit versions)
Other readers have asked whether the increase in shadow detail recovery might allow one to intentionally underexpose in order to gain faster shutter speeds, knowing that you can later post process to restore the detail latent in the shadows. The reasoning goes something like this: the number of values available to accurately record detail in any given stop in 14-bit is the same as is available two stops up in 12-bit raw. For example, as detailed in the chart in last week's article, the fifth stop down in 14-bit mode is represented by 512 discrete values out of the available range from black to white. In 12-bit mode you only get 128 values for this same stop, but you do get 512 values two stops up from that, in what would be the third stop from the top. Thus, you should get the same detail in the fifth stop of a 14-bit image that you normally would in the third stop of a 12-bit image, since each provides 512 discrete values to render that detail.
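The stop-by-stop arithmetic above is easy to check for yourself. Here's a minimal sketch, assuming an idealized linear raw encoding in which each stop down from saturation gets half the remaining values (real sensors add noise and other complications):

```python
# Sketch: discrete values available per stop in an idealized linear raw file.
# Each stop down from the top gets half the values of the stop above it.

def values_per_stop(bit_depth, stops=6):
    """Return the number of discrete values in each stop, top stop first."""
    total = 2 ** bit_depth
    return [total // (2 ** s) for s in range(1, stops + 1)]

print(values_per_stop(14))  # [8192, 4096, 2048, 1024, 512, 256]
print(values_per_stop(12))  # [2048, 1024, 512, 256, 128, 64]
```

Note how the fifth entry of the 14-bit list (512) matches the third entry of the 12-bit list, which is exactly the equivalence the reasoning above relies on.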
While this does hold true in theory, it doesn't really work that way in practice due to the effects of what is known as the signal-to-noise ratio. Boosting the exposure two stops in post processing would magnify not only the information content of the image (the signal), but also any noise that was present. Noise comes from many sources, and while it can be controlled it cannot be eliminated. You are much better off shooting at the correct exposure than trying to reduce noise later on.
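To make the signal-to-noise point concrete, here's a tiny sketch with purely illustrative numbers. A two-stop push in post multiplies the signal by four, but it multiplies the noise by four as well, so the ratio between them never improves:

```python
# Sketch: a two-stop exposure push in post scales signal and noise alike.
# All numbers are hypothetical, chosen only to illustrate the ratio.

signal = 100.0   # hypothetical pixel signal from an underexposed shot
noise = 5.0      # hypothetical noise floor

boost = 2 ** 2   # a two-stop push is a 4x multiplication

boosted_signal = signal * boost   # 400.0
boosted_noise = noise * boost     # 20.0

# The signal-to-noise ratio is unchanged; the noise is simply
# four times larger in absolute terms, and so four times as visible.
assert signal / noise == boosted_signal / boosted_noise
```

This is why underexposing and pushing later can never substitute for correct exposure: the extra bit depth gives the push more values to work with, but the noise comes along for the ride.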
One thing I didn't look at last week was how well gradients are rendered in 12-bit versus 14-bit raw. Not only should more bits help with accurately rendering detail, they also should help to accurately record each and every color even in areas lacking any detail. Indeed, vast blue skies and other similar featureless areas can often reveal problems with an image not apparent when detail is present to mask them. We expect gradients to be smooth, yet sometimes digital noise can make them appear blotchy or make the transitions of color appear abrupt or banded. The more cleanly and accurately each data point in an image can be recorded, the smoother one should expect gradients to be rendered.
Think for a minute about what a bad Xerox copy or fax transmission of an image looks like. Or think back to the days when Microsoft Windows rendered in only 16 colors, or the Mac OS was still black and white. You just can't make a smooth gradient under such conditions, since there simply aren't enough discrete values to give the illusion of continuous tone.
Even 12-bit raw images are more than capable of rendering smooth gradients in the brighter areas, but as we go down in stops there are fewer and fewer values available to record data, and thus gradients will naturally be rendered less accurately. Or they would be if it weren't for the fact that our eyes themselves become less sensitive to differences in shadow areas too, a phenomenon that should offset this potential raw problem to some degree. But if we later attempt to brighten a gradient in a shadow area in post processing, we may find the lack of color accuracy more evident. Again, this is a signal-to-noise problem. We have a weaker signal, and even if the noise is unchanged, both will get amplified when we increase exposure in post processing.
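You can simulate the effect of this shrinking value count on a gradient directly. The sketch below quantizes an ideally smooth ramp at the precision available in the fifth stop down of each mode (512 values for 14-bit, 128 for 12-bit, per the stop chart above); the coarser quantization leaves far fewer distinct steps, which is what shows up as banding once the shadow is brightened. The numbers are illustrative; real raw data also involves noise and demosaicing.

```python
# Sketch: quantize a smooth ramp at the per-stop precision of each raw
# mode's fifth stop. Fewer distinct steps after quantization means more
# visible banding when the shadow area is later brightened.

def quantize(value, levels):
    """Round a 0..1 value to the nearest of `levels` discrete steps."""
    return round(value * (levels - 1)) / (levels - 1)

gradient = [i / 999 for i in range(1000)]  # an ideally smooth ramp

# Count the distinct output values that survive quantization:
fine = len({quantize(v, 512) for v in gradient})    # 14-bit, 5th stop
coarse = len({quantize(v, 128) for v in gradient})  # 12-bit, 5th stop

print(fine, coarse)  # the 12-bit shadow stop keeps far fewer steps
```

Brightening in post stretches those steps apart without creating new ones, so the 128-step version bands where the 512-step version still looks continuous.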
To test the effect of increased raw bit depth on gradient rendering, I shot a number of images of an artificially created gradient in both 12-bit and 14-bit. I wanted the gradient to be as smooth as possible, so that any errors observed later must have been caused by digital artifacts rather than being inherent in the test target itself. I looked at a number of ways to accomplish this and ended up creating a gradient image in Photoshop and displaying it on a monitor on one side of a darkened room. On the other side sat the camera on a tripod, in a setup similar to what I used last week. This week, though, the lens was intentionally de-focused to eliminate any possible problems with the source target. De-focusing does visually change the way the gradient renders by making some colors appear to widen or bloom while others contract, but when examining individual color transitions it should be completely smooth.
Much as with last week, after closely examining a number of images I found little if any difference in properly exposed images when comparing 12-bit and 14-bit. When boosting the exposure of underexposed images, though, I again could start to see some improvements in the 14-bit versions. But rather than needing to be as grossly underexposed as when I compared shadow detail, gradient rendering started to suffer at even modest underexposure. The effect is more subtle here though, becoming more a matter of aesthetics than anything else. Some people are more critical when viewing images. Some people also have better displays that are more able to show differences. But whereas the diminishing appearance of fine detail in 12-bit is readily apparent, the differences in gradient rendering are harder to show definitively. Here are several samples that do the best I can, though. As with the focus image discussed above, each gradient image below shows a crop taken from the 14-bit capture. Moving your mouse over each will change it to show the 12-bit version. Each was taken from a 300% enlargement of a gradient image two stops underexposed that was then post processed to increase the exposure and bump up the contrast so that imperfections would be more apparent.
Gradient test from 14-bit and 12-bit capture -
not much difference at this scale
Gradient comparison 1 at 300% crop -
a good example of many of the transitions
Gradient comparison 2 at 300% crop -
cyan seemed to be more of a problem than other colors in both modes
Gradient comparison 3 at 300% crop -
yes, these are subtle, aren't they?
(Move your mouse cursor over each image to toggle between 14-bit and 12-bit versions)
Differences are indeed subtle. If you disagree with my conclusion that they are real enough to further warrant shooting in 14-bit raw, I invite you to do your own testing. Back in the days of film, some photographers were content to buy the best film they could based on reputation or even price and went about shooting it without worry. Others spent considerable time comparing different films to decide for themselves which they preferred. Personally, I always fell somewhere between these two camps, just as today I fall somewhere between those who simply use what should be the best tools and techniques they can and those who attempt to statistically test every component of their digital workflow. And at least for now I've probably had my fill of 14-bit versus 12-bit. Hard drive space is cheap these days, and compact flash cards are coming down in price all the time. My best images generally record a unique moment in time when the light and scene are just right. Or at least that's what I strive for. I generally can't go back and retake images, so I want to capture the best raw files I can while I'm there.