Are Adjustment Layers Really Still Necessary in 16-bit?
The use of adjustment layers helps to maintain image quality by avoiding the buildup of artifacts that can result from a series of successive individual edits. But if you're editing in 16-bit mode, you've got a heck of a lot more data to work with. Each pixel value is so much more precise than it would be in 8-bit that you have to wonder: do you in fact now have data to spare? That is, do you really still need to worry about using adjustment layers at all?
I got asked the above question earlier this week, and I have to admit it made me stop and think for a minute. The premise is an interesting one for someone like me who has been known to take a few digital photographs in my time and also writes about digital imaging. I mean, I want the highest quality I can get, but I don't want to spend time doing something if there's no real benefit in it. So it got me thinking. And since it did, I figured it would be worthwhile sharing my thoughts with my readers here.
Let's start with a few basics: Most of us work on images in some RGB color space where individual pixels are defined by numbers for their red, green and blue components. If those numbers are 8-bit values, each holds an integer between 0 and 255. That's not a large range of possible values per channel, but combined it affords us 256 x 256 x 256 = 16,777,216 distinct combinations of red, green and blue, which gives the perception of photographic quality. But when looking at what problems can happen, we need to consider each channel separately. Imagine, for instance, that the red value ends up as 135 instead of 136 somewhere in an image you care about. There are 65,536 colors that can only be created with a given red value, and missing the correct red by one means those colors are now out of our reach.
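If you like to see the arithmetic spelled out, here's that channel math in a couple of lines of Python:

```python
levels = 256          # an 8-bit channel holds integers 0 through 255
print(levels ** 3)    # 16777216 distinct RGB colors (~16.7 million)
print(levels ** 2)    # 65536 colors share any one particular red value
```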
Adjustment layers help mitigate this issue by consolidating the actual implementation of an entire series of edits into just a single mathematical computation in the software. If you directly edit an image to adjust brightness, then contrast, then change saturation, and so on, each edit may result in rounding errors to the pixel data, and those errors add up. If you later decide you overdid the saturation a bit and tone things down more, that edit compounds things further. The errors that result from each edit continue to build even when one edit undoes all or part of what a previous edit did. On the other hand, if you do the same thing with adjustment layers, all edits effectively get applied only once, and they get applied together as if you had done them all as a single edit. Rather than suffering the combined effects of a series of edits, you never have to lose more than what a single edit causes. It's just that with adjustment layers, that single edit did everything simultaneously.
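To make the rounding argument concrete, here's a deliberately exaggerated toy sketch in Python. It is not Photoshop's actual math, just a stand-in quantizer, but it shows how a darken-then-brighten pair behaves when applied stepwise versus combined:

```python
def q8(x):
    """Quantize to the nearest 8-bit level, clamped to 0..255."""
    return max(0, min(255, int(x + 0.5)))

v = 3
# Direct edits: the image is quantized back to 8-bit after EACH step.
stepwise = q8(q8(v * 0.5) * 2.0)   # darken by half, round, then double
# Adjustment-layer style: the edits are composed first, rounded once.
combined = q8(v * 0.5 * 2.0)
print(stepwise, combined)          # 4 3 (the stepwise result drifted)
```

The halving step rounds 1.5 up to 2, and doubling then bakes that error in, while composing the two edits first leaves the pixel untouched. Spread across a smooth gradient, drifts like this are exactly what shows up as banding.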
But if we are working in 16-bit mode, rather than only having 256 possible values for each channel, we now have 65,536 reds to choose from, and the same for green and blue. These values don't define a broader range of reds (brighter or perhaps more saturated); they define greater precision across the same range of red colors we had in 8-bit. The extra bits get used to store fractional values between each of the 8-bit colors. Think of 8-bit editing as being confined to whole numbers, whereas 16-bit opens things up to the world of fractions and digits to the right of the decimal point. Now if we miss a given shade of red due to an editing issue, we're only off by a fraction of how much we would have missed it by in 8-bit. Perceptually speaking, it becomes a much narrower range of colors we can no longer reach. While a whole-number error in 8-bit might be noticeable as banding in an area that should have a smooth gradation of color, a fractional error in 16-bit from the same cause likely won't show even to a discerning viewer.
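Running the same toy darken-then-brighten pipeline at 16-bit precision shows why that fractional headroom helps. Again, this is a sketch rather than Photoshop's internals (my quantizer assumes a full 0..65535 scale, and the 257 factor simply maps an 8-bit level onto it):

```python
def q16(x):
    """Quantize to the nearest 16-bit level, clamped to 0..65535."""
    return max(0, min(65535, int(x + 0.5)))

v16 = 3 * 257                          # the 8-bit value 3 on a 16-bit scale
stepwise16 = q16(q16(v16 * 0.5) * 2.0) # darken by half, round, then double
back_to_8bit = int(stepwise16 / 257 + 0.5)
print(back_to_8bit)                    # 3 (the half-level error no longer shows)
```

The intermediate rounding still happens, but now it's an error of half a 16-bit level, which is only about 1/500th of an 8-bit level, so the pixel lands back on the value it started from.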
So the question is, if you have so much less need for concern when working in 16-bit, can you dispense with the need for adjustment layers at all, relying instead on all those fractional pixel values to make any errors subtle enough that they won't show? As I said at the outset, an interesting premise to be sure.
Up to a point, I'd have to say that there is indeed something to this, so long as you don't obsess over an image to the point where the loss from countless edits starts to show even with all that extra data to work with. But I think whatever truth there may be in this misses the broader reasons to use adjustment layers.
Not only do adjustment layers help minimize image degradation in digital editing, they provide a degree of control not otherwise possible when optimizing an image. Many editing dialogs in Photoshop are non-trivial. Some have enough sliders and controls to initially scare away all but the most serious users. Setting all those controls just where you want them can require a fair degree of attention to detail. If you perform such changes as direct edits on an image, those changes are baked into the image as soon as you close the dialog. If you want to go back and modify your settings, you have to recreate them from scratch: their results are evident in the image, but the sliders and settings themselves are gone. If those same changes are made via an adjustment layer, the specific settings you used are still there, easily available for future tweaking. Simply by double-clicking on the adjustment layer icon, you are right back in the exact dialog with everything set where you last left it, so you can change only what you came to change without redoing all the other settings as well.
Adjustment layers not only help to preserve your image quality, they also help save you time. Time is valuable, and that holds true regardless of 8-bit versus 16-bit.