Earthbound Light - Nature Photography from the Pacific Northwest and beyond by Bob Johnson

Photo Tip of the Week

After Converting to 8-Bit, Can You Go Back to 16 Bits Per Channel?

Converting an image from 16 bits to 8 bits per channel necessarily discards a lot of information, and that loss of precision can never be recovered. It's a commonly held belief that once you do this, converting back to 16-bit is pointless. But is that really true?

Let's start by reviewing the nature of the problem. Colors in RGB are described as sets of three numbers, one each for red, green and blue. The larger the number, the more of that color is present at that point. With 8 bits, the total number of possible values is two to the eighth power, or 256. With 16 bits to work with, a similar calculation tells us we have two to the sixteenth power, or 65,536, total values for each color channel.

But since we human beings think more easily in base ten than in powers of two, rather than continuing with these somewhat unwieldy numbers, allow me to simplify things for this article. Purely for the purposes of illustration, let's pretend that colors can be described by regular base 10 numbers using either one digit or two. Our hypothetical 1-digit color mode would therefore give us 10 values (zero through nine), and 2-digit color would increase that to 100 possible color values (zero through ninety-nine). The principle is the same regardless: 16-bit gives us more color values just as 2-digit color would. But it will be easier to show what happens when we convert between bit depths if I stick with numbers the average person can divide in their head. Computers don't mind powers of two, but most of us do.
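
If you'd like to double-check those counts, a few lines of Python (my choice of language here, purely for illustration) do the math:

    # Number of possible values per channel at each bit depth
    print(2 ** 8)    # 8-bit:  256 values per channel
    print(2 ** 16)   # 16-bit: 65,536 values per channel

    # And the simplified decimal stand-ins used in this article
    print(10 ** 1)   # "1-digit" color: 10 values (0 through 9)
    print(10 ** 2)   # "2-digit" color: 100 values (0 through 99)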

Now that we have that established, we need to look at what the additional numbers get used for in 2-digit mode versus 1-digit mode. Whether we compare 65,536 to 256, or the more manageable 100 to 10, the extra values don't get used to describe more saturated colors; they facilitate more precise descriptions of color. That is, "90" in our simplified 2-digit world describes the same color as "9" does in our easier-to-calculate 1-digit world. Likewise "80" corresponds to "8" and so on. The numbers in between 80 and 90 therefore allow us to define discrete colors between these two points that are simply not possible in 1-digit mode. The jump in color from 8 to 9 in our low bit depth example can be broken down into ten smaller steps from 80 to 90 when more bits are available.

Thus, converting from 1-digit to 2-digit color amounts to multiplying by 10, and converting from 2-digit down to 1-digit mode can be accomplished by dividing by 10 and throwing away any remainder (or rounding; it makes no practical difference to the argument being presented). Arithmetic of a similar sort can be done with real 8-bit and 16-bit numbers, but we would need to multiply and divide by 256 rather than 10.
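
Expressed as a quick Python sketch (the function names are my own invention, just to make the arithmetic concrete), the conversions look like this:

    def to_1_digit(value):
        return value // 10     # divide by 10, discarding the remainder

    def to_2_digit(value):
        return value * 10      # multiply by 10

    # The real bit-depth versions would use 256 instead of 10:
    def to_8_bit(value):
        return value // 256

    def to_16_bit(value):
        return value * 256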

So, continuing with our simplified numbering scheme, if we start with an image containing 100 possible values for each channel in 2-digit mode and convert it down to 1-digit color, we clearly lose one digit of precision. We would still be attempting to describe the same colors, but we would be doing so less accurately. Keep in mind that real 8-bit color is still sufficient to create photorealistic images, so we wouldn't really have lost anything we could perceive visually, but we would indeed have lost precision. If we then convert back to 2-digit color, we would still have visually the same image we had before, but any number not evenly divisible by ten would have been truncated (or rounded) to a multiple of ten. If our original image had values in a smooth gradation of color from, let's say, 71, 72, 73, and so on up to 79, all of these would be converted to simply "7" in 1-digit color, and then become "70" after converting back to 2-digit mode. It might not look any different, but it would be.
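
Here is that gradation worked out as a snippet, with the divide-then-multiply round trip standing in for the conversion down to 1-digit color and back:

    gradient = list(range(71, 80))                    # a smooth run: 71, 72, ... 79
    round_trip = [(v // 10) * 10 for v in gradient]   # down to 1 digit and back up

    print(gradient)    # [71, 72, 73, 74, 75, 76, 77, 78, 79]
    print(round_trip)  # [70, 70, 70, 70, 70, 70, 70, 70, 70]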

But if it doesn't look any different, who really cares? Well, you should, at least if you make a habit of such conversions. Each time you do it, you lose a little more accuracy, and eventually it would show. You'd probably notice it first in areas of smooth gradation like those described above. Banding becomes evident when numbers that should be spread evenly across the tonal range get chopped down to just multiples of ten. Each round trip compounds the rounding errors, and eventually things just wouldn't look as smooth as they should. What started as a smooth gradation would become blotchy and banded.
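
A toy simulation (entirely my own, and much cruder than any real editing session) makes the compounding visible. Each cycle applies a slight brighten, a round trip to 1-digit precision, a matching darken, and another round trip; watch the count of distinct values shrink:

    values = list(range(100))                    # a smooth 2-digit gradation
    for cycle in range(1, 4):
        values = [min(99, round(v * 1.15)) for v in values]  # brighten a bit
        values = [(v // 10) * 10 for v in values]            # 2 -> 1 -> 2 digits
        values = [round(v / 1.15) for v in values]           # darken back again
        values = [(v // 10) * 10 for v in values]            # another round trip
        print(cycle, len(set(values)))           # distinct values keep vanishing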

The problem is no better, though, if you convert to a low bit depth and continue editing without converting back, since every edit you make would be constrained by the same lack of precision. But if you convert back to a higher bit depth, subsequent edits can avail themselves of the greater precision possible. You would thereby stop any further rounding problems. The damage done up to that point would already be done, but any future damage would be prevented. To illustrate, if you convert an image to 1-digit color and do something to it, your resulting image will contain nothing but 1-digit numbers. If instead you convert it back to 2-digit color and do the same edit, the results could contain the broader range of values made possible by the extra digit.
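
To make that concrete, here is the same halving edit performed at both precisions, again in our simplified numbering:

    banded = [(v // 10) * 10 for v in range(100)]   # the round-tripped image

    low  = sorted({(v // 10 // 2) * 10 for v in banded})  # halve in 1-digit mode
    high = sorted({v // 2 for v in banded})               # halve in 2-digit mode

    print(low)   # [0, 10, 20, 30, 40]
    print(high)  # [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]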

But if every number in your reconverted 2-digit color image is necessarily a multiple of ten, wouldn't the result of any edit still have problems? That is, say you started with a smooth gradation all the way from one through ninety-nine. Converting that to 1-digit, then back to 2-digit color would give you a bunch of zeroes, followed by a bunch of tens, then a group of twenties, and so on with a bunch of nineties at the far side of what was originally a smooth gradation. Make an edit that modifies the tonality of that image and all the tens would end up sharing the same value, as would all the twenties, thirties, and so on, wouldn't they?

It depends on what that edit is. Some edits take into account only the value of each discrete pixel without regard to that of any neighboring pixels. Others attempt to take the surrounding pixels into account. Take the various interpolation methods supported by Photoshop as an example. "Nearest Neighbor" doesn't factor in surrounding pixels; "Bilinear" looks at surrounding pixels in a simplistic way, while the "Bicubic" options attempt to fully consider the context of each pixel and what is around it. To the extent that surrounding pixels are considered, edits in 2-digit color would yield better results than identical edits in 1-digit mode, even when the image started out with only ten discrete color values, whether those values run from zero through nine or by tens from zero up to ninety. This is because the results can include intermediate values not present in the source, produced by the consideration of surrounding pixels. For example, the closer a "70" pixel sits to an "80" neighbor, the more likely the result would be an intermediate value between the two rather than either of the round numbers you started with. That is, most tonal edits tend to smooth out potential banding problems in an image rather than preserve or even exacerbate them. All this works identically with the real value limitations inherent in 8-bit and 16-bit.
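
A one-dimensional stand-in for neighbor-aware interpolation shows the effect. This is a deliberate simplification of my own, not Photoshop's actual bilinear or bicubic math:

    def smooth(pixels):
        # Replace each pair of neighbors with their midpoint
        return [round((pixels[i] + pixels[i + 1]) / 2)
                for i in range(len(pixels) - 1)]

    print(smooth([7, 7, 8, 8]))      # [7, 8, 8]: no value exists between 7 and 8
    print(smooth([70, 70, 80, 80]))  # [70, 75, 80]: 75 appears, softening the step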

To review, a single conversion from 16-bit to 8-bit and back is not visually noticeable. If it were, printing from 8-bit mode, or even printing a 16-bit image through 8-bit printer drivers, would yield noticeably inferior results. Furthermore, while such a conversion does create a loss of precision, so long as you do convert back to 16-bit, any further degradation is prevented, and even mitigated to a degree.

Yes, I am a huge fan of the 16-bit workflow, and strongly recommend using it whenever possible. But the idea that once you convert to 8-bit mode for any reason there's no going back simply isn't valid. If you plan on doing any further editing, it is always worth converting to 16-bit mode first to prevent damage to your image. This is true whether you start with an 8-bit JPEG or with a RAW capture edited in 16-bit mode that you were compelled to convert to 8-bit for some reason. Of course, if you find yourself faced with the need to convert to 8-bit, I'd suggest thinking hard about whether you really need to. You're always better off staying in 16-bit mode, and there are likely other ways to accomplish whatever you want to do. Be kind to your images.


Date posted: May 6, 2012

 

Copyright © 2012 Bob Johnson, Earthbound Light - all rights reserved.