What About 16-Bit for Printing?
Anyone who has hung out here much knows that I'm an advocate of working in 16-bits per channel to avoid degradation. So when it comes to printing, does the same advice apply?
Let's start by looking at what is possible. On the Apple side, Mac OS has supported 16-bit printing since OS X Leopard was released several years ago. 16-bit printing was added to Windows around the same time as part of the XPS Print Path in Windows Vista. As such, the two major operating system platforms provide relatively comparable support for 16-bit printing. But beyond this foundational level of what is possible, the two platforms couldn't be more different in what you can actually do.
In response to Apple adding 16-bit printing support to OS X, Epson started releasing print drivers that took advantage of the increased bit depth. But to this day, they have yet to do the same for Windows. To take advantage of the higher bit depths current versions of Windows make possible, one has to step up to more expensive third-party solutions that bypass the Epson drivers. Windows Vista and Windows 7 may make 16-bit printing possible, but Epson doesn't, so most of us on the Microsoft side are constrained to drivers limited to 8-bits per channel.
So if this is the lay of the land, so to speak, how much of a difference does 16-bit printing make? I mean, if it produced noticeably superior results, wouldn't Epson provide support on both platforms? I'm not a Mac OS user so I can't compare directly myself, but I do know plenty of Mac users who have made the comparison. Uniformly, they report little to no difference in side-by-side prints made with 8-bit and 16-bit drivers on OS X.
It's worth considering why this might be so.
8-bits per channel across the three channels of the RGB model yields 256 × 256 × 256, a total of around 16.7 million possible colors. That may seem like a lot, but the same calculation done in 16-bit mode yields about 281 trillion possible colors for each pixel. That's a tremendous range of colors. But researchers estimate that the human eye can perceive no more than around 10 million different gradations of color. This means that both bit depths are capable of rendering more colors than we can distinguish. Given this, it stands to reason that there would be no perceivable difference in prints made each way.
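The arithmetic is easy to verify for yourself. A quick sketch in Python:

```python
# Distinct colors representable at each bit depth: the per-channel
# level count raised to the power of three RGB channels.
levels_8 = 2 ** 8          # 256 levels per channel
levels_16 = 2 ** 16        # 65,536 levels per channel

colors_8 = levels_8 ** 3
colors_16 = levels_16 ** 3

print(f"8-bit:  {colors_8:,} colors")    # 16,777,216 -- about 16.7 million
print(f"16-bit: {colors_16:,} colors")   # 281,474,976,710,656 -- about 281 trillion
```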
But this only serves to bring up yet another question. If we can't tell the difference in a print, why is a 16-bit editing workflow uniformly regarded as capable of producing higher quality results? The answer this time has to do with the word "editing." Any image can be photographically represented to a quality beyond what the human eye can differentiate with only 8-bits per channel. But the same can't be said for editing.
Editing digital images is inherently damaging. If you need to divide a value in half, you can only avoid loss if that value is itself a multiple of two. Ten divided by two gives five, but what about eleven divided by two? There are no fractions in either 8-bit or 16-bit mode. By choosing to round up or round down, you err on one side or the other. Eleven is simply not evenly divisible by two. And each time you make an edit you face the same problem of added error. Successive edits may cancel each other out, but they are much more likely to make the problem worse. The editing advantage of 16-bit mode comes from the fact that we start with much bigger numbers, so the error introduced at each stage is proportionately that much smaller and thereby less significant.
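A small sketch makes the accumulation concrete. The edit chain below is hypothetical (the factors are made up purely for illustration), but the mechanism is the one described above: round to an integer after every step, then compare the result against what exact arithmetic would have produced.

```python
import math

def apply_edits(value, max_value, factors):
    """Apply a chain of multiplicative edits, rounding to an integer
    and clamping to the valid range after every step, as an integer
    editing pipeline must."""
    v = value
    for f in factors:
        v = min(max_value, max(0, round(v * f)))
    return v

# Hypothetical edit chain: darken, brighten, darken again, and so on.
factors = [0.43, 1.9, 0.77, 1.31, 0.52, 1.88]
net = math.prod(factors)           # the net multiplier with no rounding at all

# Start from the same mid-grey in both depths (255 * 257 == 65535).
v8 = apply_edits(128, 255, factors)
v16 = apply_edits(128 * 257, 65535, factors)

ideal = (128 / 255) * net          # exact result as a fraction of full scale
err8 = abs(v8 / 255 - ideal)
err16 = abs(v16 / 65535 - ideal)
print(f"8-bit error:  {err8:.6f} of full scale")
print(f"16-bit error: {err16:.6f} of full scale")
```

With this particular chain the 16-bit pipeline lands more than an order of magnitude closer to the exact answer: the bigger numbers make each rounding step proportionately smaller, exactly as described above.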
The reason why this doesn't matter much if at all for printing is that we really aren't asking the print driver to do that much integer arithmetic. The standard printing workflow involves setting the driver to "no color adjustment" and instead relying on Photoshop to convert from the image's working space to the target printer profile. Yes, between the print driver and the printer itself, the RGB data you print has to be converted into the actual ink values needed to support however many ink colors your printer uses. But a single translation can't add that much error. Remember, it's the aggregate sum of the error from successive edits that starts to cause problems.
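As a rough worst-case sketch of why one translation is benign (this assumes errors simply add, which is a simplification of a real pipeline): a single conversion rounds each channel value once, so its error is bounded by half a quantization step, while n successive rounded edits can stack up to n half-steps.

```python
def worst_case_error(bits, steps=1):
    """Rough upper bound on accumulated rounding error, as a fraction
    of full scale: half a quantization step per rounded operation."""
    levels = 2 ** bits - 1
    return steps * 0.5 / levels

print(f"one 8-bit conversion:     {worst_case_error(8):.3%}")
print(f"ten rounded 8-bit edits:  {worst_case_error(8, steps=10):.3%}")
print(f"ten rounded 16-bit edits: {worst_case_error(16, steps=10):.5%}")
```

The single 8-bit conversion at print time sits at about a fifth of a percent of full scale, well below anything a print can show; it's the repeated-edit case that grows into visible territory.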
Editing requires 16-bits per channel to keep the rounding problem in check. Once you get to printing, 8-bits are enough. If you start with a 12-bit or 14-bit RAW image and print directly from Lightroom, where all edits are non-destructive and combine into a single calculation when you print, or convert to a 16-bit RGB image in Photoshop and use Adjustment Layers to minimize loss, you can then print without worry on either Mac OS or Windows. Even if you start with a perfectly exposed 8-bit JPEG straight from the camera and don't need to do any editing, you can print just fine on both platforms. But if you start with an 8-bit JPEG and need to perform any edits, you should increase the bit depth to avoid problems. Once you're done editing, you can even convert back to 8-bit before printing, but I'd advise against it. You're better off keeping your images at a bit depth sufficient to support any future editing needs. After all, you never really know when you are done editing an image.
I still really wish Epson would provide 16-bit drivers for Windows, if for no other reason than to be sure. In practical use though, the need simply doesn't exist.