RAW Versus JPEG in the Age of Mega Megapixels
Providing more data, camera RAW files are generally preferred over jpeg files for editing. But since the ultra-high resolution of today's mega-megapixel cameras makes so much more data available when shooting, regardless of format, do you really need to worry about RAW anymore?
Let's start with a few numbers for comparison. My first DSLR was the Nikon D100, which shot 6-megapixel images. Some years later, I was shooting with a Nikon D300 that had 12.3 megapixels. Move a bit further forward in time and you'd find me shooting with a Nikon D7100 that produced 24.1-megapixel captures. By comparison, my new Nikon D850 features a whopping 45.7 megapixels. This chronology doesn't include every camera upgrade I have made, but I highlight these for a reason. Each of these leaps in technology roughly doubled the sensor resolution. Along the way, other changes have also taken place. By the time we get to that Nikon D300, it became possible to save RAW files in either the traditional 12-bit-per-pixel standard or the newer 14-bit RAW format, making even more data available to us.
The question at hand, then, is this: does this increasing amount of data at some point justify shooting in jpeg mode, eliminating the hassle of doing the RAW conversion yourself?
To help answer that question, let's take a look at the jpeg side of the ledger. Here, everything gets saved in 8 bits per pixel. This holds true from my early D100 right on through to my new D850. Eight-bit data is all the jpeg file format supports, period. And 8-bit data will always have less precision than either 12-bit or 14-bit data. On any given camera, RAW will always have more bits per pixel than jpeg will. Upgrade your camera, and you upgrade the resolution of both RAW and jpeg, so case closed, right? I mean, you do want the best results you can get from all the pixels you paid for in your new camera, right?
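To put those bit depths in concrete terms, here is a quick back-of-the-envelope sketch in Python. The numbers are just powers of two, nothing camera-specific:

```python
# Distinct tonal levels per channel at each common bit depth.
for bits in (8, 12, 14):
    print(f"{bits}-bit: {2 ** bits} levels")  # 256, 4096, 16384

# Relative precision of the RAW depths versus an 8-bit jpeg channel.
print(2 ** 12 // 2 ** 8)  # 16x finer gradations than 8-bit
print(2 ** 14 // 2 ** 8)  # 64x finer gradations than 8-bit
```

In other words, each extra bit doubles the number of distinguishable levels, which is why the jump from 12-bit to 14-bit RAW was worth making at all.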
But it's potentially not quite that simple. With jpeg, while it is true we are constrained to just 8 bits per channel, we now have separate red, green and blue channels (RGB mode) as opposed to the single-channel RAW format. RGB files such as jpeg use a triplet of values to represent each pixel, one for each color channel. So, an 8-bit RGB jpeg has values from 0 to 255 for each of the three color channels, for each pixel. By contrast, a RAW file interpolates continuous RGB color from a single channel of data by means of an ingenious method of overlaying different colored filters on top of adjacent pixels (photosites). Pixels within a row alternate between two colors, and the rows themselves alternate between two color pairs, offset so that the green photosites form a checkerboard. Assuming one row of photosites is filtered so that pixels record as red, green, red, green, and so on across the row, the next row will be filtered to record green, blue, green, blue pixels, and so on. So, while each pixel records only a single color based on its color filter, there will always be a photosite recording a pixel with the other two RGB colors right next door. RAW conversion software then interpolates the data that actually exists to fill in the missing values and generate full RGB triplets for each pixel. This pattern of varying colored pixels is known as the Bayer Mosaic. And yes, this does indeed imply that fully two thirds of the final RGB numbers were never actually recorded and are the result of guesswork.
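To make that interpolation concrete, here is a minimal sketch in plain Python. The sensor readings are made up, the layout is the common RGGB variant of the Bayer pattern, and real converters use far more sophisticated algorithms than this simple neighbor averaging:

```python
# A tiny 4x4 mosaic of hypothetical sensor readings. Even rows are
# filtered red, green, red, green; odd rows green, blue, green, blue.
raw = [
    [200,  90, 180,  85],
    [ 60,  30,  55,  25],
    [190,  95, 175,  80],
    [ 65,  35,  50,  20],
]

# Demosaic the blue-filtered photosite at row 1, column 1. Blue was
# recorded directly; red and green must be interpolated from neighbors.
r = (raw[0][0] + raw[0][2] + raw[2][0] + raw[2][2]) / 4  # four diagonal reds
g = (raw[0][1] + raw[1][0] + raw[1][2] + raw[2][1]) / 4  # four adjacent greens
b = raw[1][1]                                            # measured, not guessed

print((r, g, b))  # (186.25, 75.0, 30): two of the three values are estimates
```

Notice that the only number in the final triplet that was ever actually measured is the blue one. The red and green values are exactly the "guesswork" described above.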
For the curious following along at home who are wondering why green ends up getting twice as many pixels dedicated to it as either other color (green pixels are present in the color pairing for every row, while red and blue each appear in only every other row), this was no accident. Green sits in the middle of the visible spectrum, and our eyes are more sensitive to green light than they are to red or blue.
This new Bayer Mosaic wrinkle makes it rather difficult to compare pixel depth. How is one to compare 8 versus 12 or 14 bits when all bits are not created (or even recorded) equal? And even if an RGB jpeg has only 8 bits for each red, green and blue value, it has each of these values at every pixel instead of just a single 12- or 14-bit (color-filtered) value at each photosite. Doesn't it then have fully 24 bits total (8 bits times 3 RGB channels) for each pixel? So, shouldn't this comparison really be between 24 bits for jpeg versus only 12 or 14 bits for RAW? That can't be right, or jpeg would come out ahead on any camera, never mind just on new ultra-resolution models like the Nikon D850.
This might lead one to assume that jpeg could win, or at least face little competition, here since it has real red, green and blue channels while RAW has to guess at values not directly captured through the Bayer Mosaic filter pattern. The problem with this idea, though, is that every jpeg started out as a RAW capture, with the conversion happening in your camera rather than on your computer. Your camera can only shoot in RAW. The difference in modes relates only to whether you want your camera to convert to jpeg for you, or whether you'd rather do that yourself, after the fact.
The whole reason for the Bayer pattern is that it just isn't possible to record all three color channels at each photosite. Put a green filter at some point in the pattern, and you can't also put a red or a blue one there. Two things can't occupy the same space at the same time. In the early days of camera sensor development, a design known as the Foveon X3 sensor did indeed attempt to stack layers for all three colors on top of each other, much like the different layers of traditional film emulsions. But owing to technical and market-driven forces, such designs eventually gave way to the almost universal adoption of the Bayer Mosaic. In other words, the potential loss of detail from interpolating the missing values in the Bayer pattern is baked into every image captured by your camera, RAW or jpeg.
If you keep the full RAW capture, you may or may not need the extra bits, but once you convert to jpeg, there's no going back unless you still have the original RAW file too. And whether you need those bits depends in large measure on how much work you need to do on a given image. If you nail it in camera, you should be fine with jpeg. If you underexpose or have other limitations to contend with, you might be better off with RAW.
The real problem is whether you clip data by overflowing the range of possible values in any given color channel. An 8-bit jpeg channel can only store values from 0 to 255. If something is a more saturated red than the maximum value of 255 represents, too bad; the value will be clipped to 255. Overflow all three channels, and you end up recording pure white with no detail. The same problem exists at the lower end of the scale too, but I've generally found fully black shadows more acceptable than burned-out white highlights.
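The clipping itself is simple to picture in code. In this sketch the channel values are hypothetical pre-clip intensities, not anything a real jpeg encoder exposes directly:

```python
def clip8(value):
    """Clamp a channel value into the 0-255 range an 8-bit jpeg allows."""
    return max(0, min(255, value))

# One channel overflowing loses detail in that channel alone...
print(clip8(310))  # 255

# ...while all three overflowing together record featureless pure white.
overexposed = (310, 290, 268)
print(tuple(clip8(v) for v in overexposed))  # (255, 255, 255)
```

Once those three values hit 255 together, nothing distinguishes a bright cloud from a blown streetlight; the original differences are simply gone.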
Don't think of the extra bits in RAW channels as simply allowing you to continue counting beyond the 255 limit, though. The pixel values in a RAW file are an entirely different beast. RAW data is recorded in what is known as a "linear gamma" color space. Technical jargon aside, such values are much more amenable to recording useful data without clipping in extreme exposure situations.
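One way to get a feel for this is to map 14-bit linear counts to 8-bit output through a tone curve and count how many distinct levels collapse together. The sketch below uses a simple power-law curve with gamma 2.2 as a rough stand-in for the true sRGB transfer function a real converter applies:

```python
def encode_8bit(linear, bits_in=14, gamma=2.2):
    """Map a linear sensor count to an 8-bit, gamma-encoded output value."""
    normalized = linear / (2 ** bits_in - 1)  # scale to the 0.0-1.0 range
    return round(255 * normalized ** (1 / gamma))

# Count the distinct 14-bit linear levels that all land on the single
# brightest 8-bit value: highlight gradations RAW keeps but jpeg merges.
collapsed = sum(1 for v in range(2 ** 14) if encode_8bit(v) == 255)
print(collapsed)
```

Dozens of distinct near-white linear levels end up sharing that one topmost 8-bit value, which is why highlight recovery works on a RAW file but gets you nowhere on the jpeg made from it.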
So, let's look at a situation where some area (the sky, perhaps) is overexposed, a situation where manual RAW conversion might help improve an image. Let's assume that it represents ten percent of the image; the exact figure doesn't matter. More pixels on a newer camera only mean that more pixels will be at risk in this ten percent. The problem remains unchanged: ten percent of your image risks losing all detail. Ten percent is still ten percent, regardless of your camera, and regardless of how many megapixels it may have.
The bottom line is that nothing has really changed. No matter what camera you shoot with and how many mega-megapixels it may have, the difference between shooting in jpeg versus RAW remains the same. If you are confident you can nail the exposure in camera, let the camera do the work of RAW conversion for you and save file space by working in jpeg. If you shoot under tricky lighting conditions, or conclude that a bit more work on your part is a reasonable price to pay for the extra insurance of having the full RAW data available to work with, then shoot in RAW.
True as it ever was.