Luminance versus RGB Histograms
Histograms are everywhere, but they come in two basic varieties: simple black-and-white (luminance) and color (RGB). RGB color histograms certainly look pretty, but do you really need them? Or do basic luminance histograms tell you all you really need to know?
Once upon a time, all we had was one kind of histogram. It showed the proportion of pixels in an image at each degree of brightness. The horizontal or x-axis represented brightness, or "luminance", with pure black on the far left-hand end and pure white on the far right. The vertical or y-axis represented how many pixels in the image had that brightness. If there were none, or very few, the histogram would be low at that point, near the bottom axis. If there were lots of pixels at that brightness, the histogram would be way up at the top of the graph. A histogram doesn't tell you where in an image a given brightness can be found; it just tells you how much of that brightness there is somewhere.
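The counting described above is simple enough to sketch in a few lines of Python. This is just an illustration, not anything a camera actually runs; the synthetic `image` array stands in for real pixel data, which in practice would come from an image loader such as Pillow:

```python
import numpy as np

# Hypothetical 8-bit grayscale image; in practice this array would come
# from an image loader, e.g. np.asarray(Image.open(...)) with Pillow.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)

# One bin per brightness level: counts[b] is how many pixels have value b.
# Bin 0 is pure black (far left of the histogram), bin 255 pure white.
counts = np.bincount(image.ravel(), minlength=256)

print(counts.sum() == image.size)  # True: every pixel lands in exactly one bin
```

Note that the positions of the pixels are thrown away by `ravel()` before counting, which is exactly why a histogram can't tell you *where* a given brightness occurs.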
An image that had a fairly random distribution of tonal values would have a histogram that was relatively even all the way across from left to right. An image that was fairly monotone would have a huge spike at that brightness with little if anything to either side of that spike. In this way, a histogram helped with judging exposure. An underexposed image would have a histogram bunched up on the left-hand side of the scale. An overexposed one would be bunched up on the right. You get the idea.
But perhaps the single most important use of a histogram is to help look for burned out highlights where detail in an image would be lost. Digital imaging is very unforgiving of overexposed highlights. Once the brightness scale runs up against the far right end of the histogram, there's nowhere else for it to go. Anything brighter than the maximum possible brightness gets recorded as the same value of pure white. And that's bad. Histograms are an invaluable tool for digital photography.
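That pile-up at the far right end of the histogram is easy to check for programmatically. As a rough illustration, here is a hypothetical helper (the `clipped_fraction` function and the synthetic frame are invented for this sketch, not part of any camera API) that measures how much of a channel is pinned at pure white:

```python
import numpy as np

def clipped_fraction(channel: np.ndarray, white: int = 255) -> float:
    """Fraction of pixels recorded at the maximum value, i.e. pure white.

    Anything brighter than the sensor's maximum lands in this one bin,
    so a large fraction here suggests blown, detail-free highlights.
    """
    return float(np.mean(channel == white))

# Hypothetical overexposed frame: a third of its pixels pinned at 255.
frame = np.full((60, 60), 180, dtype=np.uint8)
frame[:20, :] = 255
print(clipped_fraction(frame))  # about one third of the pixels are blown
```

A mid-tone value like 180 can always be darkened in post-processing, but the pixels sitting at 255 are indistinguishable from one another: whatever detail was there is gone.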
But given this, why would you want to complicate things with color? Isn't "overexposure" overexposure? If a digital camera recorded color information separately from luminance, yes. But since it doesn't, the real story is quite different.
A camera sensor records the red, green and blue (RGB) channels separately. The composite luminance channel used by traditional histograms is built from information actually recorded by those three separate RGB channels. Since green is in the middle of the spectrum, with red to one side and blue to the other, it contributes more to the appearance of luminance than do the other two colors. And it's therefore more than possible to burn out the highlights in one channel without impacting the other two at all. My favorite example would have to be photographing flowers. In order to entice insects to come and pollinate them, the colors of many flowers tend to be quite saturated. And if those flowers are red, there won't be much green light present at all. Fill the frame with a close-up of red flowers and the luminance will seem quite respectable or even underexposed, since the green and blue channels contain very little information. At the same time, though, the red channel could easily be horribly burned out and clipped.
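The red-flower scenario can be made concrete with some numbers. The green-heavy weighting below uses the Rec. 709 luma coefficients as one common convention for mixing RGB into luminance (the article doesn't specify which weighting any particular camera uses, and the pixel values here are invented for illustration):

```python
import numpy as np

# Hypothetical close-up of a saturated red flower: the red channel is
# pinned at 255 while green and blue carry almost nothing.
red   = np.full((50, 50), 255.0)
green = np.full((50, 50), 20.0)
blue  = np.full((50, 50), 15.0)

# One common luminance weighting (Rec. 709 coefficients); note how
# heavily green is weighted relative to red and blue.
luminance = 0.2126 * red + 0.7152 * green + 0.0722 * blue

print(luminance.max())       # around 70: looks "safely" underexposed
print((red == 255).mean())   # 1.0: the red channel is completely clipped
```

A luminance-only histogram of this frame would bunch up toward the left and suggest there's plenty of headroom, while the red channel has in fact lost all of its highlight detail.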
If you are photographing a purely grayscale subject then luminance histograms are quite satisfactory. There simply won't be any color present to need an RGB meter for. A luminance histogram will work well too if you are shooting something with a completely random but uniform color distribution. If there's a lot of one color present, there's a lot of every color present since the distribution is uniform. The image may be a cacophony of color, but it all averages out to be true to the luminance histogram representation. But if you're shooting a more real-world subject that may rely more strongly on one color or another at various points in the image, you'll be better served by an RGB histogram. Only an RGB histogram can do justice to the way a digital camera actually records information.
When we as human beings look at something, it's easy to see its color as one variable and its brightness as another. In large measure this is due to the way the eye's iris automatically opens and closes to compensate for brightness without any conscious effort on our part. A look at the physiology of the eye might even seem to support that contention. The photoreceptors in our eyes can be divided into rods, which are extremely sensitive to light but don't sense much in the way of color, and cones, which need more light but give us our ability to see color. So it would be easy to assume that human vision has separate brightness and color channels. But such is not the case. At normal lighting levels, the rods in our eyes contribute very little to the overall sense of vision, with nearly everything coming instead from the cones.
Just as the photosites (pixels) on a digital camera sensor are arranged into red, green and blue elements, with no single photosite sensing a combination of two or more of those constituent colors, the cones in our eyes break down into ones sensitive primarily to red, green or blue light. Curiously, though, while fully half of the pixels on a digital camera sensor sense green light, the cones in our eyes have a far different color breakdown: sixty-four percent of them are primarily sensitive to red light, with thirty-two percent "green" cones and only two percent "blue." Since a large percentage of what we see as brightness (luminance) comes from the green portion of the spectrum, green wins hands down if luminance is our aim. But if our aim is sensitivity, it's red light that wins. Undoubtedly this is why red light has been used in darkrooms and other applications where being able to see at low light levels is important.
All these details are handled more or less automatically by our eyes and our brains. But when recording images with a digital camera, you have to take a more active role in avoiding problems with exposure on a per-channel basis.