White Skies as a Measure of Progress in the Digital Age
With each new generation of cameras and camera technology, change happens, sometimes so quickly it can seem hard to keep up. But at its core, photography will always be photography, so are all these goodies really bringing us progress? Recently I was thinking about at least one small way they definitely are.
It can be quite bright in the great outdoors. To compensate, the pupils in our eyes automatically dilate and contract to let in more or less light. We generally only become aware of this process when confronted by abrupt changes of brightness such as going from a well-lit hallway into a darkened room. It takes time for our eyes to adjust, and until they do, it can be difficult to see. But other than that, this process happens quite automatically and fluidly, even as we look around and scan our environment. Look up towards the bright sky, and our pupils contract. Look back down towards the land, and they open back up. While we think we saw both correctly exposed together, we actually saw the bright sky first, and the darker foreground second. Our brains combine the constituent parts to create a unified whole. And each part will seem correctly exposed since our eyes and brain automatically made it so when we saw it.
Camera apertures work somewhat similarly, opening wider to let in more light, or stopping down to let in less. But whereas we have brains that knit together what we see around us to form what we think we perceive, cameras don't. In automatic exposure modes, the camera can adjust exposure on the fly, but whatever it comes up with is applied to the entire frame. We see the two halves of bright sky and darker foreground and put them together automatically. The camera has to choose one exposure for the whole image. Expose for the sky, and the foreground may end up lost in darkness. Expose for the foreground, and the sky will probably end up rendered as burned-out white. White skies suck.
There are really only two strategies for solving this problem in order to get the shot. The traditional solution involves cutting back some of the brightness from the sky to bring it more in line with the rest of the frame by means of graduated neutral density (GND) filters. Gray ("neutral" density) at one end and transitioning (graduated) to clear at the other, such filters can be affixed to the front of a camera lens to darken the sky without overly affecting the land. By selecting a filter of sufficient strength and carefully aligning the transition to hide it somewhere near the horizon, you can equalize the brightness across the frame and bring everything within the range a camera is capable of recording.
With enough painstaking preparation, it was often possible to work seeming miracles, but some situations defied even experienced GND users, who had to admit defeat when presented with tree trunks or other major elements that would cross the transition no matter where it was placed. I would regularly use two, three, or even more grads in a given shot, each angled and positioned to address some specific lighting issue and contribute to creating the best shot I could manage. Not everything I tried worked out, but I wouldn't have gotten some shots any other way. Nested in between a collection of lenses in my camera backpack, I typically carried a dozen or more various GND filters. Some were stronger (more dense) than others; some had a fairly sharp transition in the middle, while others were feathered for a smoother blend. And then there was the wonderful "Reverse" graduated neutral density filter championed by Canadian photographer Daryl Benson and manufactured by Singh-Ray. That was the bomb, man.
Now, in the digital world, we have a better alternative. With the help of programs such as Lightroom and Photoshop, we can mimic what happens in our brain. Well, sort of. We can merge portions of multiple shots taken at different exposures, keeping the best exposed parts of each, to form a composite that looks like it was captured as a single frame but was actually shot serially and then combined. This is essentially what our brain does, mentally assembling an image from bits seen with our pupils opened (or closed down) appropriately for each.
Digitally, it doesn't really matter how complicated the transition line (or lines) may be. All we have to do is paint the correct mask for each. Some software makes this process more or less automatic, but to get the best results you generally have to be willing to invest some time. For some shots, though, there really is no other way, so we do it. The results make it all worthwhile.
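For the curious, the heart of that blending step can be sketched in a few lines. This is a minimal, hypothetical illustration, not what Lightroom or Photoshop actually do internally: it assumes two aligned frames stored as NumPy arrays with values between 0 and 1, and uses a simple vertical gradient as a stand-in for a hand-painted mask.

```python
import numpy as np

def blend_exposures(dark, bright, mask):
    """Blend two differently exposed frames of the same scene.

    dark   -- frame exposed for the sky (land too dark)
    bright -- frame exposed for the land (sky blown out)
    mask   -- per-pixel weight: 1.0 takes the dark frame,
              0.0 takes the bright frame, values between blend
    """
    return mask * dark + (1.0 - mask) * bright

# Toy 4x4 "images": uniform gray stands in for real pixel data.
dark = np.full((4, 4), 0.4)     # sky well exposed here
bright = np.full((4, 4), 0.8)   # foreground well exposed here

# A soft top-to-bottom gradient mask: fully "dark frame" at the
# top (sky), fully "bright frame" at the bottom (land).
mask = np.linspace(1.0, 0.0, 4)[:, None] * np.ones((1, 4))

result = blend_exposures(dark, bright, mask)
```

Because the mask is just an array, it can follow any shape at all, which is exactly why a tree trunk crossing the horizon is no longer a dealbreaker the way it was with glass filters.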
So, think about it. Rather than spending an inordinate amount of time before the shot setting up graduated neutral density filters just right to avoid burned-out white skies, we can now spend an inordinate amount of time (in some cases it can seem that way, at least) after the shot, blending a series of differently exposed frames into a final image. It can still be a lot of work sometimes, so is it worth it?
To me, it definitely is. This is real progress, if for no other reason than that it puts us more fully in control. We get the luxury of spending as long as we want getting things right, back in a warm house, rather than shivering in the dark, frantically messing with filters before the rising sun ends our window of opportunity and we have to start shooting, ready or not.
I really like this new way.