What is Tonal Range?
A rich range of tones is vital to the success of both color and black-and-white images. In digital photography, tonal range is limited by the sensor’s dynamic range, that is, its ability to capture a wide enough distribution of tones to suit the purpose of the photo.
Factors that influence the tonal range of the final image include subject reflectance and lighting. A highly reflective subject under intense lighting produces a span of image tones that is likely beyond any sensor’s capabilities.
Light size and quality, harsh or soft, also control the tonal range of a scene, as do special-effect filters: polarizing filters, for instance, are known to increase color vibrancy and expand tonal range. Diffusers and similar light-control equipment lessen contrast, thus reducing tonal range.
The bit depth used to encode an image also affects its tonal range. Eight bits yield 256 discrete levels of information, the minimum number that produces visually continuous tones, while 16 bits yield 65,536 discrete levels.
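The arithmetic behind those level counts is simply two raised to the bit depth. A minimal sketch in Python (the function name is my own, for illustration):

```python
def tonal_levels(bits: int) -> int:
    """Number of discrete tonal levels a given bit depth can encode."""
    return 2 ** bits

print(tonal_levels(8))   # 256 levels: the minimum for visually continuous tone
print(tonal_levels(16))  # 65,536 levels: far more headroom for editing
```

Note that these counts are per channel; an 8-bit RGB image stores 256 levels for each of red, green, and blue.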
As discussed in an earlier post, the famous American landscape photographer Ansel Adams devised what is known as the Zone System. The Zone System can be used to obtain correct exposures under different circumstances, no matter how tricky.
Good knowledge of the Zone System, along with solid experience applying it, allows you to accurately shift your results up or down the tonal scale, and to contract or expand the range they occupy on that scale for technical or creative purposes.
Pre-Visualization as a First Step
The truth is that nothing in art, whether graphic design, fine art, photography, 3D and animation, or even film, is a faithful record of real life. A work must involve a certain level of personal, even unrealistic, manipulation before it deserves to be seen as art.
In photography, we often stumble upon a scene that seems perfectly beautiful. We hurry to set up our gear and start shooting away, only to get back home, look at the photos, and find the results disappointing.
The truth of the matter is that the human visual system has a great ability to quickly scan a scene, focusing on interesting parts while disregarding the mundane. A camera and lens cannot intuitively do that; they have to be directed so that what they capture in the final image is actually what we intended.
The human visual system is also highly sophisticated in its response to light, how it falls on different areas of a scene, and how it changes over time. For example, a person would still recognize a blank piece of paper as white whether it was brightly lit or lying in the shade.
The human eye is also quick to adapt to differences between areas of highly contrasting light, such as bright highlights and extreme shadows, or when light gradually changes in quality, intensity, or position, such as during sunrise and sunset. This is something a camera simply cannot do.
In order to end up with a satisfying rendition of an actual scene, you need to visualize that scene before even making any exposure or technical decisions. You have to have an image in mind that you’d like the view in front of you to turn out like.
You also need to train yourself to disregard what your eyes tell you and to think in terms of your specific camera sensor and its capabilities. Envision how light is falling on your scene and how its properties will change with time. Would that change better serve your purposes if you waited a little longer or came back later? Could you recompose the shot to better accentuate your intentions?
This opens the door to an endless number of creative possibilities. Once you figure out what you want, and how you would go about getting there, the rest is really just technical.
For example, let’s assume you have a blonde model dressed in black in front of you. You first start considering the two extremes of the scenario: the darkest part of your scene, and the brightest part of your scene.
At first glance, the black dress the model is wearing might appear to be the darkest area of your scene, so you would hurry to place it in Zone III (two stops below middle grey, Zone V). But if you observe closely, you might find an area of deep shadow with a tone darker than the black dress.
In this case, the dress would actually render as a darker tone of grey, though not quite as dark as the shadow area of your scene. Had you not noticed that, you would probably have ended up with clipped shadow detail in your final image.
In the same way, you might hurry to treat an apparent white as the brightest part of a scene, but on closer inspection you might find an even brighter value somewhere, such as a highly reflective metal surface. The actual white would then render as a very light shade of grey, though not as light as the metal surface.
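The zone placements above reduce to simple arithmetic: adjacent zones are one stop apart, with middle grey at Zone V. A rough sketch, assuming the usual eleven-zone scale (0 through X); the function name is my own:

```python
def zone_offset(zone: int) -> int:
    """Exposure offset in stops relative to middle grey (Zone V)."""
    return zone - 5

# Placing a shadow on Zone III means metering it, then exposing
# two stops less so it renders dark while keeping detail.
print(zone_offset(3))  # -2 stops
print(zone_offset(7))  # +2 stops (a bright, textured highlight)
print(zone_offset(5))  #  0 stops (middle grey, as metered)
```

This is why noticing a tone darker than the dress matters: the darkest value gets the Zone III placement, and everything lighter lands on a higher zone automatically.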
Digital photography actually makes the visualization process even easier than film did. Back in the day, photographers such as Ansel Adams carried a Polaroid camera with them in order to get a glimpse of how a scene would actually turn out with the specific settings they intended to apply.
Of course a Polaroid would just give an estimate that is not identical to print paper, but it did help them get there. In the same sense, LCD displays on camera backs can be used to help you estimate what your decided settings would yield in the end result.
The image you see on the LCD display is just the camera’s interpretation of the shot taken (which could very well vary from the result you actually end up with) but it does help you get there.
Most cameras will also notify you of potential clipping in highlight areas, so you can use this information to better expose your shot. Alternatively, you can observe the histogram, which represents how your tones are distributed and the range they occupy on the scale.
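A histogram is just a count of pixels per tonal bin, shadows on the left, highlights on the right. A minimal sketch with hypothetical 8-bit pixel values (a real camera computes this from the sensor data):

```python
def histogram(values, bins=8, max_value=255):
    """Count pixels falling into each tonal bin, from shadows to highlights."""
    counts = [0] * bins
    width = (max_value + 1) / bins  # tonal span covered by each bin
    for v in values:
        counts[min(int(v / width), bins - 1)] += 1
    return counts

pixels = [0, 12, 12, 130, 200, 200, 200, 255]  # made-up sample tones
print(histogram(pixels))  # [3, 0, 0, 0, 1, 0, 3, 1]
print(pixels.count(255))  # 1 pixel at pure white: a potential clipped highlight
```

A pile-up of counts against either end of the scale is the histogram's way of warning you about clipped shadows or highlights, which is exactly the information the camera's blinking-highlights display summarizes.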