The myth of “sharp” captures …

Yes, you read that right. OK, maybe it’s not really a “myth,” but sharpness is a relative term, and when discussing captures, “acceptably sharp” might be a better one, since perfect sharpness is probably unachievable. Understanding all of the elements that conspire against sharpness when capturing images helps you control them and maximize the quality of your captures.

For example, most photographers understand depth of field as the range of distances that will be “in focus” based on the focus setting of the lens and the f-stop. What most don’t realize is that depth of field itself doesn’t really exist. A lens can only project a single plane that is actually in focus. What we regard as depth of field is based on the limitations of the recording medium: areas of the image that are not on this plane can still appear in focus because the recording medium can’t resolve the lack of focus.

It isn’t difficult to visualize this phenomenon, and those who shot film have seen it firsthand. I can remember many times when a 4×5 proof print of a family looked sharp but the 16×20 was blurry. Almost every negative had to be examined with a loupe before presenting the proof to a customer, to ensure a high-quality enlargement could be made. Pretty much every digital photographer has, on more than one occasion, viewed an image on their monitor that looked good, only to zoom in to 100% and find it isn’t sharp. The image at the top of this page looks pretty good on the web, and OK for prints up to about 11×14, but larger than that areas become soft, caused by a defective lens (for more about what happened you can read this). Here is a 100% portion of the church, and you can see it isn’t sharp.

This is why “depth of field” works … even though parts of the image are not sharp, the film or sensor can’t resolve that fact, so they appear sharp. The higher the resolution of the recording device, the less effective this illusion of depth of field is, which means “depth of field” is shallower for most current digital cameras than for film of the same format. Hyperfocal focusing techniques have been used for a long time, but I’ve found that with my P65+ back the distance scales on the lenses aren’t accurate; because of the high resolution of the sensor, my actual DoF is more like 50-60% of the marked range.
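To put some rough numbers on that, here is a quick sketch in Python using the standard hyperfocal approximations. The lens, aperture, focus distance, and the two circle-of-confusion values are illustrative assumptions I picked for the example, not measurements from my own gear; the point is simply how much the “acceptably in focus” band narrows when the circle of confusion is tightened to suit a high-resolution sensor.

```python
# Rough sketch: how the circle-of-confusion choice changes usable depth of field.
# Formulas are the standard hyperfocal approximations; the CoC values below
# are illustrative assumptions, not measured numbers for any specific back.

def dof_limits(focal_mm, f_number, focus_dist_mm, coc_mm):
    """Return (near, far) limits of acceptable focus, in millimetres."""
    h = focal_mm**2 / (f_number * coc_mm) + focal_mm      # hyperfocal distance
    near = h * focus_dist_mm / (h + (focus_dist_mm - focal_mm))
    if focus_dist_mm >= h:
        far = float("inf")                                # at/beyond hyperfocal: sharp to infinity
    else:
        far = h * focus_dist_mm / (h - (focus_dist_mm - focal_mm))
    return near, far

focal, aperture, focus = 80.0, 11.0, 5000.0               # 80mm lens at f/11 focused at 5 m (assumed)

# A traditional film-era CoC for medium format vs. a tighter value for a
# high-resolution sensor (both assumed for illustration only).
for label, coc in [("film-era CoC 0.05mm", 0.05),
                   ("high-res sensor CoC 0.02mm", 0.02)]:
    near, far = dof_limits(focal, aperture, focus, coc)
    print(f"{label}: acceptably sharp from {near/1000:.2f} m to {far/1000:.2f} m")
```

With these made-up numbers, the film-era assumption reports roughly 3.5 m to 8.6 m of apparent focus, while the tighter one reports only about 4.3 m to 6.0 m, from the same lens, aperture, and focus distance.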

Also, since today’s digital cameras have higher resolution than film, not only can they record the softness related to depth of field, they can even record the weaknesses of the lens itself. So while stopping down to f/22 may indeed make more of the image appear in focus, the entire image will appear soft due to diffraction, which occurs as light bends around the edge of the diaphragm and is scattered, somewhat like lens flare. This means that as you increase apparent depth of field by stopping down, you may also degrade the sharpest parts of the image with diffraction. Every lens is different, and different camera/sensor combinations will show this effect differently. Here is an example.

Full image: Nikon D7000 with 24-70mm at 24mm. This is the entire image; below are 100% sections at f/5.6, f/8, f/11, f/16, and f/22.

As you can see, the higher the f-stop, the softer the image. So while more of the image comes into focus from “depth of field,” the image as a whole is also softened. Unlike defocus blur, this is more a matter of the data being polluted by diffraction, and sharpening techniques are more effective at improving diffraction softness than at rescuing areas that are simply out of focus. Most lens/camera combinations perform best between f/8 and f/11. Going beyond that can be acceptable depending on the type of image, the amount of micro detail, and the need for sharpness. Other techniques, such as focus stacking, can also be used to overcome depth of field limitations, so understanding those limitations is the best way to choose the right technique for any given capture.
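If you want a feel for where the softening kicks in, the blur spot from diffraction (the Airy disk) grows in direct proportion to the f-number, so you can compare it against the sensel size. The sketch below is Python with assumed values: green light at about 550nm, a roughly 4.8µm pixel pitch in the ballpark of a 16MP APS-C sensor like the D7000, and a “visible once the disk exceeds about twice the pitch” cutoff that is only a rule of thumb.

```python
# Rough sketch: estimating when diffraction starts to soften a capture by
# comparing the Airy disk diameter to the sensor's pixel pitch.
# Wavelength, pixel pitch, and the 2x-pitch threshold are assumptions
# chosen for illustration.

WAVELENGTH_UM = 0.55        # green light, in micrometres
PIXEL_PITCH_UM = 4.8        # assumed sensel size

def airy_disk_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    """Diameter of the Airy disk to its first dark ring: d ~= 2.44 * wavelength * N."""
    return 2.44 * wavelength_um * f_number

for n in (5.6, 8, 11, 16, 22):
    d = airy_disk_diameter_um(n)
    note = "diffraction visible" if d > 2 * PIXEL_PITCH_UM else "pixel-limited"
    print(f"f/{n:<4} Airy disk ~ {d:5.1f} um  ({note})")
```

On those assumed numbers, the disk overtakes a pair of sensels around f/8 and is several times their size by f/22, which lines up with how quickly the crops above fall apart at the smallest apertures.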

All of this is physics, but there are other, more important issues affecting image sharpness that are to some degree under the control of the photographer. Understanding these limitations can help photographers maximize the sharpness of their captures.

There are several important variables that affect image sharpness, but the three most important are lens optical quality, lens/camera focusing accuracy, and movement. Achieving perfect sharpness requires all three to be perfect … something which just isn’t going to happen. Fortunately, despite the limitations, acceptable sharpness isn’t difficult to obtain, but if you are striving for maximum sharpness, you must be completely aware of all of these factors.

Optical performance can be measured and quantified. When purchasing a lens you have a few options, such as examining MTF (modulation transfer function) charts, examining sample images, and reading internet discussions about lens performance. Lenses can have various problems affecting sharpness, including poor or cheap designs that result in poor focus, corner or edge softness, too little or too much contrast, color casts, pincushion distortion, and, perhaps most important, chromatic aberration. Some of this can be corrected in software, but some lenses just can’t achieve sharpness. Even the same model of lens from the same manufacturer can vary greatly from copy to copy in sharpness, so it’s important to test a lens to make sure the one you purchased performs at least up to specification. Some lenses are optimized for particular tasks (such as macro lenses) and may not perform as well for other types of imaging, so be sure to understand this as well. Typically prime lenses are sharper than zoom lenses, although some zooms achieve good sharpness through most of their range, and the convenience of a zoom, along with the ability to frame without cropping away data, often makes them a good choice (not to mention that some primes aren’t that good).

Autofocus and manual focusing techniques are also critical. Most autofocus systems and lenses are designed to reach an acceptable point of focus, but that does not necessarily mean critical focus. Nearly all lens/body combinations exhibit at least a minor amount of back- or front-focusing, meaning they rarely take a picture that nails focus perfectly. Despite that, they are still probably more accurate than manual focusing by the photographer, especially with today’s cameras, which no longer sport the best viewfinders for manual focusing (the manufacturers don’t see the need). When using a tripod, Live View can in some circumstances be very effective for dialing in focus by zooming in 10x and focusing manually, but here again, nailing focus is tough and somewhat rare. Fortunately, most of the time autofocus still achieves good results, as long as the photographer understands its limitations and can compensate where it may not be accurate enough.

Movement is another matter. While there are certainly techniques to minimize movement, obtaining a perfect image with no movement at all may be practically impossible. So here again, sharpness is relative, and the result may be acceptably sharp rather than perfectly sharp. Opting for a fast shutter speed and perhaps a higher ISO, using a tripod, or using stabilized lenses are important tools for reducing movement, but understanding just how difficult it is to eliminate movement emphasizes how important it is to apply every tool available to rein in micro movements.

Why? Imagine you hold a laser pointer in your hand and “point” at a small circle on a wall 50 feet away. Start with a 10″ circle. It’s pretty easy to keep the pointer inside the circle. As you make the circle smaller and smaller, it becomes more challenging, and if the circle becomes as small as the projected point of light, it’s pretty much impossible. Now think of the image being projected by your lens onto your sensor, but think of it broken down into small dots. Each dot needs to land on a specific sensor site, and if the camera moves, the information will spill over into another site. That doesn’t sound too hard, until you realize just how small those sites are and how little movement it takes for the information to miss the intended site or be spread across more than one.

A good example is the Canon 5D Mark II. This is a higher-resolution camera, but since it’s full frame, its sensel density is similar to many lower-resolution dSLRs that use an APS-C size sensor. This camera has 21.1 million sensels crammed onto its 24×36mm surface, which means each individual sensor site is about 6.4µm in size. (The µm stands for micrometers, where 1µm is 1/1,000,000 of a meter. This unit is often also called a micron, although technically that term is no longer official.) So how big is a 6.4µm sensel? Here are some comparisons …

A human hair is between 20 and 180µm in width.  A typical red blood cell is about 8µm in size.  A piece of standard copy paper is about 100µm thick.
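If you want to check the arithmetic behind that 6.4µm figure, it is just the sensor area divided by the number of sensels, then a square root to get the pitch of one site. A quick sketch, using the published dimensions and pixel count quoted above:

```python
# Sketch of the arithmetic behind the 6.4 micron figure: divide the sensor area
# by the pixel count and take the square root to get the pitch of one sensel.

import math

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0     # full-frame sensor
SENSEL_COUNT = 21.1e6                      # Canon 5D Mark II class sensor

pitch_mm = math.sqrt(SENSOR_W_MM * SENSOR_H_MM / SENSEL_COUNT)
print(f"Sensel pitch ~ {pitch_mm * 1000:.1f} um")   # ~ 6.4 um
```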

This means that for no camera movement to be recorded at all, the projected image must shift by less than the size of a red blood cell during the exposure. Anything more than that spreads the information across more than one sensel, and larger movements can spread it across more than just a couple of pixels.
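To put a number on the laser-pointer analogy: for small rotations, the blur on the sensor is roughly the focal length multiplied by the angle the camera turns during the exposure, so you can work out how little rotation it takes to smear a point across neighboring sensels. The focal length below is an assumption picked for illustration; the 6.4µm pitch is the one from the example above.

```python
# Sketch: how little angular camera rotation it takes to smear detail across
# more than one sensel. For small angles, blur on the sensor ~ focal length x angle.
# The focal length is an assumed value for illustration.

import math

FOCAL_MM = 50.0            # assumed lens
PITCH_UM = 6.4             # sensel size from the example above

# Maximum rotation (in radians) that keeps the blur within one sensel.
max_angle_rad = (PITCH_UM / 1000.0) / FOCAL_MM
max_angle_deg = math.degrees(max_angle_rad)

print(f"Allowed rotation during exposure: {max_angle_rad:.2e} rad "
      f"~ {max_angle_deg * 3600:.0f} arc-seconds ({max_angle_deg:.4f} deg)")
```

That comes out to about 26 arc-seconds, less than a hundredth of a degree of rotation during the exposure, before detail starts spilling into the next sensel.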

I don’t know about you, but despite the amazing technology of image stabilization, fast shutter speeds, and the like, it’s hard to imagine the camera not moving by even that microscopic amount. Fortunately, the goal is acceptably sharp.

The point of this article is that it takes all the tools at one’s disposal to capture the sharpest image possible, since every image will have some imperfections. Eliminating those as much as possible with the correct tools is the only way to ensure you obtain maximum sharpness. I have many images where I used what I believed at the time was good technique, including a good tripod and head, mirror up, a cable release or self-timer, etc., only to find out later that I didn’t nail it, and thus the size I can print the image is limited.
