EttR (Expose to the Right) … still useful?

Emerald Pools, Zion National Park

Phase One DF with P65+ back, AF 75-150mm at 80mm, 4 seconds at f/14, ISO 50

If you haven’t heard about exposing to the right (EttR), that’s OK.  While the main purpose of this article is to discuss whether or not EttR is still applicable, there is plenty of information here, including links, to help you understand the what and the why of EttR, as well as how and when to use the technique. (Fair warning: this is somewhat technical and possibly a little boring. Even if you choose not to use EttR, understanding what it is and what it offers allows you to make an informed choice.)

A few years ago I wrote a couple of articles about EttR, including quite a bit of information about the legitimacy of the concept.  Fast forward to 2011 and legitimacy is no longer an issue … a substantial number of high-end landscape photographers, as well as many other photographers, use EttR to determine their exposure settings.  Occasionally an article on a blog attempts to discredit EttR, but I’ve never seen one demonstrate a detrimental effect on images that are properly processed in the raw converter. On the other hand, there is plenty of evidence available which demonstrates its benefits.

I’m sure the concept of EttR was discussed at various times in the earlier years of digital capture, but its introduction to and subsequent adoption by the photographic community resulted from the efforts of Michael Reichmann on the Luminous Landscape, who began using the idea after a discussion with Thomas Knoll (co-creator of Photoshop and creator of Adobe Camera Raw).   His introduction of the concept in this article, Expose (to the) Right, in 2003 is the earliest reference I have ever seen regarding the concept of EttR. I’m pretty confident he coined the acronym (EttR), which is now commonplace among many advanced photographers. In 2003 cameras had less dynamic range, and noise (especially at higher ISOs) was significantly higher than in current cameras.  His article laid out the original premise: judge exposure by moving the data as far right as possible in the histogram without clipping highlights.  This moves the shadows into higher ranges, allowing more levels to record differences in detail and increasing the signal-to-noise ratio.  If you then normalize the exposure in the raw converter, the resulting image has better detail and less noise in the shadows.  Rather than go into details about the technique, click the link and read the original article.  It explains it quite well.

Since the concept was first introduced into the mainstream, there have been major advances in camera technology which overcome some of the reasons one would use EttR (better detail and less noise in the shadows), so a logical question might be whether EttR is a concept from yesterday and no longer useful.  I’ll readily admit that at some point noise might be so insignificant (say, zero noise at ISOs below 800) and dynamic range so large that EttR might not offer any benefit. We aren’t there yet.  Not only do I believe it is still a useful concept, I think it is the proper way to determine exposure for digital photography.

Before going any further, there are two important points I would like to make.  First, using EttR correctly to determine your exposure never hurts.  No file captured by using this technique is unusable or worse than a file exposed “normally”.  Second, while it seems like a technique which can only be used some of the time, in fact it can be used 100% of the time.  Granted, it won’t offer obvious benefits every time, and sometimes it ends up at the same setting as conventional exposure calculations would choose, but regardless of the circumstance, you can use the technique to determine your exposure.  So if it is the correct way to determine exposure, why don’t camera makers build their cameras that way?


Most cameras base their exposure evaluation on the goal of achieving an acceptable JPEG image.  They do so by using legacy film exposure measurements, then processing the resulting data into an acceptable JPEG.  When digital capture technology was being developed, it just seemed the logical way of doing things.  It also made it easier for analog photographers to transition to digital capture. It doesn’t have to be that way … camera manufacturers could bias the exposure value of the capture any way they wanted and apply the appropriate processing to yield an acceptable-looking JPEG, so it’s sort of a case of “if it ain’t broke, don’t fix it”.

Additionally, and this is probably the main reason: by using conventional exposure techniques, you end up with a cushion in the capture protecting against overexposure, which would leave highlight detail unrecoverable. EttR isn’t for everyone … you must understand what you are doing – how to use it when capturing and how to process the resulting files.  Expose it wrong and you might end up with blown highlights without detail.  Process it wrong and you might end up with an image which just doesn’t look “right”.


A typical film characteristic curve

So what’s the difference, and why does it matter?  In a nutshell, film is not a linear capture medium, while digital is.  As film approaches its latitude at either end of its exposure range, it compresses the information … so the resulting capture is comprised of a “toe” (deeper shadows), midtones, and a “shoulder” (brighter highlights).  Humans don’t see linearly – and engineers used this property of film to make sure it recorded images very much like we would expect to see them. Because of this property, the best way to expose film was to estimate the midtones … basically assume the scene would average out to an 18% gray.  This would leave enough latitude to make a print.

A digital camera sensor records light linearly … no compressing of information as we approach the sensor’s limits.  Unmodified, the linear data wouldn’t look right most of the time … nothing like we expect to see. This means the data must be corrected when it is converted by applying a gamma correction, be it in the camera before making a JPEG or later in the pipeline when working with raw images on a computer.  This function is built into the camera’s firmware or raw processing software, so you never see the information without the gamma encoding – much like prints from film, digital files always look somewhat normal.  There is no single “right” gamma setting; in fact, many corrections in software, as well as the various scene settings in cameras, are basically altering this gamma correction to achieve a different result.
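To make the gamma step concrete, here is a minimal sketch in Python, assuming a simple power-law gamma of 2.2. Real cameras and raw converters use more elaborate proprietary tone curves, so treat the exponent and values here as purely illustrative:

```python
# Sketch: apply a simple power-law gamma (2.2) to a linear sensor value.
# The 2.2 exponent is an illustrative assumption, not any vendor's curve.

def gamma_encode(linear, gamma=2.2):
    """Map a linear sensor value in [0, 1] to a gamma-encoded value in [0, 1]."""
    return linear ** (1.0 / gamma)

# A midtone at 18% of full scale in linear space...
midtone_linear = 0.18
# ...lands near the middle of the encoded (display) range:
print(round(gamma_encode(midtone_linear), 3))  # about 0.459
```

This is why the raw linear data looks far too dark before encoding: a midtone that occupies only 18% of the linear range ends up near the middle of the encoded range we actually view.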

Since digital capture is basically linear, why should we expose it like film?  If you are processing the raw data out of camera (on your computer), you can record the information anywhere inside the dynamic range of the camera, then normalize the exposure.  This isn’t “manipulating” the information … it’s just a different way of recording and processing the data.  In fact, the end result is pretty much identical, with the advantage of possibly lower noise and better detail in the shadows.  So unlike film exposure techniques, which assume the average scene blends to 18% neutral gray, the idea here is to record as much detail as possible with as high a signal-to-noise ratio as possible (as far to the right of the histogram as possible without blowing the highlights).
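The signal-to-noise benefit can be shown with a little arithmetic. This sketch assumes a shot-noise-limited capture (noise grows as the square root of the signal) and uses made-up photon counts; it ignores read noise, which makes the real-world benefit in deep shadows even larger:

```python
# Sketch of the EttR idea in linear terms: capture with one stop more
# exposure, then normalize back down in the raw converter.  Photon (shot)
# noise grows as the square root of the signal, so each extra stop of
# exposure improves the signal-to-noise ratio by sqrt(2).
import math

def shot_noise_snr(photons):
    """SNR of a shot-noise-limited signal: photons / sqrt(photons)."""
    return photons / math.sqrt(photons)

base = 1000          # photons collected at a "normal" exposure (illustrative)
ettr = base * 2      # one stop more exposure, still below clipping

# Normalizing in software divides signal and noise together, so the
# improved SNR from the capture is preserved:
print(round(shot_noise_snr(ettr) / shot_noise_snr(base), 3))  # about 1.414
```

In other words, every stop you can safely shift to the right buys roughly a √2 improvement in shot-noise SNR, and the normalization step in the raw converter costs you none of it.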

So how much does it help?  I could shoot a bunch of tests, but better would be to recommend this article by Jeff Schewe, un-debunking ETTR. Jeff is co-author of Real World Camera Raw as well as Real World Image Sharpening, and is a leading expert in digital capture and raw development.  His article offers some very compelling visual examples of what EttR can do for image capture.


In Jeff’s article (as well as in my previous article) we both mention using EttR when the scene’s dynamic range is smaller than the capability of the camera sensor. I’ve since changed my mind here … to me this just outlines the times when it has its greatest advantages.  But even as the scene’s dynamic range approaches and perhaps exceeds the sensor’s capabilities, judging when the highlights will clip is still the best option. After all, why do we care about 18% gray … something used by engineers when designing film?  The scene might even call for some highlights to be “blown” and pure white, and the detail in the shadows might be more important – but even then we are judging the exposure by what is happening on the right side of the histogram.

There are some important caveats.  To determine the correct EttR exposure, you take an image and adjust the exposure so the data is as far to the right of your histogram as possible without clipping any of the highlights.  Another technique is to enable the highlight warning function of your camera (that’s when overexposed areas blink) and then base your exposure on the setting that results in nothing blinking, or with only very hot specular highlights blinking.  One problem is that the histogram (and the areas that blink) on most cameras is not based on the raw data but on the rendered image data.  This means it is quite possible to show clipping when in fact none of the raw data is clipped.  There are a couple of ways to handle this.

First, you could just say that’s good enough.  Sure, you might get another 1/3 to 2/3 of a stop without clipping, but it may not be significant enough to worry about.  Another method is to experiment with different custom settings in the camera (“scene settings” is the common name) and find one that delivers a highlight histogram close to what the raw histogram would be.  Remember, these settings only affect the creation of in-camera JPEGs; the raw files are not touched. Finally, the camera will normally be pretty consistent in its treatment, so you may just be able to open up another 1/3 of a stop or so after you get the highlight warning or see the histogram clip.
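Since the in-camera histogram is derived from the rendered JPEG, checking the raw file itself on a computer is the definitive test. A sketch of the idea in Python with NumPy, using made-up 12-bit values — in practice you would obtain the sensor values via a raw-decoding library (rawpy is one option), and the white level varies by camera model:

```python
# Sketch: measure highlight clipping directly in the raw sensor data.
# The array and white level below are illustrative, not from a real file.
import numpy as np

def clipped_fraction(raw_values, white_level):
    """Fraction of pixels at or above the sensor's saturation level."""
    return np.count_nonzero(raw_values >= white_level) / raw_values.size

# Illustrative 12-bit data: white level 4095, two pixels at saturation.
raw = np.array([[1200, 4095, 3900],
                [4095,  800, 2500]])
print(round(clipped_fraction(raw, 4095), 3))  # 2 of 6 pixels -> 0.333
```

Doing this check once or twice on files from your own camera is a quick way to learn how much headroom your JPEG-based histogram is hiding.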

There are occasions where the shadow detail is far more important and as a photographer you might make a conscious decision to let the highlights clip.  If so, I still recommend you base your exposure on the right side of the histogram … try to determine just how much clipping will be acceptable to you.  So, as I mentioned before, I use EttR to determine 100% of my important exposures.  One advantage of this is I can also easily see if I need to do an exposure bracket.  If I crowd the right side of the histogram and still see a lot of data and clipping in the shadows, I immediately know the only hope of not having a big black blob in the image is to open up a couple of stops, take another shot, and then use some technique to blend the two images together.

Are there any other caveats or pitfalls … reasons not to use EttR?  Sure.  It does slow the workflow a little.  It can slow your ability to capture.  It absolutely does not work and shouldn’t be used if you are capturing JPEGs rather than raw files. It makes the previews on the back of the camera look “wrong” most of the time … in fact, if you are doing this around a group of photographers and they see your camera’s LCD display, they’ll think you don’t know what you are doing.  And to be honest, there are plenty of times when just letting the camera get you close is the best option, because you don’t have time to calculate every exposure and normal exposures are good enough.  The only way to use EttR is to make a test shot, then look at your histogram and apply an adjustment if necessary.  A standard adjustment often works (I’ve found that increasing exposure by 0.7 stops is pretty typical), but you still have to check the LCD to make sure you didn’t clip anything.
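For reference, the arithmetic behind a stop-based adjustment is simple: each stop doubles the light reaching the sensor. A quick Python sketch — the 0.7-stop figure is my typical starting point, not a universal constant:

```python
# Sketch: converting an exposure bias in stops to a light multiplier.
# Each stop doubles the light, so the multiplier is 2 ** stops.
def stops_to_multiplier(stops):
    return 2.0 ** stops

print(round(stops_to_multiplier(0.7), 2))   # +0.7 stops -> about 1.62x the light
print(round(stops_to_multiplier(-1.0), 2))  # -1 stop -> half the light (0.5x)
```

So a +0.7-stop bias roughly halves again the distance to the clipping point, which is why it often lands the data close to the right edge without going over.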

One last thought.  Whether or not you choose to use EttR, trying to judge exposure by how good the image looks on the back of your camera is not a good idea. Even if you don’t use EttR, you should still examine the histogram to verify your exposure setting.

In 2003, EttR was an extremely valuable tool for improving image quality.  While today’s cameras have made significant progress, I still feel that EttR is the correct way to determine exposure for a digital camera … in fact, it really should be an option built into cameras by the manufacturers.  It’s time we admit we are not using film and use optimum methods for digital photography.  (I know … they’ll never listen to me … ).
