I’m a professional academic and our currency is publication. To publish in academic journals, one must explain, in detail, precisely how experiments were carried out so that the readership might replicate published results. Here I give you the Materials and Methods section for my photography. Because this is a marathon post, I’ve included a table of contents that will take you from place to place and allow you to skip my many digressions.
Table of Contents
What is HDR? and What HDR isn’t.
What is dynamic range?
Why use HDR?
Dynamic range versus bit-depth, a complicated topic explained.
How does HDR solve the dynamic range problem?
How to produce HDR images.
How HDR and human sight interact.
What can I achieve with HDR?
Maybe I should have called this page something polemical like “HDR University”
What is HDR?
High Dynamic Range photography is a name we apply to a growing set of digital photography capture and post-processing methods. HDR is just a name! No universal technique or method defines HDR photography – it is what you want it to be. Cast aside any prejudice you have about what HDR “looks like” and read on to learn more.
I might as well ask “What is photography?” There are far too many different styles and uses for high dynamic range file formats to cogently synthesize them all here, and I feel a bit like I am carrying coals to Newcastle in explaining HDR photography on the internet. Therefore I am going to give three answers to the question “What is HDR photography?”
Answer #1 is for the general public, #2 is for the technorati and #3 is for the photographers.
1). High Dynamic Range photography is a growing set of capture, rendering and post-processing techniques particularly popular with amateur DSLR photographers. These methods allow facile production of high-contrast, colorful, and dramatic images that have texture and detail in the highlights and the shadows. See #2 to appreciate its potential.
2). High Dynamic Range photography is a process that uses multiple exposure capture to produce .hdr radiance files (Radiance: a software program used to simulate lighting conditions). These files are 32-bit files (4,294,967,296 possible shades of gray) and are produced by a software program that calculates pixel luminosity based on a series of differently exposed images (the two beach photographs at the top of this page were part of a series of 7 exposures, each taken on a tripod and spaced by one f/stop). Monitors and photo paper cannot render these values accurately, and therefore another program is required to apply a tone curve to bring the 32-bit data into the space of an 8-bit image. This is called tonemapping and is responsible for the “HDR look.” Tonemapping, however, is only one way of masking many exposures into a single, simple file that renders the full range of light values.
3). HDR photography is the set of your available tools to harness the incredible potential of the capture and processing methods mentioned in #2. The name HDR is a label, and a useless one. Forget its name, forget all the lousy images you’ve seen and recognize its potential. Learn how to bend it to your will. People used to withstand heated mercury vapor to produce Daguerreotypes – you can wade through a little shit to master HDR.
Yes, there are a lot of images with that heavily tonemapped look (including my early iterations); yes, there are a lot of people who think that the 8-bit standard “S” tone-curve is all photography should ever be; yes, there is a Zen-like balance and beauty to the fragile and frustrating emulsion films of the past century. But there are also few innovations that have so thoroughly revolutionized their field as the digital camera has photography, and increasingly sophisticated sensors and processors will allow something like “HDR” to further revolutionize digital photography within the next 10 years.

For those who say, “Traditional capture and processing methods will always reign supreme; look how successful they are at the moment – no client is knocking down the doors for something else,” I counter that film photography was no upstart when digital crept into the picture full force. Film photography was a juggernaut and the cash cow of hugely influential global corporations. Digital threatened the supply/demand chain of film, photoprocessing, paper prints, etc., and still supplanted film. By comparison, migration to higher resolution, higher dynamic range capture is a whisper on a scream.

Where my landscape photography interests are concerned, those “traditional” capture and processing methods, which were once 8×10 inch view cameras and masterful dodging and burning, are now recapitulated with 35mm and medium format DSLRs and post-capture processing. Don’t expect everyone’s family photographs to become heavily saturated and harshly tonemapped images, but expect sensors to graduate to 16-bit, 45 dB capture and then beyond (medium format digital, after all, is already there). Experimental CMOS and CCD sensors are capable of adaptive 120+ dB capture – i.e., on-chip HDR. In-camera, on-chip, one-shot >16-bit captures are coming – to master the beast you will have to understand how and why these images are different from current 14-bit RAW capture.
Finally, to those who would avoid HDR at all costs, I will say that we need you the most. HDR has become popular for its far-out effects, but it is capable of so much more. I strive to find that sweet spot where image processing recapitulates properties of human sight – not all properties, but enough to make the connection between photography and memory stronger. We need you to help us develop methods for the sensible production of HDR/tonemapped images. You have no room to complain if you don’t try.
The pages of The Golden Sieve are filled with images made with a variety of techniques that might rightfully be called “HDR.” We will learn about all these methods soon, but first we must understand dynamic range, eyesight, camera-sight and which situations call for an increased dynamic range.
What HDR isn’t.
HDR is not a universal “un-suck” button that will fix or permanently improve your photography. Great photography is the result of great ideas and a combination of artistic expression and technical expertise. Image composition is the most important factor in whether or not a photograph “works.” This is the key to photography. It is really that simple. No one is born a great photographer just as no one is born a great writer. Photography is a form of self-expression and must come from a pool of experience, memory and heart.
HDR is not a wasteland of talent or interest either. There are purists who see tonemapped images and have an instantaneous and strongly negative reaction, but so too there are film purists who see digital photographs and run for the hills, and black and white purists who see color photographs and are left flat. “HDR” photography is not your punching bag; if you don’t like it, then close your browser and go make some of your own beautiful imagery.
I’ve gotten emails consistently from incoming University of Chicago undergraduates who found my work by searching for photographs of their future academic home. They all seem to want to know what I did at U of C that most improved my photography. Which classes I took, etc. I wonder what they think when they hear that I never took a class or joined a photography club. Despite wonderful experiences in photowalks and workshops, photography is this introspective, personal experience for me that needs no instruction or company. I do, however, have vivid and wonderful memories of sitting behind the circulation desk of Harper Memorial Library finishing a problem set or paper as fast as I could so I could sit in peace for a while and read about photography. I would then grab my camera and run out into the world and see if I could do better than what I’d seen or apply and improve upon what I read. I am the same way today. The only thing you have to do to be a great photographer is to live a life and tell your story.
What is dynamic range?
Your eyes and your brain are very fancy – they do things that no piece of electronics has yet been designed to do. You see and hear over an incredible dynamic range, capable of seeing detail in heavy shade during high noon on a clear day and hearing a whisper over a freight train. Dynamic range is a measure of the difference in intensity between two power sources. Lovers of math will find an (overly) thorough exposition further on in this page. For photographers hoping to re-create something of the primary visual experience, the drawing below illustrates a huge problem. Each pixel in the drawing below represents 1,000 distinct shades of gray:
That enormous green box means you can read a paperback book in faint moonlight (seriously, you can) and in very intense noon sunlight on the equator during the equinox on super-glossy, 24-lb white paper lit with a one million candela flashlight . . . etc etc. If the green pixel at the top left were to represent pure black and the one at bottom right pure white, then you would be able to differentiate one billion shades of gray. See the tiny 4 pixel by 4 pixel square on the right side of the image – this is what my camera sees (careful readers will note I did not use the term dynamic range here – more later). The 1 pixel by 1 pixel dot below? That is what your monitor can show you.
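If you like to check arithmetic, here is the cartoon’s bookkeeping in a few lines of Python. The 1000 × 1000 size of the green box is my assumption (the billion-shade claim implies it); the rest follows from the numbers above:

```python
# Back-of-envelope arithmetic behind the cartoon.
shades_per_pixel = 1_000

eye_pixels = 1_000 * 1_000               # assumed size of the green box
print(eye_pixels * shades_per_pixel)     # 1,000,000,000 shades (10^9)

camera_pixels = 4 * 4                    # the tiny square
print(camera_pixels * shades_per_pixel)  # 16,000 shades (~2^14 = 16,384)

monitor_pixels = 1                       # the lonely dot
print(monitor_pixels * shades_per_pixel) # 1,000 shades
```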
So how can we overcome this problem? Well, there are many answers – one of the newest (in the historical context) and most exciting/controversial answers is something called HDR photography. There is an abiding sense of accomplishment that comes from fitting that huge green space into that tiny pixel of a monitor or photographic print – all roads lead to the same euphoria and wonderment – be they HDR, fill-flash, graduated neutral density filters, transcendent light, and so on. Some of the most amazing, breathtaking and important images of all time, however, do not need to cram any green square anywhere – they succeed by placing that tiny, tiny one pixel dot exactly where it ought to be. Not all images can be so masterfully composed and executed, nor are all scenes amenable to such treatment; therefore cram we must.
Why use HDR?
Close your eyes. You are standing on the edge of America with your toes stuck in the sand and the mighty Pacific lapping at your ankles. The setting sun and its golden light run across the wet sand and waves and you say to yourself “This is one of the most spectacular sunsets I’ve ever seen!” You snap a picture, hoping to show your family and your friends, hoping to relive a shadow of that moment’s former glory in the future.
At home you load up the photograph you took with your digital camera and are wholly disappointed. Maybe the image looks like the one on the left and the sun and sky “look right,” or maybe it resembles the image on the right and you captured the waves properly, but the sky looks like no sky you’ve ever seen.
You show it to your friends and their hearts don’t start fluttering like yours did when you saw the real deal. “You just had to be there,” you say and you think to yourself that this one “just didn’t come out.”
When you were on the beach your eyes were darting from place to place, pupils dilating and constricting, transmitting more or less light to your retina. You moved your head from here to there, and together your retinas and your brain were capturing and rendering enormous dynamic range. Your eyes are capable of something called visual masking, which basically means you can see the texture in those rocks at the same time that you can see the color of the sun – the real trick here, and the secret to why “HDR” photography is capable of making your friends’ hearts beat faster, is that your camera cannot mask like your eyes without help.
Let’s use some computer processing to mask those images and create what some would call an “HDR” of the beach at sunset:
For our purposes, dynamic range is the difference between the brightest and darkest light sources in that beach photograph. Stated simply (a highly detailed explanation comes later), your eyes have a massive dynamic range (~90 dB, see below) and your camera, LCD monitor and photo paper do not. In order to re-create some of the magic from that beach sunset, we need to mask our photographs. There are so many ways to do this that listing them would be like mentioning every major photographic technique of the last century: dodging and burning, Ansel Adams’ “zone system,” flash, neutral density filters, etc, etc, etc. Just add “HDR” to that list and you’ll now appreciate why statements like, “I don’t like any HDR photographs!” are absurd. Have you ever said “I don’t like any photographs that were made with a flash!”? (If you have ever actually said the latter, then you need to view the work of Gjon Mili; if you still feel that way, I think painting might be for you.) You might not like my HDR images, you might not like any you’ve seen, but for reasons that will soon become clear, you might have seen plenty you love without knowing they are “HDR.” Keep your mind open; close-minded people are exceedingly boring. Boring people take boring pictures. Don’t be boring.
Dynamic Range versus Bit Depth – a complicated topic explained.
After some great email exchanges I think I understand this business a lot better and have this to say – the dB in terms of output voltage is the relevant value here. The following discussion is still very useful, however, as it attempts to distinguish between tonal range, bit-depth and dynamic range (dB in terms of light intensities). Read here for the updated understanding of the relevant electronic dB.
I was careful not to claim that my camera’s dynamic range was equivalent to what it “sees” above for one reason. In the cartoon that contrasts your visual dynamic range with what your camera sees, I opted to keep the difference between shades of gray equal. That is, if my very expensive camera saw like your eyes – it would see 16,384 different shades of gray that together would represent that four-by-four pixel square. Those 16,384 different shades of gray are a function of the camera’s “bit depth.” My D700 takes 14-bit images, meaning 2¹⁴ different gray values are possible in my RAW files . . . what’s that you say?
“You equated bit depth with dynamic range, they aren’t the same.”
Fine. But I was only trying to compare human sight to “camera” sight.
“Even so, if your camera took really REALLY big steps between gray values, it could see the sun and the moon and a black cat in a coal mine all at the same time.”
Sure, but these photographs would be really boring; it would be like a paint-by-number that only uses three numbers.
” . . . “
For example – here are two images that cover the same dynamic range – the extreme left and right of each image have the exact same values, but the top image has 254 values between the black on the left and the white on the right (8-bit; 2⁸ values) as opposed to the bottom image’s four values between black and white (roughly 2.6-bit; six values in all).
(You could imagine that these images were both taken with the same sensor, but with different bit-depth A/D conversion – more later).
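You can rebuild these two ramps yourself; a minimal numpy sketch (the 512-sample width is my arbitrary choice):

```python
import numpy as np

# Two ramps spanning the same black-to-white range (same dynamic range),
# quantized to different bit depths (different tonal range).
ramp = np.linspace(0.0, 1.0, 512)        # a continuous, "analog" ramp

ramp_fine = np.round(ramp * 255) / 255   # 8-bit: 256 distinct values
ramp_coarse = np.round(ramp * 5) / 5     # ~2.6-bit: 6 distinct values

# Both ramps still start at pure black and end at pure white.
print(len(np.unique(ramp_fine)), len(np.unique(ramp_coarse)))  # 256 6
print(ramp_fine.min(), ramp_fine.max(),
      ramp_coarse.min(), ramp_coarse.max())                    # 0.0 1.0 0.0 1.0
```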
“So you admit you lied.”
“What, then, is the dynamic range of your D700, now that we have established that you are a liar?”
40 dB (decibel)
“What the hell is a decibel?”
It is one-tenth of a bel.
“I’m serious – aren’t those units of sound?”
Units of sound? What are you, high? The bel and the decibel (1/10 of a bel) express the logarithm of the ratio of two physical intensities (power, pressure, etc.):
X (dB) = 10 · log₁₀(I_high / I_low)

where I_high and I_low are the intensities of the two fields or power sources (in our case, light sources).
Therefore, 40 dB simply means that the ratio between the darkest and brightest light sources my D700 can render is about 10⁴. If the human eye can see 90 dB worth of dynamic range, then you and I can render sources of light differing in intensity by a ratio of 10⁹, i.e. a factor of (in Dr. Evil voice) one billion. Does that answer your question?
“Yeah, but what are the units of the dB?”
Because it is a ratio of two numbers that have the same unit, the dB is a unitless measure.
“Ha ha! You’re unitless.”
. . . . I’m speechless, thank you for that. Seriously though, this is a confusing point; it is easy to (wrongly) conclude that the dynamic range your sensor can sample is equal to its bit depth. The quantum world is discrete, and one may argue therefore that the universe is inherently digital; however, as far as your sensor is concerned, the world is analog. You are sampling quanta over time and area, and therefore the values a given photosite can sense are (for all intents and purposes) continuous.

So then, photons hit your pixels and generate voltage, and the analog-to-digital converter takes these voltage values and places them into bins. The number of bins is the bit-depth of your image. Because engineers are smart, exacting people, they build A/D converters to follow a meaningful tone-curve. That is, in a perfect world, the A/D in your camera bins like this: photosite 1 receives X photons, generates X′ volts and is binned to X. Photosite 2 receives 100X photons, generates 100X′ volts and is binned to 100 times the value of X. So then, your 14-bit D700 RAW file has a total of 16,384 tonal values and, if the A/D works perfectly, I_max/I_min is around 16,384 and your dynamic range is about 42 dB. I think it is important to note here that it is possible for a sensor to have a 45 dB range and its output only 14-bit; because luminance values are relative, there is no absolute way to measure the dynamic range of your sensor simply by knowing the bit depth of its output. We are just fortunate to live in a world where engineers have made things relatively efficient – a camera can “see” in 42 dB – better have it output 14-bit. Noise, of course, will subtract from the actual dynamic range you measure as surely as leaving the lens cap on will ruin a good image.
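The arithmetic is compact enough to fit in a few lines of Python; a sketch, assuming the idealized, perfectly linear A/D described above:

```python
import math

def dynamic_range_db(i_high, i_low):
    """Dynamic range, in decibels, between two intensities."""
    return 10 * math.log10(i_high / i_low)

print(dynamic_range_db(10_000, 1))  # 40.0 dB -- roughly a D700 sensor
print(dynamic_range_db(1e9, 1))     # 90.0 dB -- roughly human vision

# An ideal linear A/D with 14 bits of output spans a ratio of 2**14:
print(dynamic_range_db(2**14, 1))   # ~42.1 dB
```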
The good folks over at dpreview.com empirically measure the dynamic range of the cameras they review, and the D700, it turns out, is very close to 40 dB in actual sensor capacity. Much ado has been made about how other cameras in Nikon’s lineup stack up to the D700 (turns out the circa $600 D5000 has just as much dynamic range – “Oh cruel fate to be thusly boned – why oh why did I spend so much money on my D700,” is the pejorative implication). Sometimes I wonder whether all the obsession over dynamic range will drive some market-savvy engineers to vastly over-sample their sensors’ capabilities and output 16-bit images from a 40 dB sensor. This is why I think it is valuable to know how the guts of your camera physically work and how to objectively measure its properties, or to support great (hopefully unbiased!) review websites which do.
There’s another point here that deserves mentioning – that tiny one pixel point up there in the cartoon at the top of this page. That is the dynamic range of our most oft-used digital photography output device – the monitor. There is less dynamic range in the output than in the real scene or our digital photograph of that scene. What gives? How do we overcome yet another obstacle? In my mind, this one is an easy one to overcome, and people have been doing it with photopaper (the output device of choice for emulsion photography for the previous 100 years or so) for a long time: all you need to do is apply the right tone-curve, and even an 8-bit JPEG can render an effectively infinite dynamic range. Therefore the purpose of any technique we call HDR is to mediate the visual masking needed to render bright textures and dark ones in the same image. Surely you have to combat posterization (that effect you see when the image looks like a lousy paint-by-number) and overcome color limitations, but this is preferable to building ever-brighter displays.
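To make the tone-curve point concrete, here is a minimal sketch of a global logarithmic curve that squeezes a scene of arbitrary dynamic range into 256 output codes. Real tonemappers use far more sophisticated, locally adaptive curves; this shows only the principle:

```python
import numpy as np

def log_tone_curve(radiance, out_levels=256):
    """Map linear radiance of arbitrary dynamic range onto
    out_levels output codes with a simple logarithmic curve."""
    r_min = radiance[radiance > 0].min()
    r_max = radiance.max()
    mapped = np.log(radiance.clip(r_min) / r_min) / np.log(r_max / r_min)
    return np.round(mapped * (out_levels - 1)).astype(np.uint8)

# Even a 90 dB scene (a 10**9 : 1 intensity ratio) lands inside 8 bits:
scene = np.geomspace(1.0, 1e9, 1000)
ldr = log_tone_curve(scene)
print(ldr.min(), ldr.max())   # 0 255
```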
Imagine for a moment Apple releases an LED monitor that has a contrast ratio of 1,000,000,000:1. You get a second mortgage and buy the thing, set it up in your living room and proceed to irradiate your face. Seriously – sometimes I think current monitors are already too bright; I have to turn a desk lamp on at night when editing photos as it is! Whereas there is certainly a market for monitors with increased dynamic range, I think we will always be talking about limited contrast ratios, if only to keep the amount of monitor-induced sunburn to a minimum. Of course, someone could come up with an incredibly high dynamic range monitor that accomplishes its visual feat by decreasing the value of I_min, but then again we have problems with glare, comfort, etc, etc, etc.
“Still, my point stands – dynamic range does not equal bit depth”
You’re right! So then you’ve proven my point: HDR is a lousy name. Most “HDR” photographs produced by people with DSLR cameras are constructed from a series of differently-exposed images of the same scene. A computer program takes these 12- or 14-bit images and turns them into something called a radiance file. But no “HDR photographer” ever publishes .hdr files on the internet. Instead, these “HDR” photographs, including mine, are presented as 8-bit images that are “tonemapped” from their 32-bit source .hdr radiance files. Tonemapping is a process of recapitulating detail in the shadows and highlights to produce a photograph without white or black clipping – it’s one way of producing the tone-curve we were after in the previous section. We name these files HDR photographs vis-à-vis their .hdr source file, but as you just said – dynamic range and bit-depth are not equal. I have seen THOUSANDS of tonemapped .jpg photographs from .hdr files of scenes with less than 40 dB of dynamic range just as surely as I’ve seen moving, transcendent, 8-bit images made from 90 dB moonrises.
Then again, what is the purpose of imaging? If you had a monitor or brain implant that could produce sensation that was indistinguishable from first-hand experience, the careful and exciting balance of fitting the right part of that green square into the frame would be lost. Limitations make things interesting.
Up to now, then, I’ve spent far too much time considering the conceptual construction of digital image files and haven’t yet explained how HDR “works” or how to produce HDR images.
How does HDR solve the dynamic range problem?
Through masking, of course! Dolby holds a patent on a technology capable of producing extremely high contrast ratios in monitors (the company that developed the technology was called Brightside) – but I suspect that this technology will only trickle into the consumer realm as modestly increased standard contrast ratios, for the reasons detailed above, i.e. extremely bright or dark displays will increase eye strain. Prototypes were around $50K (according to Wikipedia).
As we mentioned before, your eye doesn’t have a fixed “exposure” across your retina; instead, the image in your brain is the result of adaptive processes that occur both physically inside the eye and via nervous system processing. Your camera does not adaptively adjust exposure, and this is both a good thing and a bad thing. This is good because it means you have total control over the type of image you make. My Nikon D700 and many other cameras allow full manual operation, allowing the user to set the shutter speed, aperture, ISO speed, etc. You can drop shadows to black or bring them to white – total. control. This is bad because it means that there is a fixed number of photons a pixel must receive to be recorded as anything other than black (or noise) and a fixed number of photons beyond which the same pixel can’t read out anything but pure white. Therefore we must manually do what your eye does naturally every minute of every day. We must generate a photograph that records all pixels as something other than pure white or pure black, and then we must apply a rather complicated tone curve to mask the range of light values in this image into something your monitor can display.
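One way to let software do that masking for you is exposure fusion, which blends bracketed frames directly without ever building a radiance file. A minimal sketch using OpenCV’s implementation of the Mertens algorithm – my choice of tool for illustration, with hypothetical filenames:

```python
import cv2

# Hypothetical bracketed series of the beach, darkest to brightest.
files = ["beach_-2ev.jpg", "beach_0ev.jpg", "beach_+2ev.jpg"]
frames = [cv2.imread(f) for f in files]

# Mertens exposure fusion weights each pixel of each frame by its
# contrast, saturation and well-exposedness, then blends the stack.
fused = cv2.createMergeMertens().process(frames)

# process() returns floats in roughly [0, 1]; scale to 8-bit to save.
cv2.imwrite("beach_fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```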
That photography and human vision exhibit dramatic disparity was never a mystery. William Henry Fox Talbot invented negative-process emulsion photography in the mid-1800s. He called photography “the art of fixing a shadow.” I see two prominent interpretations of this simple but curious definition. One, Talbot refers to the chemical process of fixation and the ability of his method to forever trap a shadow into silver crystals on film. On the other hand, Talbot may have meant that photography is successful when one is able to overcome its fundamental shortfall: that it doesn’t replicate human vision. Deep shadows are rendered as pure black. Bright lights are pure white. Artful photography must necessarily use or overcome that limitation. I hope his statement had dual meaning, as I rather like both interpretations.
Nor is the disparity between human vision and photography something without myriad potential solutions. I mentioned some of these earlier. Film photographers once used fill-flash, exacting exposure and graduated neutral density filters to increase the dynamic range that they were sampling (or, rather, to decrease the incoming dynamic range to one that film could be expected to capture). HDR is the newest solution to this problem, but it raises new problems which can open the gap between primary perception and secondary imagery further. The day is coming when this page will be a relic of a time when digital sensors did not output 32-bit RAW files. The Institute of Electrical and Electronics Engineers (IEEE) lists over 8,000 abstracts, papers, talks, books and courses related to “high dynamic range.” Research appears to be focused on the development of algorithms and sensors capable of the HDR photographer’s wildest dreams. Surely a standard is a ways away, but it is coming.
How to produce HDR images.
This is where I detail and provide links to various software programs that can generate these amazing files. First, however, I need to note that the most important factor is input. Input into these programs means photographs. Only great photographs make great HDR photographs. Sorry, that’s life. As photographers, we are lucky that our cameras can function as data acquisition devices in the field where no processing can yet occur and our computers do not take photographs. Our two modes of production are mutually exclusive. Of course this means that one is better off shooting more frames than spending more time processing when he or she starts out, but there is a balance to be achieved. How to capture for HDR is no small subject, but it is made easier by some of the automatic features of our cameras.
Auto Exposure Bracketing (AEB) gives those of us with DSLR cameras the ability to set the camera on a tripod (or some other solid support) and create three or more images with differing exposure values. Here we have a listing of every camera model and its AEB capabilities on hdr-photography.com. You can look up your model and see if it has AEB capacity – from there you can find out how to activate this functionality in your manual. I will say that the D200, D300, D300s, D700, D2h, D2x, D3, D3x and D3s are all capable of doing 9 AEB frames, each separated by one f/stop (and various combinations of fewer than 9 captures and less than one stop). My D700’s AEB is activated by holding the “fn” button on the front (lens side) of the camera, below the lens on the shutter-release side, and rotating the rear selection wheel. Exactly how these exposures are separated (by one-third, two-thirds or one full stop) is controlled by the front selection wheel. Most consumer- and prosumer-level DSLR cameras (Nikon D50, D70, D80, D90, D5000, and Canon Digital Rebel (all flavors), 10D, 20D, 30D, 40D, 50D, 5D, 7D) are capable of doing 3 different frames, each separated by two stops. These three exposures cover the same range of light values as five of my D700’s AEB frames.
For my purposes, five frames at ±1 EV (one f/stop) capture more than enough dynamic range to produce images with detail, texture and color in all pixels. On rare occasions, the dynamic range in a scene exceeds that captured by these five frames and I then shoot 7 or even 9. Therefore, the 3-frame, ±2 EV AEB function that is present on almost any DSLR is more than adequate for 99% of HDR photography, and the ability to increase that range is just a luxury.
So, let’s review. You are out and about in the world with your camera and your tripod and you want to give HDR a try. You put your camera on the tripod and the tripod inside Stanford Memorial Church. The dynamic range here is off-the-wall. There is bright sunlight streaming through the stained glass and deep shadows in the pews and alcoves. You activate AEB to take five exposures: each one twice as long as the previous (one f/stop).
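In shutter-speed terms, the bracket looks like this (the 1/15 s base exposure is just an illustration, not a metered value from the church):

```python
# Shutter speeds for a 5-frame bracket in 1 EV steps around a base
# exposure. Each frame is twice as long as the one before it.
base = 1 / 15                   # seconds; hypothetical metered value
for ev in range(-2, 3):
    print(f"{ev:+d} EV: {base * 2**ev:.4f} s")
# -2 EV: 0.0167 s ... 0 EV: 0.0667 s ... +2 EV: 0.2667 s
```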
I took a (lousy) snap with my iPhone to show you the setup:
Your camera settings look like this:
You’re shooting RAW images because, as advanced as the chip inside your camera is, you don’t want it doing your processing for you; f/8 because somewhere between f/8 and f/16 most lenses have an optimal sweet spot of color rendition, resolution and contrast; manual because you want to make exposure decisions in such a dramatically lit and difficult-to-meter space; and you are using AEB (here the camera is set to 9 frames, but I only used five – see below).
You use a remote release because it means you don’t touch the camera during the exposures (no touching!) and depress the shutter release.
“cliclick . clickclick . . click . click . . . click . . click . . . click . . . . click”
The following image previews appear on your LCD; these are the five exposures you need to construct an HDR image.
Now what we are going to do is put those five images through a computer program that will map each pixel to a new value more closely approximating its “true” luminosity. Rather than grabbing screen shots of each step along the way, I made a screencast video so you can watch exactly how I finish this photograph by correcting white balance, generating an HDR (in Photomatix Pro – other program tutorials at a later date), tonemapping that HDR and then blending it with a series of the “normal” exposures, and finally performing some transformations to boost local and then global contrast and color. The last steps utilize my HDR-glow photoshop action, which can be purchased here. Enjoy! I hope I don’t put you to sleep!
Here is another shorter video demonstrating the action’s capabilities:
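For those who would rather read code than watch video, here is a rough open-source sketch of the same merge-then-tonemap pipeline. I use Photomatix Pro in the videos; the OpenCV calls below are my substitution, and the filenames and exposure times are hypothetical:

```python
import cv2
import numpy as np

# Hypothetical 5-frame bracket of the church, one stop apart,
# with the corresponding exposure times in seconds.
files = ["church_1.jpg", "church_2.jpg", "church_3.jpg",
         "church_4.jpg", "church_5.jpg"]
times = np.array([1/125, 1/60, 1/30, 1/15, 1/8], dtype=np.float32)
frames = [cv2.imread(f) for f in files]

# 1) Merge the exposures into a 32-bit floating-point radiance map.
hdr = cv2.createMergeDebevec().process(frames, times)

# 2) Tonemap the radiance map back into a displayable 8-bit image.
ldr = cv2.createTonemapReinhard(gamma=1.5).process(hdr)
ldr8 = np.nan_to_num(ldr * 255).clip(0, 255).astype("uint8")
cv2.imwrite("church_tonemapped.jpg", ldr8)
```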
How HDR and human sight interact.
The key to understanding how you see is understanding why you see. The answer to this is evolution. What forces shaped the development of sight in humans? What selections were at play? The answers to these questions are varied and very interesting; pattern, shape and color recognition are evolutionary developments arising from selective pressures early in primate/human evolution. One of my favorite examples of this is face-recognition. Human beings have an instinctual and powerful ability to recognize faces – at an early age a child can easily pick out his or her mother in a sea of other adult human faces. We see faces on the moon, in everyday objects and even in burn patterns on toast or tortillas.
Peculiarities of human sight do not end with facial recognition. Constancy phenomena are some of the strangest aspects of qualia I’ve ever run across. Look at these optical illusion screen shots I took from eChalk (click the images to be taken to a pop-up menu of their color constancy illusions).
Clearly we process images in a highly context-dependent way. We are very bad at judging absolute color or luminosity values, but we are very, very good at seeing relative differences. This is why the squares in the top example appear to differ so drastically in luminosity even though they are the exact same shade of gray, or why the squares in the center of the crosses in the bottom example look like two completely different colors. Animals have camouflage, and we have evolved complicated methods for distinguishing motion, relative reflectance and relative luminosity in scenes with high dynamic range. Predators can pick out prey, and prey can see predators, despite great natural camouflage. Moreover, we can see very small things and very large things and interpret scale and structure easily – this helps us understand at whose face we are looking and what mountain face is on the way home. I find these illusions really fun and a great way to facilitate thinking about perception.
What can I achieve with HDR?
The answer to this lies with you. My relationship to HDR photography has changed over the years for the better. In my opinion, HDR is best used to more accurately recapitulate the various beautiful textures, colors and light levels than previous modes of visual masking allowed. Why HDR “works,” that is, why so many laypeople immediately have such positive reactions to HDR images, is a function of its ability to hit the sweet spot of color, constancy and dynamic range that makes first-hand visual experience so pleasing in the first place.