I’ve often wondered about HDRI files and their advantages when compared to good old JPG and PNG images. I’ve also wondered how difficult it would be to create full HDRI spherical environment backgrounds to use in 3D software, as the task is already difficult enough with regular images; so I set myself on a discovery path.
A bit of background (no pun intended…)
In case you still don’t know what HDRI is, the acronym stands for High Dynamic Range Image and, in simple terms, HDR images store much more information than your average JPG can, particularly within the bright and dark areas of the shot. Whereas JPG, PNG and other LDR (Low Dynamic Range) formats store 8-bit information, HDR files work in 16- or even 32-bit modes. As a result, the file size is typically much larger. HDR images are usually created by taking multiple pictures of the same scene at different exposures, and then combining them into one HDR file.
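The storage difference is easy to demonstrate. Below is a minimal Python sketch (the function names are my own, purely for illustration) showing how an 8-bit LDR value clips any radiance brighter than pure white, while a 32-bit float simply keeps it:

```python
def store_ldr(radiance):
    """Quantize linear radiance to 8 bits: 1.0 is the brightest storable level,
    so anything brighter clips to pure white (255) and the detail is lost."""
    return min(255, round(radiance * 255))

def store_hdr(radiance):
    """A 32-bit float channel simply keeps the value: no clipping, no quantizing."""
    return float(radiance)

bright_cloud = 4.7  # a cloud almost 5x brighter than "white"
print(store_ldr(bright_cloud))  # 255 -- indistinguishable from any other highlight
print(store_hdr(bright_cloud))  # 4.7 -- the true brightness survives
```

This is exactly why the clipped highlight can never be recovered from a JPG, no matter how much you adjust it afterward.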
Bright & Dark Areas and High-Contrast Photography
Here’s a situation we’ve all faced at one time or another: You find a spot that you like and you take a picture, only to realize that you can’t see the details as well as you hoped or as well as you see them with your naked eye.
So you figure: Oh well, I’ll just adjust the picture in Photoshop. While this may work for regular photography, it isn’t optimal if you’re planning to use the image for Lighting & Reflection purposes in 3D software.
The trick is to take multiple shots of the scene by varying the shutter speed or aperture (or using exposure compensation) to produce brighter and darker shots. I’ll discuss these techniques in greater detail in a moment, but essentially, you need an underexposed shot to help with the details of the bright areas such as the sky and clouds in this example. However, while the bright areas regain detail, the shadow areas get impossibly dark.
So you go the other way and set an overexposed shot. This time, the shadow areas under the bridge display beautifully but the rest of the scene is washed out and the clouds simply disappear.
However, when you combine the three pictures into an HDR image, and use the resulting image in 3D software, you notice that as you adjust the light balance and exposure of your 3D scene, the HDR picture responds much better than your original picture. Certainly the clouds and sky are more visible in the HDR image (and in your renderings), even when it’s dark; while shadow areas are clearer and more vivid as the exposure gets brighter. Even glow effects are more natural in HDR photography.
Processing HDR Images
Once you take the same shot of a scene at different exposures (a process known as Bracketing), you can use HDR software to blend them together. Typically, three shots (one normal, one bright and one dark) are enough to get the job done but you can take more pictures by varying the exposure range even further.
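As a rough sketch of what bracketing does to the shutter, here is a small Python helper (my own naming, not tied to any camera API) that lists the shutter-speed denominators for a centered bracket, where each +1 EV doubles the exposure time and each -1 EV halves it:

```python
def bracket_shutters(base_denom, ev_step, n_shots):
    """Shutter-speed denominators for a centered exposure bracket.

    A +1 EV shot doubles the exposure time (so 1/160 becomes 1/80);
    a -1 EV shot halves it. Aperture stays fixed, as in manual mode.
    """
    center = (n_shots - 1) / 2
    offsets = [ev_step * (i - center) for i in range(n_shots)]
    return [base_denom / 2 ** ev for ev in offsets]

# Three shots at +/-2 EV around a base exposure of 1/160 s:
print(bracket_shutters(160, 2, 3))  # [640.0, 160.0, 40.0]
```

In other words, the dark shot is 1/640 and the bright shot 1/40, four stops apart from each other.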
As mentioned, I will discuss these techniques in a moment, but first, I want to differentiate between HDRI processing aimed at what I call a “beauty shot” and outputting a true HDR image to use as an environment map in 3D software, which is my goal here.
Some cameras nowadays have the capability to process HDR beauty shots natively but if that doesn’t apply to your equipment, then you can use computer programs which often give you better results anyway.
When you google the words: “HDRI software”, you’ll find many applications that can be used to process HDR images. They all work the same way, by blending pictures taken at different exposures. Some of these applications are free; others cost money but most are around the $100 mark. Arguably, the most popular is a $99 program called Photomatix Pro by HDRsoft (http://www.hdrsoft.com), but there are others.
When you load bracketed photos in Photomatix Pro and blend them together, you get to adjust the resulting image by using a process called Tone-Mapping. The resulting image is well-balanced and all areas show the necessary detail, even those that were previously at odds because of high-contrast.
In fact, Photomatix Pro has various processing methods including presets and a multitude of controls to ensure the resulting image is quite to your liking.
However, this procedure is meant to create a beauty shot that is pleasant to look at but that is still essentially saved as an 8-bit picture, which again is not optimal for 3D purposes. These may work well as camera backplates in compositing software but not so much for lighting and reflections in 3D software. For that, you had better save your files in true 32-bit mode to a format such as .exr or .hdr.
In Photomatix Pro, this is an option you choose BEFORE you process your bracketed images, in fact as soon as you load them. The option to show the 32-bit image is disabled by default, so you would need to enable it. This ultimately lets you save a true HDR image in .hdr or .exr formats prior to tone-mapping the solution. Once you tone-map the beauty shot, you lose the ability to save your file to 32-bit format.
Now that we have covered the main concepts of HDRI, let’s see how we can apply them to create full 360-degree 3D environments.
As you might have guessed, you first need a camera, and a good one at that. Ideally, you want to use a DSLR camera for best results. However, there are other features to look for:
The camera you use must be capable of bracketed shots, i.e. taking a rapid series of pictures at different exposures. Not all cameras have that functionality. Certainly, most point-and-shoot cameras do not, and even some entry-level DSLRs are missing this essential feature as well. In my case, I used an Olympus E-PL5, which is known as a mirrorless camera. It is a notch below a DSLR in quality but significantly better than a point-and-shoot. It can also do bracketing with a whole range of options, shooting as many as 7 consecutive pictures at +/-2EV difference, or up to 5 pictures at +/-3EV difference.
You also need a wide-angle or, better yet, a fisheye lens to be able to shoot a full 3D panorama with the fewest number of pictures possible. Luckily, you can change the lens on mirrorless and DSLR cameras. With the proper 8mm fisheye lens, you can capture a full 360° panorama with as few as 6 to 8 shots, as opposed to 40 or more with a 24mm wide-angle.
Unfortunately, good-quality 8mm fisheye lenses can be overly expensive, sometimes costing well over $1000, and since I was on a budget, I found one under $100 on eBay. Of course, the quality is on par with the price but that’s to be expected. As I was only on a research and discovery path, it didn’t really bother me.
Next you need a tripod. This is an easier item to purchase as it is widely available. A tripod is essential to ensure there is no shift between the bracketed pictures shot at different exposures, which will inevitably happen if the camera is hand-held. This is especially true in a night shot when the settings call for slower speeds. Just make sure you use a good sturdy tripod to prevent the wind from making it wobble.
A Panoramic Head, or a pano head for short, is an additional piece of equipment that goes between the tripod and your camera. Its sole purpose is to ensure the camera rotates about the lens’s entrance pupil to reduce and even eliminate parallax errors. It is an essential piece of equipment if you want to stitch your images properly and end up with a seamless panorama.
Again, pano heads differ in price and quality and as I was on a budget (did I mention this already?..), I went with a reasonably priced product called Panosaurus (http://gregwired.com/pano/Pano.htm).
Shooting the scene
Once I had the equipment ready, I set out to find a spot that I liked. I ended up choosing a place not far from the Montreal Autodesk office. It isn’t an overly populated area, which worked to my advantage, as you want as little movement in the shots as possible.
It was a fairly overcast day with the sun refusing to pierce through the clouds, which meant the lighting was mostly ambient. Hardly any shadows showed from the flag and light poles on site.
I set up the equipment and turned to adjusting the camera settings. The first rule is to shoot in manual mode. You want to keep the same speed and aperture values as you rotate the camera around the pano head. On that day, I used an F-Stop of 16 and a speed of 1/160. This was the setup for the main shot. I also set the bracketing to +/- 2EV to get the darker and the brighter sets of images.
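For reference, an aperture and shutter speed fold into a single exposure value via the standard formula EV = log2(N²/t), which the formula presumes at base ISO. A quick sketch:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Standard exposure value at base ISO: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# The main shot described above: f/16 at 1/160 s
print(round(exposure_value(16, 1 / 160), 2))  # 15.32
# The +/-2 EV brackets then land around 13.32 and 17.32
```

Keeping those values fixed across the whole sweep is what guarantees the stacks line up later.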
In order to stitch the panorama later, you need to take photos that overlap. With a fisheye lens, you can sometimes achieve this with as few as six shots: four shots at 90° for the horizontal sweep, one shot pointing upward called a Zenith shot for the sky and clouds, and another pointing down called a Nadir shot for the ground.
You simply need to make sure there is enough overlap between one picture and the next. If there isn’t, you may need to take more pictures.
In this particular case, shooting pictures at 90° gave me a very small overlap between pictures but it turned out to be enough.
It would probably have been safer to take six horizontal pictures at 60° instead of four at 90° but as I said, it worked out for me this time around.
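The shot-count arithmetic is simple enough to sketch. In the Python snippet below, the horizontal FOV and overlap figures are rough assumptions of mine, not measured lens specs:

```python
import math

def shots_for_sweep(h_fov_deg, overlap_fraction):
    """Minimum shots for a 360-degree horizontal sweep.

    Each shot must overlap its neighbor, so the effective step per shot
    is the field of view minus the overlapped portion.
    """
    step = h_fov_deg * (1 - overlap_fraction)
    return math.ceil(360 / step)

# An 8 mm fisheye covering roughly 120 degrees horizontally, with 25% overlap:
print(shots_for_sweep(120, 0.25))  # 4 -> one shot every 90 degrees
# A 24 mm wide-angle covering roughly 60 degrees, same overlap:
print(shots_for_sweep(60, 0.25))   # 8 shots per row (and several rows needed)
```

This also shows why a narrower lens balloons the total count so quickly: the shot count scales with the inverse of the usable step, and a wide-angle needs multiple vertical rows on top of that.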
The other setting I wondered about is whether to shoot to JPG or to RAW format. By all accounts, the RAW format should be favored and the internet is filled with stories about how superior it is in terms of quality, white balance and level of detail. The problem with the RAW format is that it has no standard. RAW files differ between camera manufacturers and even between camera models. A RAW file shot with an Olympus E-PL5 such as the one I used has different specs than one shot with a Canon T5, a Nikon D5200 or even an Olympus E-M5. This inevitably leads to support issues with software applications. Case in point: I was unable to load my camera’s RAW (.ORF) files in Photoshop, even after I downloaded the necessary plugin from Adobe, which was supposedly compatible with my camera model. I was also unable to load the RAW files in the stitching program I ended up using for my workflow, so I had to bite the bullet and simply use the JPG format. Based on your camera type and model, you may have better luck than I had.
And so I set myself to shoot the six needed pictures, and with the camera set to simultaneously produce three images of each shot, I ended up with a total of eighteen images, grouped in six “stacks” of three.
Stitching the Panorama
Once I had the images on disk, I wondered which software to use to stitch them together. I will mention in passing that many cameras already offer “panorama modes” but these come with limitations: They work mostly in cylindrical mode and usually up to 180° horizontal sweeps. To create a fully immersive 360x180 spherical panorama, you need to rely on specialized software. Not even Photoshop’s “Photomerge” function will do the job.
Ironically, we had our own stitching program not too long ago; I’m speaking of course of Autodesk’s Stitcher. For some reason, we stopped supporting it and selling it which is a shame, as I thought it worked rather well. So I looked for alternatives. A very popular solution is a stitching program called PTGui (http://www.ptgui.com). It is made by a company based in the Netherlands and retails at about $100 for a single license, which is very reasonable for what it does. There is a similar product (which I ended up using) called Hugin (http://hugin.sourceforge.net) that may be missing some of PTGui’s advanced features but that has the benefit of being open source and freeware. It is very robust and gets the job done.
The interface in Hugin is easy to get used to and although it offers a “simple” interface, I found myself switching early on to the advanced interface which simultaneously shows a panorama preview and a main window to fine-tune your settings.
When I loaded the eighteen images into Hugin, I was first asked to supply the program with some additional information that it couldn’t detect in the jpg metadata. In essence, I needed to specify that the lens was a Full frame fisheye and also had to manually enter the Focal Length and multiplier values.
The second question came when Hugin detected that there were stacks of images shot at different exposures and suggested that they should be linked together. That sounded like a logical step and I accepted the suggestion.
At this point Hugin loads the images but they still need to be aligned. This is done by pressing the Align button in the Assistant tab.
From then on, Hugin analyzes the pictures and finds common control points between them in the areas that overlap. When it’s done, it stitches the images together and reports its findings. If it doesn’t find a good fit, you may need to make manual adjustments to the control points.
The output you decide upon is left to you. You can even choose to save multiple outputs in many flavors, in case you need to decide later which one(s) you want to use. After some experimentation, I found three outputs to be particularly useful:
- The “Exposure corrected, low dynamic range” option is as it sounds. It creates an 8-bit image of the resulting panorama in TIF, JPG or PNG formats.
- The “High dynamic range” option is also self-explanatory and produces a 32-bit OpenEXR file. However, if you’re more of a radiance .hdr fan, then I found the third option to be quite interesting.
- The “Blended layers of similar exposure” option renders three separate panoramas, based on the three separate exposures you fed the program. This means that you can then take the three “layers” of produced panoramas and feed them back into Photomatix Pro, HDRShop or Picturenaut to extract a .hdr file format. This is the workflow I used to produce the Duke_PMP.hdr file I ended up using in 3ds Max.
If you decide to use that workflow, you have to be careful to fuse the layers without pixel shifting or cropping, otherwise you’ll end up with unpleasant surprises as far as the seams are concerned. Using Photomatix Pro as an example, when you load bracketed photos to process, you are asked if you want to enable options such as aligning source images, removing ghosts, reducing chromatic aberrations and others.
Some of these options are enabled by default but for this workflow, which involves processing panoramas created in Hugin, you want to disable all of these options particularly the Align source images option which takes it upon itself to crop the results. Cropping messes up the seams when the panorama folds into a sphere, usually resulting in a bad vertical line when rendered.
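Under the hood, the exposure-fusion step boils down to a weighted average of per-exposure radiance guesses. The Python sketch below is a deliberately simplified, per-pixel version of that Debevec-style merge; it assumes a linear camera response, which real tools like Photomatix do not:

```python
def hat_weight(z):
    """Trust mid-tones the most; near-black and near-white pixels the least."""
    return 1.0 - abs(2.0 * z - 1.0)

def merge_radiance(samples):
    """Estimate scene radiance for one pixel from bracketed exposures.

    `samples` is a list of (pixel_value, exposure_time) pairs with pixel
    values normalized to 0..1. Each sample's radiance guess is value/time;
    the guesses are blended using the hat weight, so clipped pixels
    contribute nothing.
    """
    num = sum(hat_weight(z) * (z / t) for z, t in samples)
    den = sum(hat_weight(z) for z, t in samples)
    return num / den if den else 0.0

# One pixel seen at three exposures of a linear camera: the merged estimate
# recovers the underlying radiance even though the longest exposure clipped.
samples = [(1.0, 2.0), (0.5, 1.0), (0.25, 0.5)]
print(merge_radiance(samples))  # 0.5
```

Nothing in this sketch shifts or crops pixels, which is precisely the behavior you want to preserve when feeding Hugin’s layers through a real merger.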
A Word About the Nadir Shot…
Looking at the resulting panorama, you may be wondering about the black and white strip at the bottom of the picture.
This is in fact the pano head and the tripod from the shot with the camera pointing down. I wasn’t too worried about it because I knew there would eventually be 3D geometry that hides this area, adding to the fact that the actual problem area is smaller than it appears in the rectangular panorama. When projected onto a sphere, it’s only a problem when looking straight down.
If it really bothers you and you want to eliminate it completely, you can do so by taking more pictures and doing some fancy work with masks in Hugin. What you shouldn’t do though is try to retouch the panorama directly in Photoshop. First, it would be very difficult to do so because the bottom pixels stretch down to a point, but you could also mess up the seams by altering the edges.
A nifty workaround is to convert the rectangular panorama into a vertical cross, which lets you fix the problem area easily, using the stamp tool and other Photoshop goodies to replace the pano head with gravel and dirt.
Once that’s done, you can then convert it back to a rectangular image to get a clean panorama to use in 3ds Max.
The problem with this technique is that not all HDR programs have the functionality to do panoramic conversions. In my case, only HDRShop enabled me to do this, which is why I called it the Swiss army knife of HDR software. Again, I reiterate the fact that this may not be a big issue as you’re likely to have 3D geometry that will hide most of the ground, certainly when pointing straight down.
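For the curious, the panoramic conversion itself is just spherical trigonometry. The sketch below maps a point on one cube face to normalized equirectangular coordinates; the face names and axis conventions are my own assumptions, and real tools such as HDRShop may orient faces differently:

```python
import math

def cube_to_equirect(face, u, v):
    """Map a point on a cube face to normalized equirectangular coords.

    u and v run from -1 to 1 across the face. Returns (s, t) in 0..1,
    where s is longitude across the panorama's width and t is latitude
    up its height.
    """
    dirs = {
        "front": ( u,   v,  1.0),
        "back":  (-u,   v, -1.0),
        "left":  (-1.0, v,   u),
        "right": ( 1.0, v,  -u),
        "up":    ( u,  1.0, -v),
        "down":  ( u, -1.0,  v),
    }
    x, y, z = dirs[face]
    lon = math.atan2(x, z)                            # -pi .. pi
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))   # -pi/2 .. pi/2
    return (lon / math.pi + 1) / 2, (lat / (math.pi / 2) + 1) / 2

# The center of the "down" face lands at the bottom edge of the panorama,
# where the nadir (tripod) pixels get stretched across the full width.
print(cube_to_equirect("down", 0.0, 0.0))  # (0.5, 0.0)
```

The single nadir point mapping to an entire row of the rectangular image is exactly why retouching the equirectangular file directly is so painful.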
Using the Panorama in 3ds Max
In 3ds Max, I started a new project and made some basic adjustments such as using the mental ray rendering engine and setting up a decent render resolution at 1280x720 pixels.
To set up the panorama, I started by loading it as an Environment Background in the Environment and Effects dialog. To display it in the viewport, I simply set the viewport background to display the current environment used in the scene. Although not always obvious, the background actually needs to be mirrored to be viewed properly. This becomes clear when you view or render an area that contains text.
The reason is that the image is projected onto the outside of a virtual sphere, but you’re actually inside the sphere looking out. So, by instancing the environment background into the Material Editor, you can flip it horizontally by specifying a U-Tiling factor of -1.
With the background in place, I still needed a light source for the 3D scene, so I used a Daylight System. However, I didn’t let the Daylight System drive decisions for me, since it likes to set its own values (and for that matter its own environment background in the form of mr Sky). Instead, I simply set the time of day to 16:00 or 4pm, which was roughly the time I took the pictures. As for Exposure Control, I set it up to use Physical Camera Exposure Control, as this has been my preferred method since it was introduced. Here’s the important bit: when using HDR backgrounds, you want to enable the option to Process Background and Environment Maps. This makes the background respond to EV changes when setting camera shots.
Typically when dealing with custom backgrounds and especially with Matte/Shadow materials as I had in mind, it helps to adjust the Physical Scale value in the Environment and Effects dialog to help with the process. There may be a bit of trial and error involved but I ended up using a value of 15000.
It was at that point that I realized that although the sun was present, it was well hidden behind the clouds and was not actually casting any shadows elsewhere in the background image. So I disabled the sun portion of the Daylight System, and set the Skylight to use the Scene Environment. This also had an immediate effect on the overall look in the viewport.
Another quick render to see the results and I was ready for another shot.
Of course, the nice thing about fully immersive spherical panoramas is that you can freely navigate to set another shot of the scene. You can also set multiple cameras each with their own exposure settings.
The real test came when comparing HDR backgrounds to 8-bit JPGs. Having saved an 8-bit version of the panorama in JPG format, I loaded it into 3ds Max, replacing the HDR background, and rendered the images to view the differences. The results were quite conclusive: in JPG mode, the background itself (sky and clouds) showed subdued colors and, more importantly, the reflections lost a lot of vibrance.
If the sun is directly in the shot, the jpg background loses quite a bit of its highlights, particularly the glow of the sun as it tries to pierce through the clouds. These high contrast situations are where HDR Images really shine, again no pun intended…
Of course, once the scene is set up correctly, you can replace the teapot by other models that make more sense, even though it’s debatable whether a Formula 1 car is more at home than a teapot in these particular surroundings.
Having already dabbled with 360° spherical panoramas before, albeit using simple LDR images, I was actually expecting a much harder time moving to HDR photography. It turns out that with the right gear, it isn’t that much harder at all, and the benefits for 3D renderings are quite palpable. I think I will be adopting this method from now on and encourage you to do so as well.
I’m providing you with a link to some of the files I used, and there are also links in this story to various programs I experimented with throughout my research. I had a good time throughout the whole journey, as I hope you do too.