In this post I will show you how to create normal maps from photographs. You can often create normal maps quickly and easily using dedicated software or plugins. I created an easy-to-use Filter Forge filter – DreamLight_Normal-Map-Maker. I also created a node-based surface you can use right in LightWave 3D to generate your normal maps. Once you understand how normal maps work, you can even create them from scratch from photographs in any image editor such as Adobe Photoshop.
First I will cover some basic information about what normal maps are and how they work. I will then show you how to create normal maps from photographs in Filter Forge, LightWave 3D or Adobe Photoshop. I will finish by showing how I used this technique in LightWave 3D for a recent project.
You can get Gold Coins similar to those I borrowed for this project on Amazon.com
- What are Normal Maps?
- How to Take Photographs for Normal Maps
- How to Create Normal Maps from Photographs with DreamLight’s Filter Forge Filter
- How to Create Normal Maps from Photographs with DreamLight’s Nodal Surface for LightWave 3D
- How to Create Normal Maps from Photographs with Adobe Photoshop
- How to Create Normal Maps in Affinity Photo from Photographs
- How to Apply Normal Maps in LightWave 3D
- Viewing the Results of How to Create Normal Maps from Photographs
Normal maps are a more advanced form of bump maps. They add simulated detail to a 3D surface. Where a bump map is a grayscale image that encodes elevation data, a normal map uses an RGB color image to encode normal vector directions in 3D. Before learning how to create normal maps from photographs it is a good idea to understand exactly what normal maps are and how they are constructed.
Let’s examine a simple normal map of geometric shapes by Julian Herzog from an article about Normal Mapping on Wikipedia. Two things stand out immediately about this image. First, there is a rainbow effect that progresses around the edges of the shapes. Second, all the areas facing the viewer are a lavender color.
Vectors pointing toward the edges of the image progress through shades of red and green, while vectors approaching perpendicular to the image progress through shades of blue.
Before we move on to learn how to create normal maps from photographs let’s take a look at how the XYZ vector data is encoded within an RGB normal map.
RGB Normal Map XYZ Vector Encoding
Normal maps use the three color channels, R (red), G (green) and B (blue), to encode the X, Y and Z normal vector data in an 8-bit image.
The R channel in the image ranges from 0 to 255 and represents the left to right direction. R = 0 corresponds to X = -1 pointing toward the left side of the image and R = 255 corresponds to X = 1 pointing toward the right side of the image.
Then the G channel in the image ranges from 0 to 255 and represents the bottom to top direction. G = 0 corresponds to Y = -1 pointing toward the bottom of the image and G = 255 corresponds to Y = 1 pointing toward the top of the image.
Finally, the B channel in the image ranges from 128 to 255 and represents how far out the vector points toward the viewer. B = 128 corresponds to Z = 0, pointing along the same plane as the image, and B = 255 corresponds to Z = -1, pointing perpendicular to the image. The B channel only uses half of the range (B = 128 to 255, Z = 0 to -1) because the normal can only point somewhere between parallel to the image plane and perpendicular to the front of the image plane. The normal can’t point backward behind the image plane.
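The encoding described above can be sketched in a few lines of Python. This is an illustrative helper of my own, not part of any of the tools discussed in this post, and it follows this article’s convention where B = 255 maps to Z = -1 pointing toward the viewer:

```python
def normal_to_rgb(x, y, z):
    """Encode a unit normal vector as an 8-bit RGB triple."""
    r = round((x + 1.0) * 127.5)   # X: -1..1 maps to R: 0..255
    g = round((y + 1.0) * 127.5)   # Y: -1..1 maps to G: 0..255
    b = round(128 - z * 127.0)     # Z: 0..-1 maps to B: 128..255
    return r, g, b

def rgb_to_normal(r, g, b):
    """Decode an 8-bit RGB triple back to an approximate normal vector."""
    x = r / 127.5 - 1.0
    y = g / 127.5 - 1.0
    z = -(b - 128) / 127.0
    return x, y, z
```

A flat area facing the viewer, (0, 0, -1), encodes as (128, 128, 255), which is exactly the lavender color noted earlier.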
Sampling and Analyzing Colors from a Sample Normal Map
Note: In the following images the color values are not exact due to clicking manually on the image rather than typing values into the text fields.
Now that we have a solid understanding of what normal maps are and how they are constructed we are ready to learn how to create normal maps from photographs. So it is now time to take some photographs.
Take Four Photographs Lighted From Four Directions
- Set up a macro camera on a tripod directly above the subject, then prepare the lighting. It could be as simple as a strong flashlight diffused through a tissue held on with an elastic band, or you could of course use a professional photo light with a diffuser. Just make sure it’s easily movable and can get down close to the table surface. This photo shows a simple flashlight setup with the overhead lights on so you can see the setup used to photograph the client’s coin.
- Turn off the overhead lights and take photos of the coin lit only with the flashlight from all four directions: left, bottom, right and top.
- Load all four photographs into any image editor that supports layers to verify that they are all positioned correctly.
- Crop the document so that all four are identically positioned and cropped.
- Export each of the four photographs as individual image files in a high quality format such as PNG.
I created a filter for Filter Forge – DreamLight_Normal-Map-Maker. You may use the free version of Filter Forge with this free filter to generate normal maps very easily.
- Load the four photographs for top, right, bottom and left.
- Click the Normal or Occlusion checkbox to select which map to generate.
- Adjust the Occlusion brightness if desired.
- Save the resulting Normal or Occlusion map.
The logic used to create this Filter Forge filter is outlined below where I show you how to create normal maps from photographs with Adobe Photoshop step-by-step manually.
You may find all DreamLight’s Filter Forge filters at the following URL. Enjoy!
I created a nodal surface for LightWave 3D – DreamLight Normal Occlusion Map Maker. You may use this free surface to create normal maps from photographs directly in LightWave 3D.
- Create a rectangle and set the camera to render it head on.
- Apply the DreamLight_Normal-Occlusion-Map-Maker surface to the rectangle.
- Load your photographs into the four image nodes for top, right, bottom and left.
- Adjust the scale and position nodes to properly position the normal map on the rectangle while viewing it in VPR.
- Adjust the Occlusion Brightness node if desired.
- Render the image and save the resulting normal map (in linear colorspace).
- If you also want the occlusion map then connect the Occlusion Map output to the Surface Color input and render again.
The following screenshot shows the four image nodes feeding into a compound node that creates and outputs a color normal map and a scalar occlusion map.
You could pipe the compound node’s outputs into the normal and diffuse surface inputs to use them directly if you wish. That would not be the most efficient way to use it though because the normal map would need to be created every time it is used. Instead I’ve set it to pipe the color normal map into the surface color input, set the luminosity to 1.0 and the diffuse, specular and reflection to 0.0.
Note: Normal Maps need to be in Linear Colorspace to work properly. When using this node map to create normal maps you should switch to linear colorspace while importing the four photographs, rendering and exporting the resulting normal map. Once done you may switch to sRGB color space if desired but be sure to then set any normal maps to linear colorspace in the Image Editor or they won’t work properly.
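To see why the linear color space matters, consider what happens if the map is interpreted as sRGB and an sRGB-to-linear conversion is applied to its values. This is an illustrative sketch (the transfer function below is the standard sRGB formula, not LightWave code):

```python
def srgb_to_linear(v):
    """Standard sRGB-to-linear transfer function for a 0-1 value."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# A flat-facing normal encodes its X component as R = 128 (i.e. X = 0).
# After an unwanted sRGB-to-linear conversion, R = 128 becomes about 55,
# which decodes to an X of roughly -0.57: a badly twisted normal vector.
r = srgb_to_linear(128 / 255) * 255
```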
The following node map shows the nodes inside the compound node. The logic used to create this LightWave 3D node map is outlined below where I show you how to create normal maps from photographs with Adobe Photoshop step-by-step manually.
I’ve built a nodal surface that can be used to create normal maps and occlusion maps from four photographs. You may download the surface and try it out yourself. Enjoy!
Below I will show the step-by-step logic that was used to create the node maps for the DreamLight Normal Map Maker LightWave 3D surface and Filter Forge filter above. [There are even online utilities that may be used in a similar manner. Here is one example of an online normal map creation utility, NormalMap-Online. If you set that utility to a strength of 1 and invert the R, you will end up with a normal map close to what we will create below.]
Now, for those who really want to roll up their sleeves and learn how to create normal maps from photographs manually in an image editor such as Adobe Photoshop, I present the step-by-step process. (Those using Affinity Photo should refer to my newer post: How to Create Normal Maps in Affinity Photo from Photographs.)
Create the Blue Layer in Photoshop
The trickiest channel to create is the blue channel. It represents Z vectors from blue = 128 (Z = 0, lying on the polygon plane, in any circular direction indicated by the red/green amount) to blue = 255 (Z = -1, pointing outward perpendicular to the plane). The completed blue layer may also be used as the basis to simulate an ambient occlusion map and a specular map, because vectors pointing perpendicular to the image would normally also be those that are less occluded and more specular.
You could just take a fifth photograph with the light directly perpendicular to the object, but the size of my camera and its close proximity to the subject didn’t allow the light to be placed at the proper angle. So I decided to construct it by compositing the four side-lighted photographs instead. I had initially tried using the shadows from each of the side-lighted images to build the composite blue channel, but that ended up looking a little rough and splotchy. The highlights are smoother than the shadows, so I’ve updated this section to composite the blue channel using the inverse of the highlights instead of the original shadows, as follows.
- Import all four original photos into a layer group.
- Create a new layer group with copies of all four photo layers. Leave the original photo layers alone for use again later.
- Set the Top copy layer, which is the lowest layer in the layer list, to Normal.
- Then set each of the other three layers, Right copy, Bottom copy and Left copy to Lighten. This will composite all the highlights together.
- Next add a hue/saturation adjustment layer above the Left copy layer but not clipped to it, so that it will affect all the layers below it.
- Set the Saturation to -100 to create a grayscale version of the composited highlights. It is not really necessary to convert it to grayscale because when we copy and paste the final composite into the blue channel it will automatically convert to grayscale. But converting it first makes it easier to judge and adjust the gamma without being distracted by the color.
- Add a Levels adjustment layer above the hue/saturation layer but not clipped to it, so that it will affect all the layers below it.
- Set the Output Levels to 255 – 0 to invert the image. This turns the highlights into shadows.
- Tweak the gray gamma slider as necessary so that you get a mostly bright image.
This results in an image where all the areas facing the viewer are light with smooth shadows for all areas tilting away from the viewer.
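The compositing steps above can be expressed per pixel. Here is an illustrative pure-Python sketch, with the four lists standing in for the desaturated top/right/bottom/left photos; the function and parameter names are my own, not Photoshop’s:

```python
def composite_blue(top, right, bottom, left, gamma=1.0):
    """Lighten-composite four photos, invert, and apply a gamma tweak."""
    out = []
    for t, r, b, l in zip(top, right, bottom, left):
        hi = max(t, r, b, l)   # 'Lighten' keeps the brightest pixel value
        inv = 255 - hi         # Output Levels 255-0 inverts the result
        # The gray gamma slider brightens the midtones when gamma > 1.
        out.append(round(255 * (inv / 255) ** (1.0 / gamma)))
    return out
```

For example, a pixel with highlight values 10, 200, 30 and 40 across the four photos composites to 200, then inverts to 55: a shadow where the surface tilts away from the viewer.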
Create the Normal Map Layers
At this point in learning how to create normal maps from photographs we have all the necessary ingredients. We have the photographs that capture the lighting from the four sides, top, right, bottom and left. We also have an image composited from those photographs that simulates light perpendicular to the image. Now it’s time to put the images into the appropriate color channels to create the actual normal map.
- Create a new layer named TopRight. Copy the composite Blue Layer that you just created and paste it into the blue channel. Then copy and paste the original top lighted photo into the green channel and copy the original right lighted photo and paste it into the red channel.
- Add a Levels adjustment layer clipped to the TopRight layer. Set the Output Levels to 128, 255.
- Now create a new layer named BottomLeft and copy/paste the original bottom lighted photo into the green channel and the original left lighted photo into the red channel. This time fill the blue channel with solid black.
- Create a levels adjustment layer clipped to the BottomLeft layer and set output levels to 255 – 128. This will invert the colors and shift them into the upper half of the color gamut range. This is important so that it may be combined with the TopRight layer.
- Now set the blending mode of the BottomLeft layer, which is above the TopRight layer in Photoshop’s layer palette, to Multiply. This will combine everything together.
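Per pixel, the two layers and their adjustment layers combine like this. This is an illustrative pure-Python sketch of the steps above (the names are mine); top/right/bottom/left are grayscale values from the original photos and blue is the composited blue layer:

```python
def assemble_normal_pixel(top, right, bottom, left, blue):
    """Combine the TopRight and BottomLeft layers into one RGB pixel."""
    # TopRight layer (R=right, G=top, B=blue) with Output Levels 128-255.
    tr = [128 + round(v * 127 / 255) for v in (right, top, blue)]
    # BottomLeft layer (R=left, G=bottom, B=black) with Output Levels
    # 255-128, which inverts the values into the upper half of the range.
    bl = [255 - round(v * 127 / 255) for v in (left, bottom, 0)]
    # The Multiply blending mode combines the two layers.
    return tuple(round(a * b / 255) for a, b in zip(tr, bl))
```

A flat pixel, with no highlight in any of the four photos and full brightness in the blue layer, comes out as (128, 128, 255): the expected neutral lavender.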
I used adjustment layers throughout so that you may tweak them as desired to adjust the overall balance if necessary.
- Export this finished normal map as a PNG image.
- Load the new normal map into LightWave 3D’s Image Editor and set the Color Space RGB to Linear. This way if you are using sRGB for other image maps globally it won’t mess up your normal maps. Otherwise the normal map may be interpreted as an sRGB image and LightWave will change the colors which will twist the normal vectors unpredictably.
- Set the coin’s face disc polygon (facing the -Z axis) to a coin-front surface.
- In LightWave’s node editor add a NormalMap node with the normal map image.
- Use Planar mapping along the Z axis with Automatic sizing (and tweak Scale/Position if necessary).
- Sometimes the front of the coin will still receive light when the light is behind the coin. If this happens then you may set the Bump Drop-off to 100%. This will prevent the normal map from receiving light when it should be in shadow.
- Repeat the same process with the back face of the coin. You may need to invert some of the axes on the NormalMap node for the back side, especially if you flip/rotate the image for the back side of the coin on the +Z axis.
- Move a light around with VPR on to make sure all the normal vectors are properly oriented. Verify that the light behaves properly as it moves around the object.
Here are some simple examples of the results of this process of how to create normal maps from photographs: a still test render and a Bullet dynamics animation test created in LightWave 3D for a new animation project currently in development – Heads or Tails. Enjoy!
This is an early work-in-progress (WIP) test render from another shot in the same animation project under development – Heads or Tails. This is just an initial test of using a pile of coins in a pot of gold at the end of a rainbow created by a volumetric light. Most of the plant models are from XFrog’s free plant samples and VizPeople’s free grass samples. I refined some of the models and improved their surface textures for what I needed.
And here’s the finished result…