Making Of 'Combine APC'
The idea for this image came when a friend and colleague asked me to build a high-resolution model of the Combine APC from Half Life 2, to be used in a short film he wanted to shoot. Since it was supposed to be live action, the vehicle had to look realistic and be believable as a real asset. I took this chance to push my modeling, texturing and lookdev skills while trying to achieve the best look possible, without limitations on the polycount or the number and resolution of textures.
Through the phases of this Making Of I'll focus more on the ideas I wanted to develop, the problems I faced on the way, and the methods I used to resolve them.
Technical proficiency is only one aspect of creating digital images. Balance, composition, mood are extremely important too, and can make the difference between a good image and a great one.
My friend asked me to maintain the overall shape of the original APC game model. For the rest, I was free to experiment and come up with my own ideas. The look of the model is peculiar, with a very well-balanced mix of World War 2 and sci-fi elements in it.
Real military vehicles usually don't look very cool, but all the details like door handles, antennas, hatches, nuts, bolts, etc., are built in a proper way and serve a real purpose. On the other hand, fictional sci-fi vehicles usually have a great design and look very dramatic, but can lack functionality and logic.
So I gathered lots of images of real APCs, trucks and tanks from past and present times, as well as stills of famous armored vehicles from legendary sci-fi movies and games like Aliens, The Dark Knight Rises, District 9 and Half Life 2 itself (Fig.01).
I took some reference shots of the original model from Half Life 2 and modeled the main volumes and the wheels. Just using the basic modeling tools, piece after piece, I assembled a replica of the main volumes of the APC (Fig.02). If you break up any complex model into small areas and focus on each piece one at a time, there is nothing too complex to achieve. Modeling is pretty much all about that.
The smaller, realistic details I just modeled as they appeared in the reference photos I'd chosen. For other elements, such as the rims, the headlights or the roof, I tried different solutions. It was a trial and error process, and I discarded and remodeled them until I was happy (Fig.03). In cases like this, I use any method that will allow me to save time. Usually I take a screenshot of my viewport in a perspective or ortho view and do a quick paintover in Photoshop, so I can see whether what I have in mind works or not.
Another little trick that may not be obvious: over the years I've created a database of nuts, bolts and common, simple objects, with proper UVs. I selected some of these and I used them all around the model. During the UV work and texture work, I knew this would pay off a lot (Fig.04).
As I model and the number of objects grows, I always take some time to keep my scene organized and tidy. Giving objects proper names with a proper prefix, and creating layers to quickly hide sets of objects, saves a lot of time and headaches in the long run, especially if you have to pass your work to someone else! It's a good habit to keep your scenes clean (Fig.05).
When I was happy with the look of the model and had it approved by my friend too, I moved on to the UVs. At this point I had a reasonably good idea of how much detail I would need, and which areas would require more care.
I split the vehicle into six areas of roughly similar volume: front, main, rear, framework, wheel and another one for all the small details. I kept the rifle separate, since it was also going to be used as a standalone asset. Each area became a UV tile with a 4096 x 4096 texture (Fig.06).
It is very important to maintain a similar scale across the whole model when doing the UVs. Often, during the process, I shift some geos from one tile to another if I can't get enough space. Keep a checker shader handy to constantly verify that everything matches. Similar scale means the same level of detail across different objects, and it also lets them share the same maps during the texture work.
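To make the idea of consistent scale concrete, here is a minimal Python sketch, not part of the original workflow and with made-up areas, that estimates the texel density of a UV shell. Shells on different tiles should come out roughly equal, which is what the checker shader verifies visually:

```python
# Texel-density check: with a consistent scale, every UV shell should map
# roughly the same number of texture pixels per scene unit.
# The surface areas and UV coverages below are illustrative values.

def texel_density(surface_area_cm2, uv_area, texture_res):
    """Pixels per cm for a shell; uv_area is its coverage of the 0-1 tile."""
    pixels = uv_area * texture_res * texture_res
    return (pixels / surface_area_cm2) ** 0.5

# Two hypothetical shells living on different 4096px tiles:
front_panel = texel_density(surface_area_cm2=40_000, uv_area=0.25, texture_res=4096)
wheel_rim   = texel_density(surface_area_cm2=10_000, uv_area=0.0625, texture_res=4096)

print(round(front_panel, 1), round(wheel_rim, 1))  # matching values = consistent detail
```

If the numbers diverge, one shell is hogging or wasting texture space, which is exactly when a geo gets shifted to another tile.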
I try to avoid any kind of overlap, even if I have many insignificant objects that could share the same UV space. That gives me full control during my texturing work, and helps me avoid nasty results that I'd have to fix when I bake the maps.
Texturing and Lookdev
Before I start describing my work process I want to spend a minute explaining a couple of things about gamma and LUT. There are plenty of tutorials and discussions on the forums on this topic, and many different settings. I don't claim that mine are the correct ones; I'm just sharing my workflow.
In the Gamma and LUT settings of 3ds Max you can decide how your render engine and your viewport are going to handle your images and textures. Enable it, but leave everything as it is (Fig.07); otherwise you will display overexposed textures in the viewport and you won't affect V-Ray at all (A - B). V-Ray has its own panel with LUT settings for the render, and those are the settings that count in the end. The Max and V-Ray framebuffers will show the same result.
I don't use Gamma 2.2. Dark areas come out too bright, even for daylight exteriors like this one. I tend to keep it at 1.8; it gives a better contrast to the renders (C).
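The reason is simple math: gamma encoding lifts dark linear values, and a higher gamma lifts them more. A tiny Python sketch, purely illustrative, shows why a 1.8 curve keeps more contrast in the shadows than 2.2:

```python
# Gamma encoding maps a linear render value to a display value.
# Higher gamma lifts the darks more, which flattens shadow contrast.

def encode(linear, gamma):
    return linear ** (1.0 / gamma)

shadow = 0.05  # a dark linear render value
print(round(encode(shadow, 2.2), 3))  # ~0.256 with gamma 2.2
print(round(encode(shadow, 1.8), 3))  # ~0.189 with gamma 1.8: darker, punchier shadows
```

The same 5% linear shadow ends up noticeably brighter under 2.2, which is exactly the washed-out look described above.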
In Max's viewport I keep my models in constant color while I'm working on the textures, so I can see exactly what I'm doing in Photoshop, and I don't get distracted by the shading. If it's looking nice with flat colors, it's going to look even nicer once it's shaded.
Even with an extremely powerful render engine like V-Ray, I don't expect it to do all the work for me, and I try to make my diffuse textures appealing and the shapes of my models recognizable even without any lighting.
Texturing is the most important part of creating a great model. Fine texturing work can make an average model shine, while an outstanding model can end up plain and boring without it. All the shading work, specular, bump, etc., will help, but the diffuse has to be flawless first.
Here is the basic workflow I used. First of all I decided on the overall color for the entire vehicle. It had to look like a police force vehicle, so I researched real police liveries. Dark blue with red stripes is the typical color scheme of the Italian police force: the Carabinieri. I tried different palettes (grey/orange, black/red), but eventually I chose this one (Fig.08).
Then I started to play with screengrabs in Max or directly on the textures in Photoshop, quickly adding details while trying to maintain the characteristic minimalist look. I got most of the ideas from military airplanes and tanks, and when I started to search for scale model decals, I found loads.
Any metal, plastic or rubber surface of a vehicle will have some common elements that determine how old, worn and ruined that surface looks: mud, dirt, dust, rust, scratches and so on. I took a sample object from my model that had pretty much all the shaders I needed, and did all my experiments with that object, so I could play with the textures and shader values quickly; then I replicated the results across the rest of the vehicle for a final tuning.
I wanted to auto-generate a map for each of those, and then work to correct and improve them in Photoshop. So I made extensive use of the VrayDirt map. I created a V-Ray material, and in the diffuse I added a VrayDirt map. Playing with the values and adding different maps, I created and then baked four different maps:
1) Ambient occlusion (to be used as it was and as a mask for small dirt)
2) Convexity map (to give a subtle highlight to the edges of metal surfaces)
3) Scratches (very basic map to start with when painting scratches)
4) Mud (for the rubber of the tires)
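As an illustration of how these baked utility maps can drive a layer stack, here is a per-pixel Python sketch. The blend order and weights are my own invention for the example, not the exact Photoshop stack used on the APC:

```python
# Per-pixel sketch: baked maps driving a diffuse layer stack.
# AO masks the dirt (occluded crevices collect grime), the convexity map
# lightens worn edges, and a weakened AO pass is multiplied on top.
# All weights below are illustrative.

def shade_pixel(base, ao, convexity, dirt_color=0.2, edge_wear=0.9):
    dirt_mask = 1.0 - ao                                    # dirt where AO is dark
    out = base * (1.0 - dirt_mask) + dirt_color * dirt_mask # blend dirt into base
    out = out * (1.0 - convexity) + edge_wear * convexity   # highlight worn edges
    return out * (0.5 + 0.5 * ao)                           # AO at 50% strength

print(round(shade_pixel(base=0.5, ao=0.8, convexity=0.1), 3))  # → 0.437
```

In Photoshop the same logic is expressed as masked layers rather than arithmetic, but the effect of each baked map is the same.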
I have to credit Neil Blevins and his SoulBurn scripts. There are so many useful tools there. Among them there is one I use often: texmapPreview. That script will render a preview of the map I have selected in my Material Editor. It's awesome when I need to tweak values (Fig.09).
Once I had baked all those maps, I masked them in Photoshop and removed all the areas I didn't want. Then I used the results as masks for the scratched metal texture or the mud texture. After working on those layers, I started to add another level of hand-painted scratches, stains, oil leaks, etc. On top of all that I put the convexity map, masking out all the surfaces but the metal; the convexity map simulated the Fresnel reflections. On top of everything I applied the ambient occlusion at 50% opacity (Fig.10).
In Photoshop I try to work as non-destructively as possible and I keep all the layers separate. In this case, when I finished my diffuse texture, I was able to create the specular, glossiness, bump and paint masks on the fly, just by adding masked adjustment layers (Fig.11).
When I prepare a mask, I do some render tests with numeric values and, when I'm happy with the look, I convert those values into a base color in my maps, and then add brighter or darker areas according to the effect I need.
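The key point is that the flat gray painted into the map must evaluate back to the same numeric value that looked right in the render tests. A small Python sketch of that round trip, assuming the map is read as plain 8-bit linear gray:

```python
# Converting a tested numeric shader value into a paintable base color,
# and checking it round-trips. Assumes an 8-bit map read linearly.

def value_to_gray(value):
    """Convert a 0-1 shader value to an 8-bit gray level."""
    return round(value * 255)

def gray_to_value(gray):
    return gray / 255.0

glossiness = 0.65                          # value that looked right in render tests
base_gray = value_to_gray(glossiness)
print(base_gray)                           # paint the map with this flat gray first
print(round(gray_to_value(base_gray), 3)) # evaluates back to ~0.65
```

Brighter or darker areas are then painted relative to that base, so every local variation stays anchored to the tested value.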
I used the same method to prepare all the other textures, sharing the maps as much as I could (Fig.12).
All the shaders were pretty simple. Since my friend was not going to use V-Ray for his renders, I tried to achieve every aspect of the shaders just with masks and textures. For example, for the car paint I needed to have a clean base with subtle and blurry reflections, and then a coat for higher ones. I also needed metal scratches with high specular values and, on top, flat shaded dirt and mud to cover the reflections underneath.
I used V-Ray's car paint shader, which already has a base layer and a coat layer, with a mask to cut off the reflections and specular from the coat (Fig.13).
As I was testing the shaders I started to set up the final scene with proper lighting, cameras and environment. I wanted to place the vehicle in a real urban environment, which I wanted to modify slightly to make it look a little bit more like the original City 17 of Half Life 2, which looks like a huge prison camp. I found a free set of photos I liked and an HDR image I could use to create my image-based lighting on www.smcars.net.
I made a selection of five shots, and I tried to match the photo angles with my cameras (Fig.14). I discarded two of them mainly because of bad lighting: the first was shot with the sun behind the camera, so it was pretty flat; the last one had the opposite problem.
There are various methods for this, depending on what you have in the original image. In my case I didn't have a reference cube, so I assumed that the shots were taken on a tripod 1.50 meters tall (except for the second camera). I created different V-Ray cameras and tried to match the existing vanishing points with a grid placed at the origin (0,0,0).
It was trial and error here, but every image had quite a recognizable grid on the ground, so it wasn't very difficult. The scale was another guess, but the parking lot lines helped.
When I work with a scene that will produce more than one still image, I create an animation: one frame per camera, and I key all the values I change between them. In this case I had three cameras, so I set keys for the three positions and rotations of the APC, and made some slight adjustments to the position of the lights and depth of field values, the angle of the HDR map, even shader values if needed, and so on. It may not be the most orthodox way to handle this, but it works for me, and allows me to maintain only one scene for all the renders, and gives me an easy restore point when I do experiments.
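The idea can be sketched in plain Python rather than Max keyframes (all the camera names and values below are hypothetical): every render is fully described by its frame number, which is what makes the frame an easy restore point.

```python
# One frame per camera: per-shot overrides keyed by frame number,
# all living in a single "scene". Names and values are made up.

SHOTS = {
    0: {"camera": "cam_front", "hdri_rotation": 35.0,  "focus_distance": 4.2},
    1: {"camera": "cam_side",  "hdri_rotation": 80.0,  "focus_distance": 6.0},
    2: {"camera": "cam_rear",  "hdri_rotation": 120.0, "focus_distance": 5.1},
}

def settings_for_frame(frame):
    """Jumping to a frame restores every tweaked value for that shot."""
    return SHOTS[frame]

print(settings_for_frame(1)["camera"])  # → cam_side
```

In Max the dictionary is replaced by animation keys, but the principle is the same: one scene, one frame per render, nothing to remember by hand.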
The light setup is very basic, as I just had to match the existing lighting of the backplates. I used one HDRI for image-based lighting and one VrayLight to simulate the sun. I didn't use VraySun or a directional light (which are usually better choices for daylight conditions) because I preferred the shadows generated by the VrayLight.
One of the great advantages of linear workflow is that there is no need to pump up the values of lights to get decent results. I kept both my lights invisible, since I didn't want them to interfere with my HDRI, and let only the IBL cast reflections (Fig.15).
One easy way to match the HDRI with the backplates, and then the sun to the HDRI, is to assign the HDRI to a huge sphere in the scene. In the camera view you can then toggle the visibility of the background while rotating the HDRI, which gives you the position and a rough rotation for the sun too.
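Under the hood, rotating an equirectangular HDRI is just a horizontal wrap-around shift: 360 degrees spans the full image width. A quick Python sketch of that mapping (image width chosen arbitrarily):

```python
# An equirectangular HDRI wraps 360 degrees across its width, so a rotation
# is equivalent to a horizontal pixel offset with wrap-around.

def rotation_to_pixel_offset(rotation_deg, image_width):
    return round((rotation_deg % 360.0) / 360.0 * image_width) % image_width

print(rotation_to_pixel_offset(90, 8192))   # quarter turn = quarter of the width
print(rotation_to_pixel_offset(-90, 8192))  # negative rotations wrap around
```

This is why eyeballing the sun's horizontal position against the backplate gives you the rotation value almost directly.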
I use the HDRI rotation and the sun position as a guideline, then I try to move them a little bit to hide ugly reflections or to get better lighting. After all, the image has to look cool; it doesn't matter if it's not 100% accurate (Fig.16).
I used V-Ray physical cameras. Since my backplates were raw photos, I could match the lens, aperture, f-stop and so on, which made my life way easier. Plus, since I had more than one camera, I tweaked the camera values rather than the lights for each render (Fig.17).
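Matching the lens works because the field of view follows directly from the focal length and the film gate width. A short Python sketch of the standard formula (the 36 mm gate is the full-frame default; the focal lengths are just examples):

```python
import math

# Horizontal field of view from focal length and film gate width:
# fov = 2 * atan(gate / (2 * focal))

def horizontal_fov(focal_mm, gate_mm=36.0):
    return math.degrees(2.0 * math.atan(gate_mm / (2.0 * focal_mm)))

print(round(horizontal_fov(24.0), 1))  # wide lens, ~73.7 degrees
print(round(horizontal_fov(50.0), 1))  # "normal" lens, ~39.6 degrees
```

So once the EXIF data of a raw backplate gives you the focal length, the CG camera's field of view is fixed; only position and rotation remain to be matched.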
To project the shadows of the APC on the tarmac, I created a plane and assigned a VrayMtlWrapper to it. It replaced the old matte/shadow of 3ds Max. With a few simple settings it was easy to make it work (Fig.18).
Finding the right composition in all the photos was quite complex. There were plenty of things to keep in mind, which is easy in theory but very hard to achieve in practice. I'll try to lay out a logical and easy path to follow:
Angle: Having finished the model, I knew which angles it looked best from. So all the images exposed the best angles and avoided the worst ones. Banking the camera a bit gave it a tougher, more sinister feeling too.
Lighting: I tried to highlight the nicest features, creating nice contrast and darkening the less interesting areas at the same time. I had to match a pre-existing light setup, so it got even trickier, because I was constrained by the lights. I generally avoid flat lighting like hell, and I carefully place all my lights one by one.
Balance: I followed the rule of thirds as much as I could, avoiding placing the vehicle in the center of the image, or too high or too low. I made it a third taller or shorter than the closest elements in the background, so it popped out a bit. Anything obstructing the reading of the silhouette was removed from the background too. I decided to remove some of the background elements I had added in my first sketches, since they were distracting.
Color: The vehicle had a little more contrast than everything else, just to make it a bit more the center of attention. On the other hand it had to be integrated into the surrounding environment too, so color correction of the passes containing the vehicle was extremely important.
Integration: The most important thing for making a composition believable is to have or create both background and foreground elements to surround the main asset. I couldn't do this properly here; there is just a hint of it in the second camera, with the out-of-focus pebbles (Fig.19 - 20).
I split my final renders into several passes. It takes too much time and effort to create a single pass with everything tuned to perfection, and it's not versatile at all. Post-production is another fundamental process to convert a good render into a stunning image, but I don't create a huge number of passes if I don't need them. In this case I kept all the vehicle passes together, and I focused more on integrating the backplate with the CG elements. These are the passes I rendered (Fig.21).
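The value of the split is that the beauty can be rebuilt from its components, and each component graded independently. A simplified per-pixel sketch of the usual additive recombination (real setups also carry refraction, SSS and more; the values are made up):

```python
# Simplified back-to-beauty recombination of render passes, per pixel:
# surface color times its lighting (direct + indirect), plus the additive
# specular and reflection contributions. Illustrative weights only.

def rebuild_beauty(diffuse_raw, lighting, gi, specular, reflection):
    return diffuse_raw * (lighting + gi) + specular + reflection

pixel = rebuild_beauty(diffuse_raw=0.5, lighting=0.6, gi=0.3,
                       specular=0.05, reflection=0.1)
print(round(pixel, 3))  # each term can now be graded independently in post
```

Dial down the reflection term before summing, for instance, and you have fixed an overly shiny area without re-rendering anything.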
During the lighting and compositing process I did some minor modifications in Photoshop to the backplates. This mainly involved removing structures and details that could distract the attention from the APC or make it harder to read the silhouette. The sky was extremely important to set the mood, so I spent some time trying to find one and retouch it (Fig.22).
This is the schematic view of my layer structure in Photoshop (Fig.23).
I am pretty satisfied with the results (Fig.24). I believe I maintained the original feel of the Combine APC, and I achieved a good degree of realism overall. Of course this work is far from perfect; I could have worked on the model a bit in ZBrush or Mudbox, or added a dust map, to give some more detail to the surfaces. The glass on the roof is sparkling clean, but the underside of the model is still quite rough. I'm not 100% convinced by the paint shader; I believe the vehicle is too shiny in certain areas, so I should have rendered the reflections in a separate pass and masked out some areas.
Anyway, I gave myself a deadline to finish this work, even if I didn't have to; there is always something left out in every work. In the end I wanted to improve my hard surface modeling skills and my ability to handle a complex asset like this from beginning to end.
I hope you enjoyed reading through this Making Of and that the concepts I explained here are going to be useful for your own projects!