Model a stylized female with “Wild Hair” using ZBrush, 3ds Max, and Marvelous Designer
Before starting, I would like to introduce myself. My name is Javier Benítez, although on social networks I use the nickname Javier Benver. Since I was a child I have loved drawing, and I have always known that I wanted to dedicate myself to creating characters. I have worked mainly in the video game and virtual reality industries, though I have also created content for other kinds of audiovisual projects, not only building 3D characters and environments but also doing concept art.
The workflow I used for Wild Hair is widely used by other artists I know and follow, so it may not be new to everyone, but I still think it can give many artists a global view of a project like this.
I always start by searching Pinterest for references that inspire me, with an idea already in mind; in this case, a girl with wavy hair. I collect reference images of people, textures, costumes, hairstyles, similar work by other artists, and so on. For this character I was looking for Julia Roberts-style hair from her early films. With all this material I created a moodboard to serve as the basis for my concept.
While drawing, I always keep in mind how I am going to bring each piece into 3D: both the programs I will use and the framing of the final render.
Currently, after years of modeling, I have several anatomical base meshes of my own that I use in my work, but I still like to carry out anatomical studies from time to time as practice. I start with the blocking phase, where I make sure that the proportions match the 2D design. For me the most important thing is to create a character with appeal; that premise takes priority over any design I have used as reference, so I sometimes take liberties and depart from the original to gain appeal.
For blocking I basically use spheres and cylinders in ZBrush, always working symmetrically. I find it easier to create a humanoid by separating it into parts; it also means I can reuse them and adapt them to other characters.
Once I am happy with the volumes in the blocking phase, I run DynaMesh and begin to detail and polish the mesh. While detailing the model, I keep in mind that I will later bake it in Substance Painter, so I try not to create areas that could cause conflicts in the projection, especially the mouth and fingers, which I give a reasonable amount of separation.
Once I am satisfied with the high-resolution model, I move on to creating the low-resolution model. Until a few months ago, I performed the retopology manually in 3ds Max. It is very important to create good topology on our characters to make the next steps easier, both for rigging and for animation.
Thanks to a plugin called ZWrap, I no longer have to retopologize every new character: I can reproject my own mesh, which already has correct topology, onto the new high-poly model. In this case I used the mesh of a realistic model I made months ago. The projection works fairly well and I only have to correct small details.
With the base of my character done, I move on to the rest of the elements. For the eyes, I prefer to separate each eye into two parts: an inner sphere that receives the iris texture, and an outer shell that acts as the cornea and sclera. I draw the eye texture in Procreate on the iPad.
With the eyes finished, I move on to the last modeling piece: the sweater. When I do this kind of render, I like to combine the simplicity of the cartoon style with realistic touches in some parts, such as clothing and hair. For this reason I decided to create the sweater in Marvelous Designer.
With this program it is possible to work with a default avatar, but I like to create the garment directly on my model so that its volume fits the character's body perfectly. When I started learning Marvelous Designer, it was very helpful to find examples of pattern making. Once I finish my work in Marvelous, I simply export the garment as an OBJ in quads and rearrange the UVs.
The texturing process is carried out in Substance Painter. I made the UVs of my character's body with the UDIM system, so when creating the Substance Painter project I have to make sure to activate “Create a texture set per UDIM tile.”
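As a quick aside on how UDIM numbering works (tile 1001 covers the 0–1 UV square, and the number increases by 1 per tile along U and by 10 per row along V), here is a minimal sketch; the helper name is my own, not part of any Substance Painter API:

```python
# UDIM tile numbering: tile 1001 is the 0-1 UV square; the number
# increases by 1 per tile along U and by 10 per row along V.
# (Illustrative helper, not part of any Substance Painter API.)
def udim_tile(u, v):
    return 1001 + int(u) + 10 * int(v)

udim_tile(0.5, 0.5)   # 1001: the first tile
udim_tile(1.2, 0.3)   # 1002: one tile to the right
udim_tile(0.2, 1.5)   # 1011: one row up
```

This is why a body laid out across several tiles ends up with texture sets named 1001, 1002, and so on.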
The first thing I do in Substance Painter is bake my model. In the Bake window I enable Normal, Ambient Occlusion, Curvature, and Thickness. Substance Painter ships with some advanced skin Smart Materials; in this case I apply the face skin material. As with all materials in this program, we can play with its parameters. First of all I change the base color, because by default the skin tone is darker than I want, so I lighten it a bit. My next step is to duplicate the SSS layer the material contains; I like the effect it creates on certain parts of the face such as the nose and ears.
I apply the makeup in separate layers so that each can have its own properties, such as color and roughness. For the sweater I repeat the process: once it is baked, I apply a fabric Smart Material. This program offers a lot of materials that simulate fabric; I simply adjust the Scale parameter in the UV transformations section. Sometimes it is useful to play with how the material is projected: in most cases UV projection is the best option, but other methods such as tri-planar projection can also be helpful in some situations.
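For intuition, tri-planar projection samples the texture along the three world axes and blends the samples by the absolute surface normal. A toy sketch of the blend (function name and inputs are my own, reduced to one value per axis):

```python
# Toy tri-planar blend: one sample per projection axis, weighted by the
# absolute components of the surface normal, so faces pointing along X
# are dominated by the X projection, and so on.
def triplanar_blend(sample_x, sample_y, sample_z, normal):
    nx, ny, nz = (abs(c) for c in normal)
    total = nx + ny + nz
    return (nx * sample_x + ny * sample_y + nz * sample_z) / total

triplanar_blend(1.0, 0.0, 0.0, (1, 0, 0))  # 1.0: face points along X
```

Because no UVs are involved, this kind of projection avoids seams on meshes with awkward UV layouts.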
Once I have textured all the pieces of my model, I export them. I choose the Arnold 5 (AiStandard) export preset, in 16-bit TIFF format at a texture size of 4K. The resulting textures that I will use in Arnold are Base Color, Normal, Ambient Occlusion, Roughness, and Metalness.
XGen Hair Creation
Once I import my model into Maya, the next step is to create the hair with XGen. Many artists grow the hair directly on the model, but I like to create a separate mesh specifically for the hair descriptions: I duplicate the mesh and remove all the polygons that will not grow hair. This mesh must also have UVs.
There are different ways to generate hair in XGen; I use the approach of placing and shaping guides. Other artists create curves on a ZBrush base mesh and then convert those curves into guides.
I prefer to create and comb each guide one by one. I find it very helpful to use more than one description within an XGen collection. In other words, I create a collection called, for example, “Wild Hair,” but within that collection I create a different description for each area: one for the long hair, another for the small fine hairs (especially the hair growth along the forehead), and of course separate descriptions for the eyebrows and eyelashes.
Hair creation in XGen, although it gives excellent results, is a delicate process that requires a lot of patience. Once I have created the guides in the first description, my next step is to apply a density map.
I have also applied a region map to separate the left side of the hair from the right.
Once the guides are placed and the growth area is defined, I adjust the thickness parameter of the fibers to be generated. With this done, it is time to start applying the modifiers that will help us “comb” the hair and achieve the effect we want.
XGen has several modifiers that help adjust the appearance of the hairstyle. In my opinion the most interesting are Clumping, Cut, and Noise. The Clumping modifier concentrates the strands of hair around each guide; combined with other modifiers, it helps us control the hairstyle. I normally use it several times in the same description with different values.
I use the Cut modifier to give the fibers irregular lengths so that the strands do not all end at the same point; in my opinion this gives a more realistic appearance. As for the Noise modifier, we can achieve very different results depending on the parameters we apply, but essentially it breaks up uniformity.
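The real Cut modifier has more controls, but the core idea can be illustrated with a toy sketch (names and parameters here are my own): trim each strand by a random fraction so the tips no longer line up.

```python
import random

# Toy illustration of what a Cut-style modifier does: shorten each
# strand by a random fraction so the tips do not align.
def cut_strands(lengths, amount=0.3, seed=7):
    rng = random.Random(seed)  # fixed seed for a repeatable groom
    return [length * (1.0 - rng.uniform(0.0, amount)) for length in lengths]

strands = [10.0] * 5            # five uniform strands
trimmed = cut_strands(strands)  # each keeps 70-100% of its length
```

The Noise modifier works on a similar principle, perturbing the strand shapes rather than their lengths.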
XGen is a very advanced tool and I am still learning it. With the same guides, different combinations of modifiers can produce completely different results.
Shaders

In this section I will not elaborate too much, as I do not use a very complicated shader; I simply apply the texture maps mentioned above to Arnold's aiStandardSurface material. I connect the base color map to both the Base Color and the Subsurface Color channels, plug in the Roughness and Metalness maps, and finally connect the normal map through the bump channel.
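As a note-taking sketch (plain data, not a Maya or Arnold API call), the connections described above can be summarized like this; the value names loosely follow aiStandardSurface's parameter names:

```python
# Which aiStandardSurface inputs each exported Substance Painter map
# drives, per the setup described above.
SHADER_WIRING = {
    "Base Color": ["base_color", "subsurface_color"],  # same map twice
    "Roughness": ["specular_roughness"],
    "Metalness": ["metalness"],
    "Normal": ["normal_camera"],  # connected through the bump channel
}
```

Feeding the base color into the subsurface channel as well is what keeps the skin tone consistent when subsurface scattering kicks in.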
Lighting and rendering
The render was done in Arnold. The lighting setup I use is basically three-point lighting: a key light, a fill light, and a rim light (which helps me separate the character from the background, although in this case I used a flat color). For these lights I use Area Lights. Additionally, I use a Skydome light with an HDRI map, which helps me create interesting reflections. Finally, I export my image in PNG format and make some color adjustments in Photoshop. I want to thank the 3dtotal team for the opportunity to show my work process; I hope it is useful.