
Interview with Infinite-Realities

Can you first of all tell us a little about your company and the services you provide?

My company is called Infinite-Realities and was founded back in 2007. It's a 3D scanning and character creation studio based in Suffolk, UK, where I offer 3D modeling, face and full-body scanning, as well as 4D performance capture, to the visual effects and computer games industries.

In layman's terms, can you explain the process behind your technology and how it works?

The technology uses off-the-shelf hardware and software to take synchronized images of a subject from multiple positions in space, then uses that data to reconstruct, pixel by pixel, the surface contour and shape of the captured subject. I primarily use Agisoft PhotoScan Professional Edition and a large number of Canon cameras to achieve this.
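At its heart, photogrammetry recovers 3D points by intersecting rays from multiple calibrated cameras. As a loose illustration of that principle (a toy sketch, not PhotoScan's actual pipeline - the camera matrices and point below are invented purely for the example), linear triangulation from two views looks like this:

```python
# Illustrative sketch of multi-view triangulation (DLT method).
# Not Infinite-Realities' actual pipeline, which uses Agisoft PhotoScan.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same point in each view.
    Returns the estimated 3D point.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 for the homogeneous point X via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.5, 0.2, 4.0, 1.0])  # known 3D point (homogeneous)
x1 = P1 @ point; x1 = x1[:2] / x1[2]    # its projection in camera 1
x2 = P2 @ point; x2 = x2[:2] / x2[2]    # ...and in camera 2

print(triangulate(P1, P2, x1, x2))      # recovers ~[0.5, 0.2, 4.0]
```

A real system does this for millions of matched pixels across dozens of images, after first solving for the camera poses themselves via bundle adjustment.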

Can you give us some examples of the benefits of this technology and the contexts in which it proves valuable?

There are a myriad of benefits. In a split second, this system can capture any kind of dynamic pose. IR used the system last year with Nike to capture the fastest woman on the planet, Carmelita Jeter... and damn is she fast!

Previous-generation white light or laser-based scanners could take anywhere from 3-17 seconds to scan a full sweep of a body in separate sections, rendering dynamic poses impossible to capture. IR's system also captures around 1.27 gigapixels of color data from 360 degrees, something the other systems cannot do, and it's a volumetric reconstruction method, which means it's a single-shot capture process. There's no need for aligning mesh slices, splicing or fusing, nor are there any calibration issues or distortions. It's quite a renaissance for 3D scanning.
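For context, that gigapixel figure is just the summed resolution of every sensor firing in one synchronized shot. A back-of-envelope check (the per-camera megapixel count here is my assumption; the interview only states the total and, later, a 74-camera rig):

```python
# Rough sanity check: total color data captured in a single synchronized shot.
cameras = 74                   # rig size mentioned later in the interview
megapixels_per_camera = 17.2   # assumed average sensor resolution (not stated)
total_pixels = cameras * megapixels_per_camera * 1_000_000
print(f"{total_pixels / 1e9:.2f} gigapixels")  # → 1.27 gigapixels
```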

This system is ideal for visual effects and computer game content creation for quick digital double replacement, as well as having many uses for the medical, fashion, TV, military, police, music and other industries.

What motivated you to develop this technology?

My motivation was simulating virtual humans, with the end goal of moving into robotics. I'm obsessed. I dreamt of being able to use "laser" scanning technology back in my early 20s, but I could never have afforded a laser scanning or white light system at the time - they ranged from £75,000-£250,000!

I started out as an FMV and environment artist when I was younger, and then changed to character work in my mid-20s. I got tired of using Google reference images or 2D human reference data to try and replicate the human form. No matter how good an artist is at anatomy - and there are some incredibly talented ones out there (I am not one of them!) - I honestly think you can't beat the real thing. So I investigated some more. I tried various solutions over the years, and as hardware and software costs came down and newer, more innovative solutions appeared on the market, I saw a gap to leverage a better scanning solution and offer products and services around the medium.

Do you feel as though replicating humans perfectly may remove the artistic process entirely and do you believe that robotics is the logical conclusion of this pursuit?


Maybe in 10 or 15 years. I think databases will be built eventually, storing the various forms and shapes of the human body, as well as faces specifically, leading to the cataloguing and sampling of various races, ages and body types, and the introduction of AI to process these databases. I would imagine digital splicing could become commonplace, along with random seed generation of virtual humans - much like most role-playing or MMO computer games do now, but based on real-life, realistic-looking data.

The next step is 4D motion capture of body movement and the analysis of that data, which can be used to train and drive body rigs. I think robotics will be one logical progression from these types of systems, as understanding movement will be invaluable to that part of the industry and our evolution. What better way to understand the human form than to analyze and study captured scan data at high resolutions and frame rates? It's comparable to Muybridge, but using 21st century technology.

This capture system was born out of an artistic approach; it is an art form in itself, more than a science. At the moment the data and processing still require a fair amount of manual intervention to control, convert and edit into usable material, which requires a certain degree of artistic and anatomical understanding. Over time, however, I think this will fade.

What has been the hardest part behind developing a successful stereo photogrammetry process?

Money. I started a business when I was in debt (which I have now paid off, finally!) so I could have been doing this two or three years ago, but it took time to build the system, save up, and research different solutions and software. First came 2 cameras, then 4, then 18, then 32. I'm now running 74 cameras. It involved a lot of trial and error, a lot of late nights, sacrificing my social life, a huge amount of patience, research, risk and careful, diligent bookkeeping!

You have to be prepared to do the leg work, especially if, like me, you don't have a strong technical, financial or academic background. You have to take some big risks, but the key is research: understanding the market and what others are doing. Ironically, while I was trying to study and build the system, Google searches kept revealing me. Everywhere I looked, figuratively and literally, my 3D face and body stared back at me! There are so few others out there doing this kind of research who are willing (and confident enough) to share their exploits and results, so it was a huge learning curve. Because I don't have funding or investment, I have no board of directors or self-imposed NDAs, so I can share most of what I do. I started to finally see good results this year, and I really encourage others to get involved and share their results and research data as well; it helps the industry grow and evolve, instead of harboring secrets and innovations. Personally, I can't stand the idea of patents; they stifle growth.

How complicated is it to get the raw data into a usable format ready for 3D rendering and animation?

It's a good question; I see a lot of incorrect assumptions online and in forums about how to handle this kind of scan data. It's really quite trivial. Thanks to the amazing new tools in programs like ZBrush (Remesh, AutoUV) and other applications, it's super-easy to retopologize a mesh, or to shape-conform an existing low-poly frame (already UV-mapped and grouped) and re-project with it, transferring over the details. This process can be handled with ease in 10-20 minutes. Luckily the data this system produces is now 90% cleaner than it used to be, so very minimal clean-up is needed.

What differentiates your company from ones offering a similar service?

It's just me and a few dedicated freelancers overseas, like a talented artist called Alexander Tomchuk. My overheads are very low compared to other studios and all projects get my full attention. My company isn't technology-based, or founded on money. It was purely founded on the passion for creativity, art, and specifically the replication of digital humans, which is something I am extremely passionate about and continue to try and excel in every day. There are also other studios dabbling in similar forms of capture, like Ten24 and Digicave, who are also getting interesting results. It's not just the big VFX studios and research institutes anymore. It's becoming more mainstream and anyone can do it.

Technology is constantly evolving, so after having developed your system how do you anticipate it changing in the future?

I'm already looking into grant funding and investment, so the next phase is to go 4D, capturing at 60-120fps, which is now quite feasible with current technology, but the amount of data involved is astronomical. There is also the looming specter of the next version of Kinect and TOF (time of flight) hardware, which is just on the horizon. Soon I anticipate people will be able to do extremely high-resolution scanning from the comfort of their living rooms. The current generation of Kinects aren't quite there yet (even with the use of ReconstructMe, which is amazing), since they were really designed for motion analysis: the IR laser pattern is too coarse for 3D reconstruction, and the camera sensors' resolution is also too low at the moment. But I think the next iterations (if Microsoft don't hold back to milk the franchise) will be incredible.

How would you like to see your efforts and technology best used?

I would like to see it used to improve virtual character content in games and film. This technology, specifically used for face capture, is fairly new; some studios are starting to test and adopt it, but I think it will soon become commonplace in most studio pipelines until the next iteration of technology kicks in.

This kind of change and progression is a good thing. It will help tell more engaging and emotional stories, as I think at some point games and film will merge.

From IR I would like to say thank you to 3DTotal for the opportunity of this interview. Sample scans and 3D models are available to purchase at www.triplegangers.com.
