Dec 18 2013
 

Robocow created in Blender dressed in Allegorithmic's Substance Painter

Today we tripped into a new light-bending paradigm: secret software with secret sauce that we have to remain mum about. This is literally the very first thing Joe has painted with it, and we’ve only scratched the surface, or etched its Normals if you are of the 3D digital artist persuasion. We’d love to tell you more, but maybe we’ve already shared too much. A new kind of fun will arrive next year and it will blow plenty of minds, and all without the benefit of drugs.

Dec 13 2013
 

It could be argued that 1995 was a pivotal year, the year that the Internet (as it was spelled back then) became the Next Big Thing. Almost 20 years later, the internet (modern spelling) seems nearly ubiquitous and is now essential to our very existence. Most people would readily admit that they would die if they weren’t connected. And it is through that ancient filter of looking back into the dark ages that I flip the mirror and look forward.

Floppy disk of Netscape by Kalleboo per Creative Commons

What I see is the next Next Big Thing. It will be called Virtual Reality at first, except that 20 years from now it will simply be known as virtual reality. It has been decades in the making; think Arpanet and its long history before the “Internet” really started to happen. In late 1994 and early 1995 there was quiet talk about things like the www and Mosaic, soon to become Netscape (early geek terminology for “Browser”).

Today we are seeing history repeat itself with HMDs, Motion Tracking, Immersion, 4K, and Oculus. The language is being prepared for the masses, roll-out will begin next year, and by 2015 ubiquity will be on its way. Like the internet, why has this taken 20 years or more to finally become the best thing since the remote control? Dreams, and the large systems needed to support their complexity, take time to proliferate and mature to a point where we can all join in. For the internet we needed cheaper computers that could act as servers; better, more efficient algorithms to move a stream of 1’s and 0’s over a copper wire; and of course easy-to-use software sitting on a relatively stable and simple visual operating system that nearly anyone could figure out, or at least be taught.

This phenomenon is precisely what has occurred, and it is dictating that now is the time for VR (vr if you are an early adopter hipster). Cell phone screen technology led a garage inventor, Palmer Luckey, to see that the billions of people using mobile tech had helped drive down the cost of a small screen, and that the phone manufacturers had mastered the miniaturization of many of the parts that could be used to build a new HMD (Head Mounted Display). He Frankenstein’d some parts together with some special lenses and the light went on – this is the Virtual Reality experience we’ve been dreaming of. While he went out and showed his duct-taped-together magic black box to others, the manufacturers who made his screen went about improving their phones. Good thing competition is so strong when you are selling billions of something that makes billions of dollars, because one-upmanship has driven these innovators from the tiny LCD screens of just a few years ago to something called Retina Displays, wildly popular on smart devices. But before those screens could get old or even used much, the industry is rolling out even higher resolution phone screens – 4K screens to be exact.

56K Modem by Monado via Creative Commons

Now those won’t make it into the first generation of VR headsets (HMDs – the vocabulary is quickly evolving even before the consumer stampede begins). It’s kind of like 1995 in this regard. Back then people were using 9600 baud modems, those on the cutting edge were using 28.8k modems, and people feeling rich by 1998 had their eye on the emerging 56k modems; in those times we had no idea that megabit and gigabit speeds would happen in our lifetimes. Our first VR headsets will likely be that big old 64-inch 1080p TV in your living room shrunk down to phone size for the sake of portability and ease of use while it’s mounted to your face.

Soon though we are going to see “The Cable Modem” of headsets as 4K screens (3840×2160 resolution, or 8.3 million pixels, compared to 1080p’s roughly 2 million) start to become one of our new luxuries. The problem right now is that we need a lot of horsepower to make that car get past 15 mph. You can only look so cool in a Lamborghini that moves slower than a moped. So this new technology needs a hot engine, and that’s on its way too.

Nvidia, the maker of the GPUs that drive the gaming market to new heights, has some new tricks up its digital sleeves. Recently they announced that you’d need to rob banks to play games in 4K. What they literally said was that you’d want to stuff your computer with three GTX 780 Tis, but that would cost a cool $2,100 on top of the $3,200 you’d shell out for a 4K monitor right now. A slide leaked in the last few days shows our friends at Nvidia getting ready to give us Maxwell. Maxwell is not a robot, nor a corporate talking head; it is what might become known as the GTX 800 series of GPUs – also known as video cards.

Old Monitors by Victoria Reay via Creative Commons License

This new card, rumored to be coming out before the end of March 2014, has four times the performance on certain tasks compared to the current series of GPUs. UltraHD, or UHD, aka 4K screens are still expensive; although Dell is promising a 28” 4K monitor before the end of March 2014 for under $1,000, that is still a bit expensive for the average consumer. But cell phone screens, being much smaller and made in the hundreds of millions, are a lot cheaper. Could this mean that 4K screens could find their way into VR headsets in the not too distant future? Yes, but.

Once the screens are available and the GPUs are available, we still have a production pipeline where the people who make games (a process that can take years, by the way) have to not only update their hardware, add lots of storage, and learn new tricks (such as jumping through stereo vision hoops); they also have to deploy new software and teach their staff how to make content that won’t embarrass them as they rush to market with stuff that takes advantage of other new tech like DirectX 11, Physically Based Rendering, and the interpretation of motion data coming from systems like the Virtuix Omni and the Sixense STEM.

Oculus Rift by Sergey Galyonkin per Share Alike 2.0 Creative Commons License

The world is about to change, on a magnitude greater than the shift we’ve seen over the past nearly two decades. Oculus has the opportunity to be the new Apple, and Nvidia could be our next Sony or Samsung. Epic with its Unreal Engine 4, the Unity 3D engine, or maybe Valve with its Source engine – are they the contenders to be the new Googles and Amazons of our age?

When the flat screen and the aging web page seen through the old-fashioned browser are no longer the dominant way of viewing our content, when we are immersed and lost in a world in which we make purchases by driving our virtual car – excuse me, crashing our virtual car – into the VR department store to steal what we came for (while PaypalVR politely debits our accounts), who says we’ll ever visit a web page again?

We’re in the waning days of the two-dimensional flat surface dominated by traditional media. While the internet has given all industries a run for their money and shaken the foundation of how business gets done, Virtual Reality is going to rattle the evolutionary tree of rapid change. That revolution is creeping up on us, just like another one did in 1995. Matter of fact, it was just today that Oculus announced that a16z, led by Marc Andreessen (the guy who gave us Netscape) and Chris Dixon (Hunch), has joined the board of Oculus and pushed $75 million into the company. Watch out world, the future is starting to look a lot like 1995.

Images by Kalleboo, Monado, Victoria Reay, and Sergey Galyonkin, used under Creative Commons licenses.

Dec 12 2013
 

Driving from Phoenix, Arizona to Los Angeles, California

A couple of weeks ago I learned about a preview that would be taking place where the French company Allegorithmic would be showing off a new product. I sent in my RSVP and asked if Joe could attend too; with his NDA signed and sent in, he’d be joining me. So early Wednesday morning Joe and I left Phoenix, Arizona for the 400-mile (650 km) drive across the vast desert to Southern California.

Map of the route from Paris to Rome

To put this in perspective and show you how important it was to us to see this amazing new software: driving to and from Los Angeles is close to the same distance as driving one-way from Paris, France to Rome, Italy. Going west we cross into a different time zone and find ourselves in the notorious traffic of LA around lunch time. The plan had been to arrive early enough that NO traffic would stop us from reaching our destination, even though the event didn’t start until 7:00 p.m.

Joe Cunningham enjoying a rauch beer at Wurstküche in Los Angeles, California

I’d had my sights set on a particular place for lunch: Wurstküche, located right in the warehouse district on the border of Little Tokyo. Joe and I sat down for some rattlesnake, rabbit, and jalapeño bratwurst, Belgian fries with truffle oil, and I ordered Joe a rauch beer (smoked German beer). I had him try the bratwurst before I told him what it was; turned out he was digging it. It only took him a few moments to start recognizing that we don’t have anything like this in Phoenix – not necessarily the food and beer, but the whole package: loud music, creative types, great atmosphere.

The Giant Robot store on Sawtelle Blvd in Los Angeles, California

From downtown we continued west over Olympic Blvd to Sawtelle Blvd, heading to one of my favorite shops: Giant Robot. My wife and I had a subscription for years to the Asian-American magazine of the same name; it has since stopped publishing, but the store and a gallery across the street continue. It was in that magazine that one of the founders, Eric Nakamura, launched a column asking different personalities to describe their perfect day. Well, that’s how I often approach these days of doing something out of the ordinary – like going to LA for a two-hour presentation. With a couple of things picked up and having finally met Eric in person, it’s time for something sweet and new.

Snow Cream from Blockheads Shavery in Los Angeles, California

Around the corner is a place called Blockheads Shavery. They serve a kind of shaved ice cream; think of Hawaiian shave ice with a twist. Yep, creating the perfect day. Joe had something more traditional with strawberry ice cream, egg custard, fresh strawberries, caramel, and the largest boba we’ve ever seen. I opted for the exotic: black sesame and green tea ice creams, red beans, and condensed milk drizzled on top. Interesting stuff and a first for both of us. We did a little more sightseeing in Hollywood, stopping in at NerdMelt on Sunset Blvd; it really lives up to its name and may be the greatest comic book shop either of us has ever visited. Now it’s approaching time to get our next nerd melt on.

CEO of Allegorithmic Sébastien Deguy with Joe Cunningham in Los Angeles, California

We arrived early at RFX, the location for our meeting with the guys from Allegorithmic. RFX is a huge place that has been selling visual effects software to the film and game industries possibly longer than anyone else, and they now offer Allegorithmic’s products too. A caterer is setting up some things for us attendees to munch on; there’s also beer and soft drinks. Within minutes we are meeting Sébastien Deguy, the CEO of the company that created Substance Designer 4.0 (SD4). Along the way we also get to meet Alexis Khouri, the guy who sold me Substance after walking me through their amazing software, and Jeremie Noguer, who I’ve come to know on the forum and on Polycount as the man with the answers about how Substance Designer works. We are also introduced to Wes McDermott (3D Ninja), who flew in from Kentucky today and is joining Allegorithmic as the on-staff tutorial guru.

Jeremie Noguer (left) and Alexis Khouri (right) getting ready to present Allegorithmic's Substance Painter in Los Angeles, California

It’s showtime. Sébastien starts by recapping the release of Substance Designer 4.0 two weeks ago, and then, sitting down with Wes, gives us a demonstration of the software’s capability. It’s great having these two here, especially for the question and answer period. I think everyone in the room learned a couple of things tonight, but we are anxious for something else.

A slide from Gee Yeung's presentation demonstrating his work with Substance Designer 4.0

Next up was Gee Yeung from DreamWorks Animation. A few weeks ago he took up Substance Designer, and he was here to show us his progress in using it for the rapid development of looks when creating characters. We were able to see his sketches and a sculpture he created in ZBrush and rendered in the new Marmoset Toolbag 2.0 package. Nothing like seeing the work of a pro to let you know how amateurish you are, but Gee is open and full of information. Rumor has it that he might be creating some tutorials showing how Substance Designer fits into a Hollywood pipeline.

Now for what we truly came for: Substance Painter! But there’s NO screenshot? The software isn’t even in alpha yet, so we have to wait before they publicly show the interface for the first time. Until then it’s best to look at their website and what they have announced, such as the PopcornFX particle effects coming to this 2D/3D painting revolution. They did recommend we download the PopcornFX Editor NOW and start learning how to write the simple code that will extend and create our own particle systems. There’s other stuff in here, like Surface Mimic’s displacement brushes, which promise to be soooo cool, but other than what’s already been released, I’m not YET at liberty to say more. What I can say is: Substance Painter (SP) is going to make waves, big waves!

Nearing midnight in Los Angeles at Original Tommy's Hamburgers - home of the double chili cheese burger

With the presentation done, the caterers cleared out, the desks put away, and only a few people left, we reluctantly head out. Not that we want to, but our heads are overwhelmed and our imaginations in full swing. Checked in to our hotel – we don’t dare drive back to Phoenix this late – we go for a midnight meal at Original Tommy’s Hamburgers. This place has been here forever and I’m not sure it has ever closed; they are open 24 hours a day, 7 days a week. I was here at about 2:00 a.m. on Christmas Eve 30 years ago when I worked in LA, and there was a line and a dozen burgers on the grill. Oh, and these aren’t any old burgers either; they are double chili cheeseburgers, and Tommy’s is the inventor of this great big sloppy, greasy burger.

Joe Cunningham at The Pantry in downtown Los Angeles, California

After little more than five hours of sleep, I have to take Joe to one more iconic LA landmark: The Original Pantry Cafe. Besides never having closed once since it opened in 1924, it serves a breakfast that will satisfy any appetite. I think Joe was too tired to truly appreciate his stop in history; he’s obviously NOT a morning person. By the time the fog cleared we were nearly in Arizona again. All we talked about on the drive home were the possibilities in front of us and how we can’t wait to get our hands on Substance Painter, and every other amazing piece of software we are working with, or hope to be one day, such as Unreal Engine 4 – are you listening, Epic?

Dec 10 2013
 

Substance Designer 4.0 Interface realized in Virtual Reality

Visions of the new school. Our classroom will exist only within the electronic bits which pulse particles of light into the ether that lies between the flat panel and the optic nerve. We’re taking the instructor and turning him into the digital character of our choice; no one is afraid of computer monsters around here. We’ll materialize the software of our choosing into a virtual reality playground that enters the third dimension, maybe the fourth if any of you can point us to the right download. You’ll walk between tools, grab the axe-sized paintbrush and slash brushstrokes onto the canvas, kick virtual buckets of color into space, and you can forget about finger paints as your hands do the fancy-dance frenzy to smear the image into a psychedelic landscape. Learning will become physical exercise in our new world.

Right now we’re amateurs, painfully aware of that status too, but we’re not standing still. The computer industry has given us scissors that have been trained to cut up reality and we’re running as fast as we can with them. Taking our cue from the many innovators, inventors, and thinkers who have created this digital age, we are using their inspiration as the textbook and video tutorial for what our next steps have to be. We bow down before the countless super-minds who have invented the algorithms that bend the light of the universe to our liking, and we thank each of you.

Dec 07 2013
 

A slot canyon wall created in Blender using the displacement modifier with a projected displacement map

The other day Joe picked up a tutorial offered by CGCookie, on their Blender-centric site, about creating a canyon inspired by Antelope Canyon up in northern Arizona. The tutorial focuses on using a generative noise displacement process to project the noise onto a mesh. Joe didn’t get far in the tutorial before deciding he needed a high-poly mesh, as his plan was to bake off a normal map in the new Substance Designer 4.0, get right to texturing his new creation, and start learning about this software update.

Screencap on the front page of Surface Mimic

Before we get to that: this past week Allegorithmic (the makers of Substance Designer) announced that Substance Painter, a new product, will be released early next year. At the bottom of the announcement page were two links, one for PopcornFX and the other for Surface Mimic. Well, curiosity took us to both companies’ pages, but in this post we look at Surface Mimic; they sell displacement maps. Hmmm, how in the world is Substance Painter going to use these? A tutorial video from Allegorithmic goes some way toward answering that, but we had to experiment so we could better understand it.

Ten dollars of free Surface Mimic displacement maps for new signups

Sign up for a new account with Surface Mimic and you get $10 of free displacement maps. So we did, and armed with two of these grayscale images we were about to plug them into the new Substance Designer 4.0 when Joe had the idea that we could use one of the maps as a displacement projection image in Blender’s Displace modifier. Wow, the results were amazing, way better than the Voronoi algorithm alone. As a side note, I tried sculpting in Blender just yesterday with one of the free downloaded displacement maps (they are watermarked – heavily), with mixed results. I definitely need some practice with that process, but I’m guessing we’re going to see some interesting results as we figure things out!
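For anyone who wants to try the same setup from a script, here is a minimal bpy sketch of how we’d wire a grayscale map into a Displace modifier. The object name, file path, and strength value are our own placeholders, not anything shipped by Blender or Surface Mimic.

```python
import bpy

# Assumes a densely subdivided, UV-unwrapped mesh named "CanyonWall"
# and a Surface Mimic grayscale map saved at the (hypothetical) path below.
obj = bpy.data.objects["CanyonWall"]

# Load the displacement map into an image texture.
img = bpy.data.images.load("//textures/surface_mimic_rock_disp.png")
tex = bpy.data.textures.new("RockDisplacement", type='IMAGE')
tex.image = img

# Add a Displace modifier that projects the map through the mesh's UVs.
mod = obj.modifiers.new("SurfaceMimicDisplace", type='DISPLACE')
mod.texture = tex
mod.texture_coords = 'UV'  # use the existing UV layout for projection
mod.mid_level = 0.5        # mid gray = no offset; darker/lighter pushes in/out
mod.strength = 0.3         # displacement scale, tune to taste
```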

A displacement map from Surface Mimic

About these displacement maps: the guys at Surface Mimic are 3D scanning surfaces from all kinds of objects and apparently have already made more than 650 displacement maps available. Their website says they work great in all the major sculpting engines, hence our testing in Blender’s sculpt tool; we’ll also try them in 3D-Coat at some point soon. What these maps do specifically is add great amounts of surface detail to meshes. Need some leathery frog skin for a character? Surface Mimic has you covered. As we figure out more about the details and strengths of working with displacement maps, we’ll be sure to cover these guys in future posts and tutorials.

Normals baked in Substance Designer from a mesh created in Blender

So we took the new displaced high-poly and low-poly meshes out of Blender as FBX files and into Substance Designer for baking. Hmmm, we need a cage. Problem is, we can’t figure out quickly enough how to build a cage for this type of mesh, so we have some fairly obvious normal baking errors, but we’re running with it until we find someone who can advise us on how to build this kind of complex cage.

Building a graph in Substance Designer using a displacement map from Surface Mimic

Now, with our normal map, it’s time to start assembling our material while testing out a Surface Mimic displacement map in the process. We started by watching Jérémie Noguer’s tutorial on setting up a base material in Substance Designer – click here to watch it on YouTube. We not only followed his instructions, we modified the basic setup by adding the normal map we baked and a bitmap texture of some rocks we downloaded from CGTextures. After about 30 minutes of experimenting with the graph design and finally agreeing that we liked what we were seeing, it was time to take everything into the Unreal Development Kit (UDK).

Oops, we didn’t follow our own advice from a previous post and forgot to triangulate the mesh before leaving Blender. No problem: back to Blender, apply the Triangulate modifier, and export the mesh again as FBX. Substance Designer auto-updated the meshes and we rebaked the normals. Save the SBS file, publish the SBSAR, and we’re ready to import into UDK – with a mesh not covered in holes.
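If you end up doing that dance often, it’s easy to script. Here is a rough bpy sketch (our own, written against the 2.6/2.7-era Python API we’re using) that triangulates the active object and re-exports it; the output path is hypothetical.

```python
import bpy

# Assumes the low-poly mesh is the active, selected object in Object Mode.
obj = bpy.context.active_object

# Add and apply a Triangulate modifier so the FBX carries triangles,
# avoiding the quad/n-gon interpretation holes we hit before.
mod = obj.modifiers.new("Triangulate", type='TRIANGULATE')
bpy.ops.object.modifier_apply(modifier=mod.name)

# Re-export only the selected mesh as FBX for Substance Designer.
bpy.ops.export_scene.fbx(
    filepath="//export/canyon_lowpoly.fbx",  # hypothetical path
    use_selection=True,
)
```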

Testing a Substance Designer material in Unreal Development Kit created with a displacement map from Surface Mimic

Pardon the UV stretching; this wasn’t about creating a perfect landscape, it was about learning how to use displacement maps from Surface Mimic and taking our first serious run through Substance Designer 4.0. While this particular experiment isn’t something we’ll be throwing into production, it did teach us some of the new capabilities in this recent release from Allegorithmic. We have yet to properly explore another exciting new feature: Physically Based Rendering, also known as PBR (not to be confused with Pabst Blue Ribbon).

Nov 26 2013
 

Header for Unreal Development Kit displaying an environment being developed in DX9 (DirectX 9)

This week we hit a performance issue, so we started testing elements within the game that could be creating the impact and lowering our FPS (frames per second). Problem is, what we found does not bode well for our development: we have learned that DX11 (DirectX 11) running on UDK (Unreal Development Kit) does not perform as well as DX9 (DirectX 9) – and it is easy to see that Epic would have no reason to fix this with UE4 in the pipeline. It is also a known issue in the Epic forums – we now know.

Had someone warned us NOT to develop in DX11 (a best-practices sticky page on the Oculus VR forum site would be a good idea), we would have built things in DX9. Except that in DX9 we don’t have the same material functions, or at least that’s what it looks like, as the DX9 version throws errors on the materials we created while developing in DX11 mode. So maybe we have to go back and redo materials; no big deal, right? But what if we are developing for a moment two years out, want DX11 capability, and want to keep the hope alive that we can migrate to UE4? Remember that UE4 does NOT support DX9, or so we’ve read.

Then there’s the issue of FPS playback in different modes using the same scene: in the Editor window we see 117 fps, in the Viewport at 1920×1200 it slows to 85 fps, and in 32-bit DX9 Game mode at 1280×800 it runs at 52 fps. The lesson to share here is that when your game is running in standalone mode (without UDK running in the background) you will get yet another FPS reading. Sorry, but we haven’t run that exact test; we’re basing this last statement on the fact that we change the display size of UDK and get one FPS reading, then minimize it and get yet another.

BTW, our landscape may still be too big, and we found out today that having multiple normal maps plugged into the landscape material knocks down our FPS rate – a lot, as in nearly 50%. We will experiment more with trying to make one normal map suffice for the entire surface; it might work. I feel like we’re amateurs stumbling about, but I guess that’s about right.

Nov 17 2013
 

Food, mineral, pottery, metal, music, religion, books, art, adventure, transportation, technology, the Oculus Rift

Everything you think you know is about to be turned on its head. The coming revolution is a wave of tsunami proportion that will fundamentally alter humankind’s course. This historic moment will be instigated by the Oculus Rift, though the impact will only be seen through hindsight.

Most who know me likely think I’m a bit too liberal with the hyperbole regarding my enthusiasm for how I perceive the future. That’s okay, as I don’t claim to be clairvoyant and readily admit I may be quite wrong, but I really believe I’m being too conservative – even if my time line proves to be short. You see, I think we are on the precipice of extraordinary change on the scale of when humans discovered how to work with fire, pottery, metal, or agriculture.

For nearly 150,000 years, while reality has been all around us, our mark on it, our art, has been in front of us – and it wasn’t always portable. What I mean is that when we learned to map a location, our ancestors likely drew a diagram in the dirt; this might have led to our recognition that we could use a rock to mark a tree, and then mark a wall. Art was born. Since that time we have become more sophisticated in our ability to place art before ourselves, putting it on statuary, canvas, celluloid, glass tubes, and now on thin, flat, glowing panels. What all these things share, from the cave wall to a bendable OLED screen, is that they are before us; they are in front of our faces and are an element of our reality.

We are about to embark on a new paradigm, one where the art is no longer in front of us; instead it will supplant reality, placing us in the middle of a new reality. Some may look at this merely as a means to play a video game, and that is how it will be sold. Others will think it is a perverted tool that will make pornography all the more evil, though they themselves will likely have to know that first hand. Hollywood may see it as a savior that will deliver more eyeballs to see the same movies all over again as they work to remake yet more sequels, this time though they’ll be immersive. The paradigm I speak of is virtual reality, also known as VR.

What the Oculus Rift, and I’m sure a host of similar products, promises to bring is the ability to be anywhere – except where we are. I won’t argue that it will take time for a generation brought up on shooting everything that moves to shift to taking an interest in exploring the sublime. This is in part because those of us venturing forward to create such content will need a lot of time and probably some external capital to employ artists, scientists, programmers, and musicians. But I see a problem with this: curiosity leads to…well, curiosity. Why is that an issue? Curiosity is a cornerstone of greater intellectual capacity, and we are on a 50-year binge of banality and conformity that has, intentionally or inadvertently, commercially benefited a certain segment of the population from our dumbing down. How will those interests either cede control or evolve their own content away from being manipulative and trivial?

Without simulated rape, drug use, chainsaw death, torture, shooting, and other negative stimuli to rail against, how will the powers that be leverage media hysteria on how “Educational” or “Enlightening” VR is, corrupting whichever segment of society should be targeted for being its victim? Is it really by consumer demand that our movies, books, and video games nearly always have an evil character? Why then when we travel do we spend time exploring the arts, music, exotic cuisine, and beautiful nature, instead of dodging zombies or going on shooting sprees? We explore because life is interesting, amazing, and full of learning opportunity. Media contrived art is not imitating life, it is extorting the masses.

When the individual returns to painting on the virtual cave wall, to drawing in the digital dirt, and watching the flicker of electronic light bouncing off a 3D caterpillar metamorphosing into a butterfly in an immersive world as seen through the Oculus Rift, they are going to feel in control and even more curious. They will wonder what they’ve been missing while they’ve been living comfortably numb in a society that has been celebrating mediocrity. Virtual reality is going to peel back the facade that ignorance is bliss, it is going to have us all dreaming of where we can go next and wondering what the story is behind those Mayan ruins, folding proteins, supernovae, and the mechanics of how a flower unfurls in the morning sun.

Watch out world, here comes curiosity.

Nov 15 2013
 

Recently we learned of the importance of baking our normals off of “high-poly” meshes using cages to best capture details that will end up in those maps. If you understand this last sentence, you are probably geek enough to continue reading this bit of obscurity, or specialization, depending on one’s perspective.

Of course we are working in Blender, because it’s cheap – free actually, can’t say that enough! For the sake of sharing our workflow regarding virtual reality and helping you learn how to do this too, we’ll be working with a pretty simple mesh. What’s simpler than a cube, like the default Blender cube? But we’ll make it a hair more difficult: we’ll add numbers (cue the suspenseful music).

Sculpting a simple cube to make a high-poly mesh in Blender to be used in making a normals map from a Cage

First off, duplicate the default cube (Shift+D); this creates an exact copy right in place of the other. Duplicate it again – you need three copies in your scene. It’s important to leave these copies right where Blender created them; these are going to be your “low-poly,” your “high-poly,” and your “cage.” Note: if this were a complex or organic object, like a creature, you would create the high-poly model first, retopologize it to come up with the low-poly, and then duplicate the low-poly to create your cage, but that is a more complicated process that won’t be covered in this tutorial.

Name your three meshes in the “Object” tab; it will make your work easier. Maybe append something like “cube_lowpoly,” “cube_highpoly,” and “cube_cage” to your mesh names. Now enter “Sculpt Mode” with your high-poly mesh selected. For the sake of our example we are simply sculpting some numbers onto the simple cube, since we are only interested in getting you familiar with cage baking in xNormal or Substance Designer from Blender. Hopefully you already know something about enabling “Dynamic Topology.”
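If you prefer to script those first steps, here is a quick bpy sketch (ours, using the 2.6/2.7-era API) that makes the two in-place copies and names all three meshes. The names simply match the ones suggested above.

```python
import bpy

# Start from the default cube and make two in-place copies, so the scene
# holds three overlapping meshes: low-poly, high-poly, and cage.
base = bpy.data.objects["Cube"]
names = ["cube_lowpoly", "cube_highpoly", "cube_cage"]

base.name = names[0]
for name in names[1:]:
    copy = base.copy()
    copy.data = base.data.copy()          # give each copy its own mesh data
    copy.name = name
    bpy.context.scene.objects.link(copy)  # 2.7x-era scene linking
```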

At this point you can unwrap your UVs. You only want to unwrap the low-poly – this is really important to pay attention to. While it appears that xNormal can deal with extra unwrapped UVs on the cage, Substance Designer threw us an error. So only unwrap the low-poly mesh! We use Smart UV Project for our simple example, but depending on your mesh you will use the appropriate unwrap method for your project.

Making a Cage mesh in Blender for baking normals from a high-poly mesh in Substance Designer or xNormal

Time to build the cage. Select the cage mesh, go to the Modifiers tab, Generate column, and choose Solidify. Turn on “Even Thickness” and “High Quality Normals,” and uncheck “Fill Rim.” Go to the “Thickness” setting and give it a negative value. As you lower this value the cage will grow around the high-poly mesh; once the entire high-poly mesh is contained – on ALL sides – you can hit Apply.
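For reference, the same Solidify setup looks roughly like this as a bpy sketch. The thickness value is just a starting point, and the object name assumes you followed the naming above.

```python
import bpy

# Assumes the cage copy ("cube_cage") exists and we are in Object Mode.
cage = bpy.data.objects["cube_cage"]

mod = cage.modifiers.new("CageSolidify", type='SOLIDIFY')
mod.use_even_offset = True       # "Even Thickness"
mod.use_quality_normals = True   # "High Quality Normals"
mod.use_rim = False              # "Fill Rim" turned off, per the steps above
mod.thickness = -0.15            # negative value grows the shell around the sculpt;
                                 # lower it until the high-poly is fully enclosed

# Once the cage encloses the high-poly on all sides, apply the modifier.
bpy.context.scene.objects.active = cage   # 2.7x-era way to set the active object
bpy.ops.object.modifier_apply(modifier=mod.name)
```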

Go to “Edit Mode,” switch to Face Select, select a face, and hit Ctrl+L to select linked faces. Then hit Ctrl+I (or press “W” and choose “Select Inverse”) – this should highlight the inner box of the cage by inverting your selection. Finally, hit Del (Delete) to remove this inner mesh.

Export all three meshes as FBX and you should be ready to test this in xNormal and Substance Designer. Don’t forget to export with “Selected Only” checked. NOTE: Substance Designer will render your mesh as black if you do not UV unwrap your model prior to export!
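Here is a small bpy sketch of that export step, looping over the three meshes with “Selected Only” enabled so each file carries exactly one mesh; the output folder is hypothetical.

```python
import bpy

# Export each mesh to its own FBX so Substance Designer / xNormal
# get one clean mesh per file.
for name in ("cube_lowpoly", "cube_highpoly", "cube_cage"):
    bpy.ops.object.select_all(action='DESELECT')
    obj = bpy.data.objects[name]
    obj.select = True  # 2.7x-era selection flag
    bpy.ops.export_scene.fbx(
        filepath="//export/{}.fbx".format(name),  # hypothetical output folder
        use_selection=True,                       # the "Selected Only" option
    )
```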

Baking normal map from a cage and high-poly mesh from Blender in Allegorithmic's Substance Designer

Drag your three meshes into Substance Designer and drop them into a new Package. Right-click the low-poly mesh and select “Bake Model Information,” then choose “Normal Map from Mesh.” Choose your high-poly mesh, select the option to use a “Cage,” and connect the cage mesh. Select your image format and press OK – you should now have a normal map baked off your high-poly mesh!

Baking a normal map using a Cage and high-poly mesh from Blender in xNormal

If you are creating your normal map in xNormal, launch xNormal, add your high-poly and low-poly meshes, and check “Use Cage” in the low-poly field. On the same line as your low-poly file, right-click and choose your cage file using “Browse external cage file.” Click “Generate Maps” – it should be that easy.

Caveat: we didn’t pay as close attention to UV unwrapping as we normally would (check the previous post); this tutorial is meant to get you closer to understanding baking normals with cages. We also skipped some information about getting more accurate normal maps by triangulating meshes before unwrapping and exporting. We are planning to post a tutorial about UV unwrapping and all of its intricacies in the near future.

Nov 10 2013
 

This little helper entry is here to show you how easy it is to work between Blender and Allegorithmic’s Substance Designer.

UV unwrapping a mesh from Blender for use in Allegorithmic's Substance Designer

First, we have to assume you know something about Blender, because you need to unwrap the UVs. But in case you don’t, here’s the bone. Start with Blender’s default cube. Open the UV/Image Editor, go back to your 3D View, switch to “Edit Mode,” press “U” for unwrap, choose “Smart UV Project,” and set “Island Margin” to 0.25. Export the mesh as .fbx.
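If you’d rather script it, the same unwrap and export looks roughly like this in bpy; the export path is made up.

```python
import bpy

# Assumes the default cube is the active object, starting in Object Mode.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(island_margin=0.25)  # "Smart UV Project" with margin 0.25
bpy.ops.object.mode_set(mode='OBJECT')

# Export the unwrapped cube as FBX for Substance Designer.
bpy.ops.export_scene.fbx(filepath="//export/cube.fbx", use_selection=True)
```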

Creating SVG (vector graphic) masks from UV Maps for use in Allegorithmic's Substance Designer

Drag this FBX mesh into Substance Designer, dropping it onto a new “Package.” Right-click the mesh; we’re going to make a vector graphic from the UVs. Under the “Information To Bake” tab select “Convert UV to SVG” from the drop-down menu, and then under “Output File” choose “Embedded” – this will allow you to edit the SVG in Substance.

Drag the SVG you just created in the “Bake” process into the “Graph” window and double-click the node. In the “Edit” window select the “Transform” tool, then choose the vector graphic element you want to work on by left-clicking it (you’ll notice these elements correspond to the UV islands you created in Blender). Right-click this element and select “Copy Selection to New SVG.”

In the “Explore” window you will see the new SVG node (it is now a MASK); drag it into the “Graph” and double-click it to view it in the “SVG Editor.” Turn off “Alpha” – this will turn the SVG black. Select the SVG element (UV island), click the “RGBA” tool, and set the color to solid white (255,255,255). In the “Graph” window right-click the canvas (or hit Spacebar) and add a “Blend” node.

Setting up a Substance material using vector graphics to build masks

From the “Mask” node, drag out the noodle and attach it to the “Opacity” input on the “Blend” node. From the main “SVG” node, drag out its noodle and attach it to the “Background” input. Now select the Substance or bitmap that you want to fill the masked area with (the white area of the mask is about to be filled with your new texture) and drag it into the “Graph.” Take the noodle from this Substance or bitmap and drag it to the “Foreground” input.

As you can see in the screen capture, our tiling is not perfect yet. Right-click the canvas or hit Spacebar and add a “Transform 2D” node. Drag the noodle from the Substance to the Transform 2D node and drag its output to the “Foreground” input of the “Blend” node. Double-click the node that follows the Transform 2D node – in this case the “Blend” node – so it is shown in the “Paint/SVG” edit window. Now single-click the Transform 2D node; this displays a scalable bounding box in the paint window. Drag the corners and edges to change how your texture tiles. As you scale and move this box you will see the effect of your changes on the mesh in the 3D view.

Creating materials in Substance Designer using masks made from vector graphics using UV's

Tip: on the right side of the screen, in the “Specific Parameters” panel with Transform 2D selected, look for the “Stretch” tab where you will see “x2” and “/2” options. Instead of dragging, scaling, or rotating the box in the paint view, you can multiply or divide how the pattern in the texture repeats by a factor of two!

To add a second texture to the next vector graphic element (UV island), we repeat the above process. Again, double-click the SVG node, select your next element, be sure to turn off Alpha, click the RGBA tool, and change the color to white. Right-click and select “Copy Selection to New SVG.” Drag this new mask onto the “Graph.” Create a new “Blend” node and drag the output noodle of the new mask to its “Opacity” input. From the previous blend setup (the one we just built above), drag the output noodle of that “old” Blend node to the “Background” input on your new “Blend” node. Add your next Substance or bitmap to the “Graph,” add a “Transform 2D” node if you know you are going to need to tile it, and then take its output noodle and drag it into the “Foreground” input.

You continue like this until all of your masks are textured to your liking. By the way, look at the screen capture above and you’ll see that we drag the output noodle of the last Blend node in the chain to the “Output” node that places our new material on the 3D model.

Nov 07 2013
 

In the 1984 version of the movie “Dune,” the Guild Navigator speaks of seeing “plans within plans,” and so it is with trying to build a VR environment: we are engineering changes within changes to our plans within plans. The first months of this exercise have yielded the best personal learning experience and the answer to the question, “Why do others take years to finish creating their first video game?” Because you have to redo everything more than a few times. I won’t tell you precisely how many, as I don’t want to discourage anyone, but the repetition of performing what seems like the same task over and over, trying to get something right, will make you “almost” an expert in the software you are working with. Almost, because as soon as you think you’ve got it figured out, a new issue will raise its head and show you how much you really want to try it all again.

Looking in the virtual landscape as created by Joe Cunningham and John Wise

We thought we had a landscape figured out, but it ended up not being big enough, so we scaled it in UDK (Unreal Development Kit). This was great for a couple of weeks, until we noticed that shadows weren’t working the way we thought they should. Figuring that this was due to scaling issues, and that the original landscape was only about 2.5 km by 2.5 km (1.5 miles by 1.5 miles), it was time to go to work building new land.

Choosing parameters in TerreSculptor for creating a Landscape in Unreal Development Kit (UDK)

There are many options out there for building landscapes (terrain), but we stumbled upon a solution we have grown to love called TerreSculptor. The first reason it became a favorite is that it is free (until July 1st, 2014, anyway). While many landscape building tools have a free evaluation or a functionality-limited version to learn with, buying the “Pro” version is often very expensive; so even if we find a great piece of software, we may not be able to afford it when we need to migrate to full functionality. With TerreSculptor we have had full functionality from day one, and according to the user manual, when this “Alpha” software is finally released its price could be $129 for the pro version. I really must emphasize “could be,” as the price has not been set.

So we open TerreSculptor to quickly knock out our new landscape (pay special attention to that qualifier “quickly”). I’m writing this days after we started our expedition into new lands, because “quickly” should not be in a game developer’s vocabulary, and this is the universe’s way of teaching us that fact. It wasn’t that the geometry for the landscape gave us any particular problems; it’s that we got it into our heads that having snow-capped mountains would be really cool.

Rough version of Landscape created in TerreSculptor for use in Unreal Development Kit (UDK)

Up until this point we had thought we could only have three textures on a landscape in UDK. Why should anyone read the manual from Epic when brute force and really sincere attempts at figuring everything out through trial and error have proven so effective before? We are, after all, guys. Just to be clear, we didn’t go straight to the manual for the next part; we got there through Google first. Ah, the LandscapeLayerBlend expression is the node we want so we can move past three textures in our material! From glancing over the doc – no, we haven’t read it yet, well, not all of it – we figure we can add maybe 10, 12, or 15 texture layers.

We make our way through creating our new landscape, choosing Generators/Ridged, followed by a bunch of options pertaining to the height, roughness, lowlands, and mountain ranges we would like our land to have. Then it’s off to export, and importing the heightmap into UDK to see what it looks like in game. A few adjustments, and next up it’s time to create the weightmaps.

Adding erosion to a landscape created in TerreSculptor that will be used in Unreal Development Kit (UDK)

UDK uses weightmaps in the landscape creation process as guides for where materials will be placed. In TerreSculptor we can choose where each particular weightmap layer will be created, by slope angle and by altitude. So we might start off with a “Slope” map, stipulating that surfaces between 0 and 25 degrees of angle will be a grass layer, 25 to 35 degrees will be a rock layer, followed by another rock layer covering 35 to 60 degrees, with a final rock texture covering faces between 60 and 90 degrees. Using the “Altitude” channel we can create a weightmap in which surfaces above 11,000 meters (about 36,000 feet) appear covered in snow. Then, by setting the “Fall Off,” we can choose how far the snow layer transitions over the rock layers below.
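To make the idea concrete, here is a rough Python/numpy illustration of how slope and altitude thresholds turn a heightmap into layer masks. This is our sketch of the concept, not TerreSculptor’s actual code; the array, cell size, and hard thresholds are assumptions matching the numbers above, and the real tool blends between layers with a falloff rather than cutting sharply.

```python
import numpy as np

def slope_and_altitude_weights(heightmap, cell_size=1.0):
    """Illustration only: derive layer masks from a heightmap.

    heightmap: 2D array of elevations in meters; cell_size: meters per pixel.
    Returns boolean masks that weightmaps would be built from.
    """
    # Slope angle in degrees from the height gradient.
    dz_dy, dz_dx = np.gradient(heightmap, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    return {
        "grass":  slope < 25,                    # 0-25 degrees
        "rock_a": (slope >= 25) & (slope < 35),  # 25-35 degrees
        "rock_b": (slope >= 35) & (slope < 60),  # 35-60 degrees
        "cliff":  slope >= 60,                   # 60-90 degrees
        "snow":   heightmap > 11000,             # altitude layer, in meters
    }

# Example (hypothetical file): weights = slope_and_altitude_weights(
#     np.load("heightmap.npy"), cell_size=2.0)
```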

Using Substance Designer from Allegorithmic to choose and create textures for a Landscape in Unreal Development Kit (UDK)

This all seemed to be going to plan, but things aren’t always that easy when working with “Alpha” software and game development engines that can’t anticipate the infinite number of ways we can do stuff. So it was as we tried pulling seven layers of weightmaps onto a landscape. At first we had quadrants of the landscape showing up black or white, or even without materials. We’d back off the number of weightmaps, and then when connecting the normal maps something would go wrong and our landscape would turn black, or white, without giving us any clue as to why. We figured that maybe the LandscapeLayerBlend node didn’t like working with normals, or maybe, because we were using textures from Substance Designer, UDK was having problems with the heightmap created in TerreSculptor; then again, maybe it was the weightmaps being generated in TerreSculptor.

That’s how the day played out – or was it two days? Next we did what everyone would do: reinstalled software. We tried another alpha version of TerreSculptor, rebuilt everything from scratch, and like magic it all started to work. We have found that we need to limit our materials to six textures. Substance Designer textures are working great, crazy great, on our landscape. Normal maps are being attached and providing a level of depth to those materials without a glitch.

Our only problem now is finding the perfect landscape. While we love the general layout of the terrain, we find ourselves going through over a dozen iterations of erosion patterns in TerreSculptor. Sometimes the mountain peaks are too sharp, or maybe the valley is too rocky. Another group of settings and we have these enormous steps that are a bit too uniform for our liking. With each change in the lay of the land the weightmaps have to be recreated, as they no longer accurately represent the underlying geometry. But this is all part of the learning process and we’re digging it, so we continue with “just one more.”

Building up a material in Unreal Development Kit (UDK) from elements created in Substance Designer from Allegorithmic

Along the way Joe has to set up a wicked material using countless nodes to deal with the pixel depth ratio. Terrain coordinates are handled for each texture to determine how it tiles. And then the normals won’t connect; we are left with a barren landscape displaying the blue and gray default material. One normal channel can be plugged in, but we need six.

The next day we go back to the drawing board; this time we scale down our ambition. Our previous landscape was 4096×4096 units, or about 10 km by 10 km (about 39 square miles). It turns out we have much better success with a 7.5 km square chunk of earth (about 22 square miles – a much better number). We can only get four channels of normals working on the six textures, but we figure this is a good compromise and go with it. Joe has already created a blowing snow effect coming off one of the ridges that looks great – viva la snowy mountain peaks. Time to restore the developing city and get back to the work of modeling stuff in Blender.