Archive for February, 2016
We were invited to be present at SINFO 2016, one of the biggest tech conferences in Portugal.
SINFO takes place in the Campus of Instituto Superior Técnico in Lisbon.
Thursday was the game development day, so we brought Syndrome to the Campus, to see if we could induce a few scares there.
There were indeed a lot of scares there, which means the game is having the desired effect.
We also got great player feedback and, of course, ran into some pesky bugs that only show up when the game is displayed in public.
Oscar Clark from Unity gave a lecture during the afternoon, so almost nobody came to try the game at the time.
Or maybe it was because of the sunlight: the event was taking place inside a huge tent, and our stand was facing the sun, so for a couple of hours Syndrome looked like this:
The sun eventually moved and the place got a bit darker. The darkness brought the players out!
We had a great time, and got the chance to show Syndrome to the public one more time. It’s good to have player feedback, but it’s really interesting and super important to watch folks play the game in person.
Thanks SINFO for the invite!
We’re going to describe a bit of our workflow: how we get a character into the game.
Pretty much all the meshes in the game started as a very high poly mesh.
Just to give you an idea, this character has 49 million polys.
You can see the mesh in wireframe below:
Why do we do it this way?
Because with this amount of polygons we can get an amazing amount of detail, which we then bake into normal maps.
Once the mesh has a lower polycount, we create UV maps on the lower-poly asset, project the high-poly mesh onto it, and generate normal maps, occlusion maps and cavity maps.
All of this, combined with real-time lighting, allows us to get the same detail on a low-poly mesh as we would on a high-polycount mesh.
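The real bake projects the high-poly surface onto the low-poly UVs inside a dedicated tool, but the core idea, turning surface detail into a texture that tilts the lighting normals, can be sketched in a few lines. Below is a simplified, hypothetical illustration (not our actual pipeline) that converts a grayscale height map into a tangent-space normal map with numpy:

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a grayscale height map (H x W, values 0..1) into a
    tangent-space normal map (H x W x 3, values 0..1).

    A simplified stand-in for a real high-to-low-poly bake: the
    gradient of the height field becomes the X/Y tilt of the normal.
    """
    # Screen-space gradients of the height field (rows, then columns).
    dy, dx = np.gradient(height.astype(np.float64))

    # Un-normalized normals: flat areas point straight up (0, 0, 1).
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height, dtype=np.float64)

    # Normalize each per-pixel normal to unit length.
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    normals = np.stack([nx / length, ny / length, nz / length], axis=-1)

    # Remap from [-1, 1] to [0, 1] so it can be stored as an RGB texture.
    return normals * 0.5 + 0.5

# A perfectly flat height map bakes to the "neutral" normal map color.
flat = np.zeros((4, 4))
nmap = height_to_normal_map(flat)
```

A flat input produces the familiar uniform blue-purple of an empty normal map (RGB ≈ 0.5, 0.5, 1.0); any bump or crease in the height field shifts the red and green channels, which is exactly the detail the shader later re-lights.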
This is where the classic type of character / asset ends and the modern type of asset begins (remember Doom 3).
Now, when we create this super high-poly mesh with all its little details and then generate textures from it, we end up needing three textures to make those details visible: a normal map, an occlusion texture and a cavity texture.
You’re probably thinking that we will combine these textures into a single diffuse texture, but that’s not how physically based shading works.
The new next-gen physically based shaders work in a different way: basically, you apply each of these textures to its own slot in the Unity 5 shader.
Using Substance Painter and Quixel 3d, we then paint complex final textures and export them directly to the engine. The tool generates 4 textures which must then be applied to the final shader.
So in the end this character uses a normal texture, an occlusion texture, a diffuse (color) texture, and a special texture which combines two maps into one: metalness.
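That last "combined" texture is worth a closer look. In Unity 5's metallic workflow, the Standard shader reads metallic from the red channel of the metallic map and smoothness from its alpha channel, so two grayscale maps travel in one file. As a hedged illustration (assuming that's the pairing meant above), here is the packing sketched with numpy:

```python
import numpy as np

def pack_metallic_smoothness(metallic, smoothness):
    """Pack two grayscale maps into one RGBA texture, the way Unity 5's
    Standard shader expects its metallic map: metallic in the red
    channel, smoothness in the alpha channel.

    Both inputs are H x W arrays with values in 0..1.
    """
    h, w = metallic.shape
    packed = np.zeros((h, w, 4), dtype=np.float64)
    packed[..., 0] = metallic    # R: metallic
    packed[..., 3] = smoothness  # A: smoothness
    # G and B are left at zero; the shader ignores them.
    return packed

metallic = np.full((2, 2), 1.0)     # fully metallic surface
smoothness = np.full((2, 2), 0.25)  # fairly rough surface
tex = pack_metallic_smoothness(metallic, smoothness)
```

Packing like this halves the number of texture samples the shader needs for those two properties, which is why texturing tools export them pre-combined.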
We try to achieve the best realism possible, with materials that react as they would in real life and accurate lighting.
You can see this character below in an ingame screenshot:
Our first round of testing is taking place right now, and it’s been a success so far.
We gathered a small (but good) group of testers and collected important feedback from their playthroughs.
There were several bug reports, but that didn’t come as a surprise, as we’re not in the “bug hunting” phase yet.
What’s important to us right now is understanding how players feel about the game and the story.
And so far the feedback has been very good, with great suggestions from the testers.
We can’t wait to have the reports and completed surveys from everyone.
So everything is coming along fine and according to plan.
We’re now looking into character voice-overs, as some of the VOs currently in-game are still placeholders and some dialogues need tweaking.
A new gameplay trailer is coming soon, so everything needs to look perfect, including the voice acting.
What do you think about the sleeping quarters? Cozy, right?