Monday 20 April 2015

"That's not right. It looks pretty, but it's not right."

Oh shaders. How I love you so. And I mean that in a 'the abused housewife still loves her husband' kind of way. I think this, once again, confirms my suspicions that I am a masochist.

So last week I began my journey into the perilous realms of Terra Unity Morte, specifically the territory of custom shaders. My plan was to create a God Ray Post-Processing shader, and it seemed a fairly straightforward one.

The Plan

Step 1. Create a new render texture and render the scene onto it.

Step 2. Iterate through each pixel. If the pixel's alpha is greater than 0 (i.e. an object exists there), draw that pixel as black. (Occlusion Pass)

Step 3. Draw the lighting shade onto every other pixel, ignoring the occluded ones. (Lighting Pass)

Step 4. Apply a God Ray effect. (Effect Pass)

Step 5. Display the material holding the render texture on a screen-sized quad, placed in front of the scene with alpha blending. (Scene Render)

Step 6. Pretty!
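In Unity terms, I imagined the whole thing as a standard image-effect script. Here's a rough sketch of what I had in mind; the three shader passes and the _OcclusionTex property are hypothetical stand-ins for shaders I hadn't written yet (and, as you'll see, this assumes render textures and OnRenderImage actually being available):

using UnityEngine;

// A sketch of the planned pipeline. Assumes render textures and
// OnRenderImage are usable (a Pro-only feature in Unity 4), plus a
// hypothetical three-pass god ray shader on godRayMaterial.
[RequireComponent(typeof(Camera))]
public class GodRayEffect : MonoBehaviour
{
    public Material godRayMaterial; // hypothetical: pass 0 = occlusion, 1 = lighting, 2 = god rays

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        RenderTexture occlusion = RenderTexture.GetTemporary(src.width, src.height);
        RenderTexture lit = RenderTexture.GetTemporary(src.width, src.height);

        Graphics.Blit(src, occlusion, godRayMaterial, 0); // Occlusion Pass: opaque pixels go black
        Graphics.Blit(src, lit, godRayMaterial, 1);       // Lighting Pass: shade everything else
        godRayMaterial.SetTexture("_OcclusionTex", occlusion); // hypothetical shader property
        Graphics.Blit(lit, dst, godRayMaterial, 2);       // Effect Pass: radial rays blended over the scene

        RenderTexture.ReleaseTemporary(occlusion);
        RenderTexture.ReleaseTemporary(lit);
    }
}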



Unfortunately, at this point old man Unity came along and looked over my plan.
"Nice plan you've got here," it noted. "It'd be a shame if..."

Unity's answer to my plan

Step 1. Disable render textures in Unity 4's free version (which is what I'm using).

Step 2. Set shaders to outright ignore alpha values by default, drawing to the RGB channels only.

Step 3. Allow multiple passes in a single shader, but have texture samples in subsequent passes refer to the newly rendered texture, complete with the previous passes' effects.

Step 4. Ensure that most (if not all) Post-Processing effects, including God Rays, require multiple render textures, not just one.

Step 5. Make updating a quad's position every frame to stay in front of the camera cause issues with physics (for some reason).


Satisfied with its work, Unity wandered away, leaving me to weep over what remained of my plan.
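For the record, the quad-following part on its own is only a few lines. A minimal sketch; my guess is the physics trouble comes from teleporting a collider every frame, so this assumes the quad has none:

using UnityEngine;

// Keeps a screen-sized quad glued in front of the camera.
// Assumes the quad has no collider, since teleporting a collider
// every frame is likely what upsets the physics engine.
public class ScreenQuadFollow : MonoBehaviour
{
    public Camera cam;
    public float distance = 0.5f; // just beyond the near clip plane

    void LateUpdate() // run after the camera has moved this frame
    {
        transform.position = cam.transform.position + cam.transform.forward * distance;
        transform.rotation = cam.transform.rotation;
    }
}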



So yes, there were issues.

Render textures not being available was the biggest one, since they are the core ingredient of any Post-Processing effect. I eventually settled on a CPU-based alternative, ReadPixels(), which uses the CPU to read every pixel currently on screen and dump the data into a 2D texture. If that sounds like an extremely expensive operation to you, that's because it is. I am essentially taking a screenshot after every frame, manipulating the screenshot, then re-rendering the edited screenshot into the game. Good God.
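For reference, the grab itself looks something like this. It's the standard pattern from the Unity docs; ReadPixels has to run after rendering finishes, hence the end-of-frame wait:

using System.Collections;
using UnityEngine;

// CPU-side 'post-processing': copy the framebuffer into a Texture2D
// every frame. Extremely slow, but it works without render textures.
public class CpuScreenGrab : MonoBehaviour
{
    Texture2D screen;

    IEnumerator Start()
    {
        screen = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        while (true)
        {
            yield return new WaitForEndOfFrame(); // wait until the frame is fully rendered
            screen.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
            screen.Apply(); // push the grabbed pixels back to the GPU
            // ...manipulate 'screen' here, then hand it to the quad's material...
        }
    }
}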

The second biggest one is the frankly labyrinthine system that is ShaderLab. In the spirit of all things Unity, this system attempts to be 'User Friendly' by automatically performing several functions for you. That's great, except that I now have no idea what is actually happening when the scene is rendered. It was only after hours of research that I discovered shaders only render RGB by default, completely ignoring the alpha channel. Any transparency can only be done via alpha blending, and alpha-inclusive renders can only be performed under certain settings with the right flags declared. Overall the thing is so unintuitive that I feel like Theseus wandering the Minotaur's maze. Except I don't have Ariadne guiding me with a piece of string.


Still, I have managed to get somewhere. It's far from what I want, but I am currently up to step 4 in my once-foolproof plan. It's very slow going since I have no idea what will actually be rendered 90% of the time, but that's less to do with Unity and more to do with shader coding in general. I'm hoping that I can get to at least a passable rendition of the effect I want, at which point I will declare victory and look the other way.

And to think, all this could have been avoided if we had used Unity 5.



Some very incorrect (but nonetheless pretty) render results

 





Monday 13 April 2015

The glorious Godrays - shaders in Unity

 [http://s8.postimg.org/7e8xjn9gl/rays002.jpg]

When I first began to think about programming as a life choice waaay back, the thought of writing graphics code was not one that sprang to my mind. My idea of interesting, fun-to-write code was AI programming, and that's the kind of work I wanted to do. Now, I still do love me some AI (my latest blog posts about Alien AI and GOAP are dead giveaways) but I never imagined that my second best interest would be in shaders and graphics.

I usually detest the Games Industry's blind obsession with graphical fidelity so perfect you'll weep tears of joy. But once I started learning GLSL and the visual effects you can achieve with it, I was hooked. I did toon-shading, bump mapping, dynamic lighting and shadowcasting. And now, since I am in the process of making a game involving God, I'm writing a Godray shader.

Godrays, officially termed Crepuscular Rays, are shafts of light that appear to radiate from a single point, usually behind an obstacle like clouds or a person. The light rays often darken the obstacle hiding the light source, and as a result it appears as though a bright glow is emanating from it.

 Neat little online demo of the effect in action






After a bit of research, I've managed to get a good idea of how you would simulate this effect in a shader. Simply render an occlusion map onto an FBO, render the lighting as normal, then overlay the occlusion onto it with blending to balance the brightness of each. Overall this isn't that complex a concept, and if I were to write it in GLSL within an OpenGL framework it would be done in minutes (there's a sketch of the core sampling loop after the list below). However, there are a few potential roadblocks:

1. Unity

I'm not sure how Unity will handle specific lighting effect shaders, especially if each lit object has to perform its own separate godray effect. I may have to manually add a light source behind major areas, and if so that will get annoying fast.

2. Unity

Unity uses its own shader language called ShaderLab. Why they didn't just use OpenGL, DirectX or even Cg is beyond me. Especially since most ShaderLab shaders use tunnelling to access the underlying Cg code anyway. I don't think it'll pose too much of an issue, but it's still possible that ShaderLab will try to trip me up every 20 seconds.

3. Unity

Unity's system of materials, lighting and shaders is confusing. There is no easy way to edit shader values and settings, and barely any notion of texture maps. According to the online Unity manual, there is a function concerned with render textures. But knowing Unity it won't be entirely code-handled, which means placing template materials in the scene, which means linking issues, which means...

Basically, I don't enjoy using Unity.
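Unity gripes aside, the heart of the effect is genuinely simple: for each pixel, march toward the light's position on screen and accumulate samples from the occlusion map with a decaying weight. Here's a CPU-flavoured C# sketch of that loop; the parameter names are the usual ones from write-ups of this technique, and the values in the comment are typical starting points, not anything final:

using UnityEngine;

// Screen-space light shaft accumulation over an occlusion texture:
// the occlusion map is bright where the light is visible and black
// where something blocks it; marching toward the light smears that
// brightness into rays.
public static class GodRaySampler
{
    // Typical values: samples = 64, density = 0.9f, decay = 0.95f, exposure = 0.3f
    public static Color Sample(Texture2D occlusion, Vector2 uv, Vector2 lightUV,
                               int samples, float density, float decay, float exposure)
    {
        Vector2 step = (uv - lightUV) * (density / samples);
        Color result = Color.black;
        float weight = 1f;

        for (int i = 0; i < samples; i++)
        {
            uv -= step; // march this pixel's sample point toward the light
            result += occlusion.GetPixelBilinear(uv.x, uv.y) * weight;
            weight *= decay; // later samples contribute less
        }
        return result * exposure;
    }
}

Blend that result additively over the normally lit scene and you have your rays; that blending step is where the occlusion and lighting renders get matched up.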

Monday 6 April 2015

Alien AI: The Perfect Algorithm



 [http://www.giantbomb.com/videos/quick-look-alien-isolation/2300-9543/]

Recently I purchased Alien: Isolation on Steam, which had been on my wish list for quite some time. 75% off, game + 3 DLC for $20. Praise Lord Gaben!

I'd heard that the game was very good at achieving its goal, which is to say, scaring the living crap out of you. I had also heard that a large part of its success came from the alien, whose AI algorithm gave it very believable, lifelike behaviour; it was (so I'd heard) like being hunted by a thinking, breathing, perfect organism.

So when I booted up Alien: Isolation, at about midnight for maximum immersion, I was feeling hopeful and excited. Firstly, since I am a long-time fan of Horror Games, it had to tick all the right boxes in regards to what a horror game should do. Secondly, I was interested to see if this Alien AI was really all that it had been talked up to be. Would it really think strategically, react to its environment dynamically, and generally behave in a lifelike manner?


*4 hours later*


WELL. That was... that was something.

The alien, to put it elegantly, is freaking terrifying. It appears at any moment, slithering out of ceiling vents like some great serpent. Once on the scene, the AI performs in a way that I feel is very unique among video game AI systems. Points I found particularly interesting about it are:

  • The AI seems to act on an FOV basis, meaning it cannot spot you if you are not visible. Hiding behind boxes and walls is a core mechanic used to avoid the alien, and if it cannot see you it may walk straight past you.

  • The alien hunts you based on sound, meaning noises such as footsteps and colliding objects attract it. I found that as long as I stayed completely still, it would often lose track of me. Conversely, make any sort of noise at all and the AI will pinpoint your exact location and come running.

  • Items such as flares, flashbangs and noisemakers are introduced, creating the gameplay mechanic of 'distract it while you run'. However, this seemed to lose effectiveness over time, as the alien learned from its past experiences.

  • The AI always seemed to know the general area of where I was, even if it couldn't find my exact location. Whether this is another clever AI aspect or simple rubber-banding, I'm unsure.

Overall the AI was pretty amazing, and as I played I began to wonder how the developers had accomplished this complex, lifelike behaviour. Unfortunately a Google search didn't come up with anything too useful, which is understandable; since the game has only recently been released, the developers are not willing to unveil too many behind-the-scenes aspects.

I did, however, manage to find several interviews with the devs talking about how the AI system works. Based on this info, I can scrape together my theory on Alien: Isolation's AI system.

Interview A (skip to about half-way)
Interview B / Quotes
Interview C (no. 6 onwards)


My Theory / Concept


Generally, the AI system functions on a complex decision tree that acts upon outside stimuli. In this case, those stimuli are audio cues such as player footsteps, doors closing, and objects being knocked about. Once the AI 'hears' these noises, it will decide to investigate. How it does so is, I imagine, similar in approach to GOAP; it can simply walk over, use a vent, or even take a circuitous route to delay its arrival.

Next comes the more interesting aspect: reactionary decision-making. According to the interviews, the system possesses a series of pre-defined behaviour patterns that are locked off from the main loop. These auxiliary instructions are unlocked in reaction to player behaviour and then remain as permanent options for the decision tree.

For example, if the player is using a lot of distraction devices, the AI is normally obliged to investigate the noise in accordance with its basic behaviour. However, after a certain number of them the 'doubt sources of noise' flag is turned on, and an option to ignore that noise and look elsewhere becomes available. From that point on, distractions grow less effective, since the AI can choose to ignore them. Improving this even further would be to guess where the player is likely to be based on where the distraction was placed (i.e. 'the player wants me to go over there, so let's go in the opposite direction').
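To make that concrete, here's a toy sketch of the unlock mechanic as I imagine it. Every name, score and threshold here is my own invention for illustration, not anything from the actual game:

using System.Collections.Generic;
using System.Linq;

// Toy version of the 'unlockable behaviour' idea: responses start
// locked, accumulated player behaviour unlocks them, and once open
// they remain permanent options for the decision-maker.
public class AlienBrain
{
    class Option
    {
        public string Name;
        public bool Unlocked;
        public int Score; // how attractive this response currently is
    }

    readonly List<Option> options = new List<Option>
    {
        new Option { Name = "InvestigateNoise",        Unlocked = true,  Score = 10 },
        new Option { Name = "DoubtNoiseSearchNearby",  Unlocked = false, Score = 15 },
    };

    int distractionsHeard;

    public string OnNoiseHeard()
    {
        // Repeated distractions eventually flip the 'doubt sources of noise' flag.
        if (++distractionsHeard >= 3)
            options.First(o => o.Name == "DoubtNoiseSearchNearby").Unlocked = true;

        // Choose the highest-scoring response among those currently unlocked.
        return options.Where(o => o.Unlocked)
                      .OrderByDescending(o => o.Score)
                      .First()
                      .Name;
    }
}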

Of course these are just theories, and I may be completely wrong. But I also think that, even if I am, the basis for a very clever AI system is in here somewhere, and I'll likely design and develop these ideas properly when the opportunity presents itself. I think I'll add this AI concept to my ideas notebook.