Monday, 13 April 2015

The glorious Godrays - shaders in Unity

 [http://s8.postimg.org/7e8xjn9gl/rays002.jpg]

When I first began to think about programming as a life choice waaay back, the thought of writing graphics code was not one that sprang to my mind. My idea of interesting, fun-to-write code was AI programming, and that's the kind of work I wanted to do. Now, I still do love me some AI (my latest blog posts about Alien AI and GOAP are dead giveaways) but I never imagined that my second best interest would be in shaders and graphics.

I usually detest the Games Industry's blind obsession with graphical fidelity so perfect you'll weep tears of joy. But once I started learning GLSL and the visual effects you can achieve with it, I was hooked. I did toon-shading, bump mapping, dynamic lighting and shadowcasting. And now, since I am in the process of making a game involving God, I'm writing a Godray shader.

Godrays, officially termed crepuscular rays, are shafts of light that appear to radiate from a single point, usually behind an obstacle like clouds or a person. The obstacle hiding the light source often appears darkened by contrast, and as a result it looks as though a bright glow is emanating from behind it.

 Neat little online demo of the effect in action






After a bit of research, I've managed to get a good idea of how you would simulate this effect in a shader: render an occlusion map onto an FBO, render the lighting as normal, then overlay the occlusion pass onto it with blending to match the brightness of each. Overall this isn't that complex a concept, and if I were to write this in GLSL within an OpenGL framework it would be done in minutes. However, there are a few potential roadblocks:
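To make the idea concrete, here's a minimal CPU-side sketch of the screen-space radial pass in Python. The function name, sample count and decay factor are all placeholders of my own; a real version would live in a fragment shader, but the logic is the same: march from each pixel toward the light, accumulating the occlusion map along the way.

```python
def godray_pass(occlusion, light_x, light_y, samples=16, decay=0.9):
    """CPU sketch of the screen-space radial pass.

    occlusion: 2D list of floats (1.0 = unoccluded light, 0.0 = blocked).
    light_x/light_y: light position in pixel coordinates.
    Returns a 2D list of accumulated ray brightness to blend over the scene.
    """
    h, w = len(occlusion), len(occlusion[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Step from this pixel toward the light, sampling the occlusion map.
            dx = (light_x - x) / samples
            dy = (light_y - y) / samples
            sx, sy, weight, total = float(x), float(y), 1.0, 0.0
            for _ in range(samples):
                sx += dx
                sy += dy
                ix, iy = int(sx), int(sy)
                if 0 <= ix < w and 0 <= iy < h:
                    total += occlusion[iy][ix] * weight
                weight *= decay  # each successive sample contributes a bit less
            out[y][x] = total / samples
    return out
```

Blending the result additively over the lit scene gives the glow; the decay keeps the far end of each ray from washing everything out.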

1. Unity

I'm not sure how Unity will handle specific lighting effect shaders, especially if each lit object has to perform its own separate godray effect. I may have to manually add a light source behind major areas, and if so that will get annoying fast.

2. Unity

Unity uses its own shader language called ShaderLab. Why they didn't just use OpenGL, DirectX or even Cg is beyond me, especially since most ShaderLab shaders tunnel down to the underlying Cg code anyway. I don't think it'll pose too much of an issue, but it's still possible that ShaderLab will try to trip me up every 20 seconds.

3. Unity

Unity's system of materials, lighting and shaders is confusing. There's no easy way to edit shader values and settings, and barely any notion of texture maps. According to the online Unity manual there is a function concerned with render textures, but knowing Unity it won't be entirely code-handled, which means placing template materials in the scene, which means linking issues, which means...

Basically, I don't enjoy using Unity.

Monday, 6 April 2015

Alien AI: The Perfect Algorithm



 [http://www.giantbomb.com/videos/quick-look-alien-isolation/2300-9543/]

Recently I purchased Alien: Isolation on Steam, which had been on my wish list for quite some time. 75% off, game + 3 DLC for $20. Praise Lord Gaben!

I'd heard that the game was very good at achieving its goal, which is to say, scaring the living crap out of you. I had also heard that a large part of its success came from the alien, whose AI algorithm gave it very believable, lifelike behavior; it was (so I'd heard) like being hunted by a thinking, breathing, perfect organism.

So when I booted up Alien: Isolation, at about midnight for maximum immersion, I was feeling hopeful and excited. Firstly, as a long-time fan of horror games, I wanted it to tick all the right boxes for what a horror game should do. Secondly, I was interested to see if this Alien AI was really all it had been talked up to be. Would it really think strategically, react to its environment dynamically, and generally behave in a lifelike manner?


*4 hours later*


WELL. That was... that was something.

The alien, to put it elegantly, is freaking terrifying. It appears at any moment, slithering out of ceiling vents like some great serpent. Once on the scene, the AI performs in a way that I feel is unique among video game AI systems. Points I found particularly interesting about it are:

  • The AI seems to act on a FOV basis, meaning it cannot spot you if you are not visible. Hiding behind boxes and walls is a core mechanic used to avoid the alien, and if it cannot see you it may walk straight past you.

  • The alien hunts you based on sound, meaning noises such as footsteps and colliding objects attract it. I found that as long as I stayed completely still, it would often lose my trail. Conversely, make any sort of noise at all and the AI will pinpoint your exact location and come running.

  • Items such as flares, flashbangs and noisemakers are introduced, creating the gameplay mechanic of 'distract it while you run'. However, this seemed to lose effectiveness over time, as the alien learned from its past experiences.

  • The AI always seemed to know the general area of where I was, even if it couldn't find my exact location. Whether this is another clever AI aspect or simple rubber-banding I'm unsure.
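The first two points boil down to two sensing checks: a vision cone and a hearing radius. Here's a rough sketch of how they might work; all the names, thresholds and the `blocked` stand-in for a cover raycast are my own guesses, not anything from the actual game.

```python
import math

def can_see(ai_pos, ai_facing, player_pos, fov_deg=120.0, blocked=False):
    """Vision check: the player must be inside the AI's view cone and
    not hidden behind cover (`blocked` stands in for a raycast result)."""
    if blocked:
        return False
    to_player = (player_pos[0] - ai_pos[0], player_pos[1] - ai_pos[1])
    angle = math.degrees(math.atan2(to_player[1], to_player[0]))
    # Wrapped angular difference between facing direction and the player.
    diff = abs((angle - ai_facing + 180) % 360 - 180)
    return diff <= fov_deg / 2

def can_hear(ai_pos, noise_pos, loudness):
    """Hearing check: a noise attracts the AI if it is loud enough
    for the distance it has to travel."""
    return loudness >= math.dist(ai_pos, noise_pos)
```

Hiding behind a box flips `blocked`, and standing still keeps `loudness` near zero, which matches how the game actually plays.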

Overall the AI was pretty amazing, and as I played I began to wonder how the developers had accomplished this complex, lifelike behaviour. Unfortunately a Google search didn't come up with anything too useful, which is understandable; since the game has only recently been released, the developers aren't willing to unveil too many behind-the-scenes aspects.

I did, however, manage to find several interviews with the devs talking about how the AI system works. Based on this info, I can scrape together my theory on Alien: Isolation's AI system.

Interview A (skip to about half-way)
Interview B / Quotes
Interview C (no. 6 onwards)


My Theory / Concept


Generally, the AI system functions based on a complex decision tree acting upon outside stimuli. In this case, those stimuli are audio cues such as player footsteps, doors closing, and objects being knocked about. Once the AI 'hears' these noises, it decides to investigate. How it does so is, I imagine, similar in approach to GOAP; it can simply walk over, use a vent, or even take a circuitous route to delay its arrival.
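A crude way to sketch that 'pick an approach' step is a weighted choice over the possible responses. The action names and weights below are purely illustrative; the point is that biasing toward direct approaches while keeping some randomness makes the alien's arrival hard to predict.

```python
import random

# Hypothetical ways the alien might respond to a heard noise; the weights
# bias it toward a direct approach while keeping its arrival unpredictable.
INVESTIGATE_ACTIONS = [
    ("walk_over", 0.5),
    ("use_vent", 0.3),
    ("circuitous_route", 0.2),
]

def pick_investigation(rng=random):
    """Pick one investigation approach, weighted by the table above."""
    actions, weights = zip(*INVESTIGATE_ACTIONS)
    return rng.choices(actions, weights=weights, k=1)[0]
```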

Next comes the more interesting aspect, reactionary decision-making. According to interviews, the system possesses a series of pre-defined behaviour patterns that are locked off from the main loop. These auxiliary instructions are unlocked in reaction to player behavior and remain as permanent options for the decision tree.

For example, if the player is using a lot of distraction devices, the AI will normally be obliged to investigate the noise in accordance with its basic behaviour. After a certain number, however, the 'doubt sources of noise' flag is switched on and an option to ignore that noise and look elsewhere is activated. From that point on, distractions become less effective, since the AI can choose to ignore them. Improving this even further would be to guess where the player is likely to be based on where the distraction was placed (i.e. the player wants me to go over there, so let's go in the opposite direction).
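That unlock mechanism can be sketched very simply; the threshold and the action names here are invented for illustration, but the shape matches what the interviews describe: a counter trips a flag, and the flag permanently adds a new branch to the decision tree.

```python
class AlienBrain:
    """Sketch of the 'unlockable behaviour' idea: repeated distractions
    eventually flip a flag that adds 'ignore the noise' to the options."""

    DOUBT_THRESHOLD = 3  # hypothetical number of distractions before doubt sets in

    def __init__(self):
        self.distractions_heard = 0
        self.doubts_noises = False

    def on_distraction(self):
        self.distractions_heard += 1
        if self.distractions_heard >= self.DOUBT_THRESHOLD:
            self.doubts_noises = True  # permanently unlocked from here on

    def react_to_noise(self):
        # Once doubt is unlocked, the alien may search away from the noise,
        # toward where the player is more likely to actually be.
        if self.doubts_noises:
            return "search_opposite_side"
        return "investigate_noise"
```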

Of course these are just theories, and I may be completely wrong. But I also think that, even if I am, the basis for a very clever AI system is in here somewhere, and these ideas I came up with will likely be designed and developed when the opportunity presents itself. I think I'll add this AI concept to my ideas notebook.

Monday, 23 March 2015

The calm before the storm


This may be an odd and somewhat controversial statement, but I am mostly convinced I am actually a masochist. Now, before you all label me as some sick, twisted pervert and lock me up, allow me to explain.


Being a programmer, my days are mostly spent in an endless cycle of frustration and joy; spend 8 hours trying to fix a bug, banging my head against the wall, followed by 5 mins of yelling out in praise of whatever Gods saw fit to show me mercy. As you can see, this endless cycle has a ratio of about 80% pain, 20% pleasure. It's not something a normal person would choose to do as work, much less willingly in their own free time.

Yet here I am, plodding away for hours on end. Even now, as I write this mess of words resembling a blog post, my mind is planning a complete system overhaul and major redesign of a game I had previously developed. Titled 'Haunted Within', the project was forced to reach code lock and release prematurely. This meant a lot of rushed code, hacky workarounds for issues, and overall poor standards that I would not normally tolerate. I've regretted a lot of what I did, and now that I have some free time on my hands I figured I can revisit it.

Exhibit A: a work in progress


My plan to 'fix' the game is fairly straightforward; since most of my desired features are already in, I simply need to remove all the hacked-together bits, correct the underlying problems at their source, and overall polish the fundamentals of what already exists. In the cases where I do need to change major aspects, I'll simply scrap what I have and rework the system from the ground up. Obviously this will take a lot of work and time, so I'll try to avoid doing wholesale changes if I can. But I know for a fact that some systems really need the overhaul.



System A: Graphics

This one I don't think needs too much fiddling. I do want to touch up the draw order; currently the objects are being drawn in order of most recently to least recently spawned. By introducing a z ordering, I will be able to rearrange the draw order so that objects closer to the foreground are drawn over those behind them.
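The fix amounts to a painter-style sort before drawing. As a sketch (assuming each drawable carries a z value, with larger z meaning closer to the camera):

```python
def draw_order(objects):
    """Sort drawables back-to-front by z, so farther objects are drawn
    first and nearer objects are painted over them.

    objects: list of (name, z) pairs, larger z = closer to the camera.
    """
    return sorted(objects, key=lambda obj: obj[1])
```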

System B: Game Logic

This system is where things get messy. When I say game logic, I mean things such as object spawning, collision handling, object logic, etc. Overall, this system tries to handle far too much on its own and does so in a really disorganised way. I plan to split the game logic into two separate systems: one for object spawning and logic, and one for object interactions.

System C: UI

Ah, the UI system. The trouble with UI in this project is that it uses a third-party system named CGUI. Normally third-party systems aren't a bad thing and can make things significantly easier, but CGUI is... well, it's not exactly pretty. It seems to be designed around C# syntax, and this makes it rather tricky to integrate into my own code. To be honest I'm not sure what I can do here... I guess I can try to keep it as clean and organised as possible. Pretty much all I can do, really.

I'm going to be working my way through these systems and more (a LOT more) through the coming weeks, whenever I have time away from my other duties. My long-term goal is to have the game polished enough for sale via Steam Greenlight, so if I want to get there I need to get working.

Monday, 9 March 2015

VR, AI and the graveyard shift


People, praise and rejoice! For my house is FINALLY hooked up to the World Wide Web. I should be grateful, but in all honesty I'm feeling a little depressed.

You see children, it just so happens that our exchange area (South Brisbane) is ruled by the demonic overlord Telstra. They decided that our area, and our area alone, should possess a lovely alternative to NBN called Fibre Access Broadband.

Go and see their site. Just look at it.

"Now you can enjoy all the speeds of NBN fibre, arbitrarily reduced to ADSL2 speeds for some reason! Oh and this stuff isn't cheap so we're also going to triple the connection costs. Don't you love us!"

Screw you Telstra. I hope all your children are eaten alive by spiders and any money you made seized by the government under suspicion of racketeering.

But enough dark muttering. Let's move onto a more positive topic: games and coding! (^_^)



GAMES:

The other week I managed to get my hands on a joystick controller, a second-hand Logitech Extreme 3D. I'm not usually a fan of flight sims and such, but when I received it a stroke of genius struck me: joystick + Oculus Rift + War Thunder.

War Thunder is a free-to-play war-vehicle combat sim, with multiple players battling in either tanks, planes or a combination of both. It is a very polished game considering it's F2P and still in development; there are future plans to implement a naval combat aspect with warships, and integrate all three (air, land, sea) into a battle royale mode.

 It may be glorifying the horrors of war, but my god is it glorious.

I've been playing it for a while now, and it occurred to me that having an actual joystick to fly WWII war planes would be much better than the mouse + keyboard alternative they offer us.

But that's not the end of it, ladies and gentlemen. I also happen to own an Oculus Rift, which War Thunder supports. The plan was to plug in my Rift and joystick, boot up the game and experience what it's like to fly a Mitsubishi A6M Zero in all its virtual glory.

The result was everything I had hoped for. Controlling the plane felt authentic, the view from the cockpit in VR was incredible, and trying to perform Immelmann turns was exhilarating. Unfortunately I couldn't experience it for too long before the motion sickness set in; since I own the DK1 version of the Oculus Rift, motion sickness is pronounced enough as it is, even before the further mind-bending that comes from flying a plane in VR. It's also not very viable for actual combat, since orientation and spatial awareness take a nosedive when you don't have a tangible sense of the object in motion. I am definitely going to keep using the joystick, but the Oculus experience will have to remain for joy-flights only.


CODING:

My most immediate programming concern is the AI bot tournament, which is to be held this Wednesday. I was pleasantly surprised to see my bot do so well in the last tournament, coming in at 4th. This is an admirable result considering I wrote the whole thing in 2 days and didn't really bother with tweaking. This time, however, there is the added challenge of navigating a maze and the pathfinding that comes with it. Apparently, many of the others in the tournament placed great emphasis on pathfinding, and that's the only reason they lost in matchups against superior shooters.

Hmm. That's not good. Movement is possibly the weakest part of my bot code. So how to improve it? My original plan was to use a simplified form of Goal Oriented Action Planning to govern my bot's overall behaviour, with standard A* to figure out the actual pathing.

Unfortunately, the more I read about GOAP the more I realised that it wasn't suited to the task at hand. GOAP is designed to simplify large, nebulous goals by breaking them down into a list of possible actions and determining the most appropriate plan. When your goal is as simple as 'find bot, shoot, kill', the rigidly structured system and large overhead are wasted and the result becomes more confusing than before.

GOAP readings. A surprising number of games use GOAP in their AI

In the end, I have simply stuck with standard A* pathfinding, with some plans for maze sector-based hierarchical A*. It does seem like finite state machines are best suited to smaller projects such as this, and in retrospect it makes sense that they would be.
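For reference, the standard grid A* I'm sticking with can be sketched in a few lines. This is a toy version on a 4-connected grid with a Manhattan heuristic, not the actual bot code:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid (0 = open, 1 = wall).
    Returns the path from start to goal as a list of (row, col)
    cells, or None if the goal is unreachable."""
    h = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan heuristic
    open_set = [(h(start, goal), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(open_set, (g + 1 + h((nr, nc), goal),
                                          g + 1, (nr, nc), path + [(nr, nc)]))
    return None
```

The hierarchical version would run this same search twice: once over maze sectors to pick a corridor, then over cells within each sector.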