Tuesday 1 September 2015

Master Reboot

So, for the record, I'm still alive.

I'm still here.
Not everyone would be happy to hear that, but to hell with them.

Opening this blog once again, I notice that the last post was on 7 May. It is now September. Where the HELL did the time go? I seem to vaguely remember July... and there was that one day in August... But honestly, I feel as though I got into the passenger seat of a DeLorean and took a nap.

Of course, none of what I just said (wrote) is true; I'm just being facetious in my trademark, lovable way. I was recently hired / volunteered into a position on a game news website as a journalist. Which reminded me that hey, I used to write a blog; hey, I should look at my blog; hey, I should talk about what I've been up to on my blog. So, what I've actually been doing recently is two things: thinking about my game, and thinking about Game Design in general.


1. The Game

By 'the game' I refer to that project I was working on, 'Haunted Within' a.k.a. 'Project Zoe-trope'. At the moment, it is technically a finished product and is therefore a closed project. But the product I'm left with, unfortunately, is far from what I had imagined in my head, and I really do want to keep working on it. Which is why I spend most days thinking up improvements, changes, and little design additions that would flesh out the overall feel.

There are, however, a couple of problems. One is that working on a customised C++ engine was really tiresome, let alone building a whole game based on that customised engine. So I thought to myself, "Let's port everything to Unity!" and installed Unity 5. The problem here is that porting to Unity means discarding everything I've worked on and, essentially, starting again from scratch. May I just say, as a solo developer relying on self-discipline for morale and time-management, doing such a thing is mighty disheartening.

So all my time is spent thinking about doing it and never actually doing it. This is one of my major flaws, in that I find it difficult to start anything. Once I get going I'm prepared to march on, but taking that first step is, for some reason, very challenging. I'm sure I'll eventually get around to it; I usually reach a point where I become fed up with deliberating and just jump-start myself into action. I just worry that by the time I do get started, I won't also be celebrating my retirement.

2. Game Design

This is more of a nebulous, back-of-my-head thought than anything, but it is something that has been weighing heavily on my mind for a while now. See, I know a lot about making games. Or more specifically, how to develop games. The problem is, I'm not too sure how to design them. I have the hammer, saw, and nails, and I know how to use them; but designing a house so it looks nice and stays upright? No clue.

Which is why I've been trying to learn more about Game Design in general; the philosophies, the principles, the techniques and tricks often used, that kind of thing. This is another reason I've held off on working on a project; instead of making another game that functions well but is shoddy in design, I'd rather learn about the potential pitfalls first so I can avoid them.

Because let's be honest, Haunted Within as it is right now is... well, it's not the prettiest to look at. I figure it's better to take a step back, re-examine all its facets and point out all the bits and pieces that may not be so great.

Thursday 7 May 2015

Self-Reflection Sit-Rep

It's been almost 2 months since I began this blog, and I've evolved from feeling blue to feeling introspective. We've had some fun, we've had some laughs, and we've done oh so much ranting. But now, it's time to sit back, gaze upon our work and think to ourselves "...What was I thinking...?"

Let the introspection and self-reflection commence!

Disclaimer

Let me start off by saying that any praise I give myself is tempered by the fact that I made mistakes more often than I succeeded. I like to consider myself a genius, coding savant, but the truth is that I'm really... not. I've only been at this for a few years and I'm fully aware I'm not the best. I have a lot more to learn, and so many areas I need to improve in. So if I come across as arrogant and smug, just know that I'm being facetious. Every good point I raise should be taken with a fistful of salt.


AI

So the first task I tackled was AI programming. This was an aspect of coding that I was always interested in, and it's actually the reason I began programming in the first place. Trying to simulate thinking, breathing human behaviour inside a computer fascinated me. That being said, I had never actually tried to program a proper behaviour algorithm before. So trying my hand at programming AI for bots in a fully-automated tournament seemed like an exciting challenge.

As it turned out, I think I handled this particular discipline of coding rather well. Although I had never programmed a specific AI system before, I had tried my hand at basic behaviour algorithms; collision avoidance, chasing targets, Finite State Machines and the like. In particular, the actual mechanics behind the high concept of 'heuristics-based pathfinding' were much easier than I had first thought. I was worried that this would be the most difficult aspect of AI, but the dynamic list creation and iterating through multiple custom objects was something I was very familiar with. Overall, I believe that having prior understanding and knowledge of how to achieve these algorithms allowed me to progress in my code much more smoothly than I would otherwise have fared.
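
To illustrate just how much of this is plain list management, here is a minimal A* sketch (a hypothetical Node type with uniform step costs; a linear scan stands in for a proper priority queue):

using System;
using System.Collections.Generic;

public class Node
{
    public int X, Y;
    public bool Walkable = true;
    public float G;                   // cost from the start node
    public float H;                   // heuristic estimate to the goal
    public float F { get { return G + H; } }
    public Node Parent;
    public List<Node> Neighbours = new List<Node>();
}

public static class Pathfinder
{
    // Manhattan distance suits a 4-connected grid.
    static float Heuristic(Node a, Node b)
    {
        return Math.Abs(a.X - b.X) + Math.Abs(a.Y - b.Y);
    }

    public static List<Node> FindPath(Node start, Node goal)
    {
        var open = new List<Node> { start };
        var closed = new HashSet<Node>();

        while (open.Count > 0)
        {
            // Take the open node with the lowest F score.
            Node current = open[0];
            foreach (var n in open)
                if (n.F < current.F) current = n;

            if (current == goal) return Retrace(start, goal);

            open.Remove(current);
            closed.Add(current);

            foreach (var neighbour in current.Neighbours)
            {
                if (!neighbour.Walkable || closed.Contains(neighbour)) continue;

                float tentativeG = current.G + 1f;   // uniform step cost
                bool inOpen = open.Contains(neighbour);
                if (!inOpen || tentativeG < neighbour.G)
                {
                    neighbour.G = tentativeG;
                    neighbour.H = Heuristic(neighbour, goal);
                    neighbour.Parent = current;
                    if (!inOpen) open.Add(neighbour);
                }
            }
        }
        return null;   // no path exists
    }

    static List<Node> Retrace(Node start, Node goal)
    {
        var path = new List<Node>();
        for (Node n = goal; n != start; n = n.Parent) path.Add(n);
        path.Reverse();
        return path;
    }
}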

The downside was, as always, over-scoping and poor time management. I had originally planned to do so much more in the field of AI, including the implementation of a fairly complex finite state machine that enabled / disabled specific AI sub-routines. This concept came from my research into GOAP, and I had planned to create a simplified system that still followed the same basic principles of GOAP. Unfortunately, by the time I had managed to perfect my A* pathfinding, my time had run out. I was forced to quickly cobble together a basic FSM that selected the next pathfinding destination, and that was all; a far cry from my original vision.
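
For the record, the cobbled-together version amounted to little more than this (an illustrative sketch reusing the Node and Pathfinder types from the snippet above, not the actual tournament code):

using System;
using System.Collections.Generic;

public enum BotState { ChooseDestination, Moving }

public class BotFsm
{
    BotState state = BotState.ChooseDestination;
    List<Node> path;
    int waypoint;

    // Called once per game tick; pickDestination supplies the next target.
    public void Update(Node current, Func<Node> pickDestination)
    {
        switch (state)
        {
            case BotState.ChooseDestination:
                path = Pathfinder.FindPath(current, pickDestination());
                waypoint = 0;
                if (path != null) state = BotState.Moving;
                break;

            case BotState.Moving:
                if (waypoint >= path.Count)
                    state = BotState.ChooseDestination;   // arrived; pick anew
                else if (current == path[waypoint])
                    waypoint++;                           // advance along the path
                break;
        }
    }
}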

The silver lining here is that this error was not very significant in the grand scheme of things, and only occurred in the first place due to my relative inexperience with AI. Now that I have a much better understanding of what I'm doing and how to get the code working, the time it takes to complete each individual task should be considerably shorter. If I were to try something like this again, I'm confident that I could get more done within the same timeframe.

Game Dev

The other major task I must ruminate upon is the Game Development Project, in which I and several others collaborated to create a game. I was in charge of basic game-play code (movement, collision handling, items, etc.), as well as graphical effects; namely the god ray shader. I had originally assumed that developing the scripts in Unity would be straightforward, seeing as I was familiar with C# and had used Unity's system before. This would have been true had I opted to develop things properly, and not half-heartedly decided to use the Rigidbody physics in conjunction with everything else.

Oh boy, what a mistake that was. In hindsight, I have no idea why I decided to base ALL game-play related code in the physics simulation. I seem to recall thinking that since 2D platformers involve gravity, using physics within the code was the logical choice. Of course I realise now that this was definitely not the right way to go, and in future I'll be sure to take note of this valuable lesson.

Now, the shader part of my involvement in the project was very good. As I've stated before, I'm rather proud of what I've managed to achieve within the limitations of Unity Free. I don't think I need to reiterate the positive aspects of my work, and if you, o reader, require a memory refresher you can refer to my earlier blog posts.

However, this is not a self-reflection of the work I did; this is a self-reflection of how I approached the work I did. And in this regard, I may have made yet another mistake. You see, because I was so enveloped in my own work, I sort of, uh... neglected my other duties. I worked on the code within the shader for 4 days straight, barely sleeping or eating. This attitude and style of work was fine in previous projects; since they were solo undertakings, the only consequence was a delay in schedule. However, as part of a team I was responsible for keeping in communication, staying up to date on current progress, and being available for assistance in various other tasks.

Obviously I failed to tick off any of these requirements, and this I can only attribute to my lack of experience working in a group. Because I am usually only involved in solo projects, I am not accustomed to constantly communicating with others whilst developing. This is something I can only improve on with experience, and I can only attain this experience by participating in more group-related projects. However, I will try to keep this aspect of teamwork in mind when a similar opportunity arises once again.



The End

So there we have it, my thoughts on how I behaved during the past 2 months. Overall, I'd say my work ethic and attitude were up to standard, despite the occasional slip-ups. I will make it clear that whatever the mistake was, it was not made intentionally. I approached things with the appropriate attitude, I worked on my tasks diligently, and I attempted to assist my development team in any way I could. So I suppose all the errors, at the end of the day, can be chalked up to inexperience and naivety; I arrive on the scene with the right mindset, but I simply do not know enough, and have not experienced enough, to get the job done perfectly. It will take time to get it right, but I think that being able to recognise 'I am inexperienced' shows I'm on my way to improving already.

Monday 4 May 2015

Good News or Bad News?



The past week has been... tumultuous.

On a scale of 1 to 10, I've had a really good moment that brought the week up to an 8, only to be followed by really bad things that dropped it down to a 2. At the end of it all, I am currently sitting on a solid 5, a very half-hearted "meh". If you'd be willing, o reader, I shall regale you with the sad, sorry tale that is my life so far. Starting with the good news, since I'm such a cheerful optimist.



The Good News

I finally managed to get a God Rays post-processing shader working in Unity, using nothing but the tools and functions available in Unity 4 Free Edition. In the beginning I thought it would be a simple task; simply render the scene to a render texture (a feature that very much exists in Unity), apply a hand-written shader effect to it using Unity's built-in custom shaders, and voila! Pretty visual effects! Of course, I'd failed to read the fine print that stated all this was only possible in Unity Pro. Therefore, I had to get creative.


1. Capture the entire screen display into a texture

This process is, conceptually, identical to rendering the scene into a texture. The key difference is in the method; 'capture the screen display', not 'render the scene'. Because render textures were unavailable, I was forced to use another function that produced similar results: ReadPixels. I talked previously about how this function works, and how expensive it is... No kidding, I'm forcing an extra draw call to be performed entirely by the CPU. I managed to optimise it by running the capture inside a coroutine, but using this technique still dropped the FPS from 120 to about 50. It pained me, but it was necessary.


The worst part is nothing has seemingly changed (-____-)
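
For the curious, the capture step boils down to something like the following (a sketch in Unity 4-era C#; ReadPixels has to run after rendering finishes, hence the end-of-frame coroutine):

using UnityEngine;
using System.Collections;

public class ScreenGrabber : MonoBehaviour
{
    public Texture2D captured;

    void Start()
    {
        captured = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        StartCoroutine(CaptureEachFrame());
    }

    IEnumerator CaptureEachFrame()
    {
        while (true)
        {
            // Wait until the frame has fully rendered...
            yield return new WaitForEndOfFrame();

            // ...then copy the back buffer into the texture on the CPU.
            captured.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
            captured.Apply();   // upload the pixels back to the GPU for sampling
        }
    }
}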


2. Run Shader Pass A: 'Occlusion'

Once I'd managed to somehow cobble together an image I could work with, I began my work on actually writing a shader to edit this screenshot texture. The good news here is that Unity's shader language, ShaderLab, supports multiple render passes in a single draw call. This made it much easier for me to layer the separate effects I needed to achieve my end goal of God Rays. Now, the first thing one needs for a God Ray shader is an occlusion pass; essentially, draw every object in the scene as solid black. This allows us to represent everything that will be blocking (occluding) the light as it shines through.

Achieving this in a shader is, to be honest, a cakewalk. Simply sample the base texture, and for every pixel that has an alpha value, render black.


Finally, some visible changes!


This method relies on the fact that the screenshot / render texture will, when drawing the scene, only modify a pixel if it sees that an object occupies that pixel space. If an object exists, it will be rendered to the texture (alpha = 1.0); if not, the render texture will not edit the pixel in any way and leaves it blank (alpha = 0.0). Therefore, if the sampled pixel has an alpha value it must mean there is an occluding object occupying that pixel space.
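
To illustrate, the whole pass boils down to a single CG fragment function, something like this (a sketch only; the enclosing ShaderLab Pass and declarations are trimmed, and _MainTex stands in for the captured screen texture):

struct v2f { float4 pos : POSITION; float2 uv : TEXCOORD0; };
sampler2D _MainTex;

fixed4 fragOcclusion(v2f i) : COLOR
{
    fixed4 src = tex2D(_MainTex, i.uv);
    // Alpha present: an object was drawn here, so occlude it with black.
    // Alpha absent: nothing here, so leave the pixel clear for the light.
    return src.a > 0.0 ? fixed4(0, 0, 0, 1) : fixed4(0, 0, 0, 0);
}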


3. Run Shader Pass B: 'Lighting'

Once the occlusion pass was in, I wrote the next block of shader code: lighting. This is just the simple 3D directional lighting technique used in all languages, so I won't go into too much detail about it here. It should be noted, though, that I wrote the standard vertex-based lighting model from scratch. Since Unity already has the Phong and Blinn shading models built-in, it may have been easier to use those instead of writing a model myself. However, as with all pre-built code, I dislike not knowing how the logic flows internally. As such, I felt more comfortable writing one I knew would work than trusting Unity.
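
The model I wrote was along these lines (sketched in Unity 4-era CG; _LightDir and _LightColor are assumed to be set from script):

#include "UnityCG.cginc"

struct v2f
{
    float4 pos   : POSITION;
    float2 uv    : TEXCOORD0;
    fixed3 light : COLOR0;
};

sampler2D _MainTex;
float3 _LightDir;
fixed3 _LightColor;

v2f vertLit(appdata_base v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.uv  = v.texcoord.xy;

    // Standard per-vertex diffuse: rotate the normal into world space,
    // then scale the light colour by the usual N.L Lambert term.
    float3 n = normalize(mul((float3x3)_Object2World, v.normal));
    o.light = max(0.0, dot(n, -normalize(_LightDir))) * _LightColor;
    return o;
}

fixed4 fragLit(v2f i) : COLOR
{
    return fixed4(tex2D(_MainTex, i.uv).rgb * i.light, 1.0);
}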


I see the light! Hallelujah! 


4. Run Shader Pass C: 'Blur'

Last but not least, the radial blur. In order to simulate the God Rays effect, a radial blur is applied to the whole texture with the blur originating from the light source. Or more accurately, from the light's position in screen space. This involves passing in the light's in-game position and translating it into screen space, disregarding the z-position since we are working with 2D textures. Once the light position is found, we set that as the origin for a radial blur that decays in strength the further it gets from the origin. Something along the lines of this fragment (sketched in CG, with the light position and decay factor passed in as uniforms):

#define NUM_SAMPLES 32

sampler2D _MainTex;
float2 _LightScreenPos;   // light position in UV space, set from script
float _GodrayDecay;       // per-sample falloff, e.g. 0.95

fixed4 fragGodRays(v2f i) : COLOR
{
    float2 uv = i.uv;
    // Direction from this pixel towards the light, split into small steps.
    float2 delta = (uv - _LightScreenPos) / NUM_SAMPLES;

    fixed4 color = tex2D(_MainTex, uv);
    float strength = 1.0;

    for (int s = 0; s < NUM_SAMPLES; s++)
    {
        uv -= delta;                    // march towards the light source
        strength *= _GodrayDecay;       // each successive sample fades out
        color += tex2D(_MainTex, uv) * strength;
    }

    return color;
}

This site also helped.

 ...It's... it's so pretty...!


I'm rather proud of the results, to be honest, and I'll more than likely use the technique again in future... Actually, no I won't. In future I'll just use an engine that has render textures to begin with, and save myself the numerous headaches. I briefly talked about some of these headaches previously, so I won't devolve into another rant here. Needless to say, I still very much dislike working with Unity. If it weren't for its layout and the relatively intuitive component system, I'd never look at it again.



The Bad News

...Aaaand here comes the downside. So as you can imagine, when I had completed this wonderful project of mine I was ecstatic. I was cheering, I was laughing, I even did a little dance. Best of all, this tech that I had managed to complete was to be displayed to the public in a Games Exhibit! I could see my work proudly displayed, and people would see me for the genius, coding savant that I am!

"This calls for celebration!" I thought, "break out the liquor!"


...Yeah.

Two shots of Tequila and half a bottle of Whiskey later, I awoke to find that the whole world was constantly spinning and my legs had transformed into jelly overnight. I could barely walk without feeling nauseated, let alone make it to an exhibit on the other side of town.

Sigh.

It really is my own stupid fault, and that's why I'm feeling rather blue at the moment. I had a wonderful opportunity to see my own work on display, and I missed out on it by being a drunken idiot. I can only hope that the exhibit went well and I didn't screw everyone else over by not being present for the preparations. My apologies if I did.


I think I'll quit drinking.

Monday 20 April 2015

"That's not right. It looks pretty, but it's not right."

Oh shaders. How I love you so. And I mean that in 'the abused housewife still loves her husband' kind of way. I think this, once again, confirms my suspicions that I am a masochist.

So last week I began my journey into the perilous realms of Terra Unity Morte, specifically in the aspect of custom shaders. My plan was to create a God Ray Post-Processing shader, and it was a fairly straightforward one.

The Plan

Step 1. Create a new render texture and render the scene onto it
Step 2. Iterate through each pixel. If the pixel alpha is greater than 0 (i.e. an object exists there), draw the pixel as black (Occlusion Pass)
Step 3. Draw lighting shade onto every other pixel, ignoring the occluders (Lighting Pass)
Step 4. Apply a God Ray effect (Effect Pass)
Step 5. Display the material with the render texture on a screen-sized quad, placed in front of the scene with alpha blending (Scene Render)
Step 6. Pretty!



Unfortunately, at this point old man Unity came along and looked over my plan.
"Nice plan you've got here," it noted, "It'd be a shame if.."

Unity's answer to my plan

Step 1. Disable render textures in Unity 4's free version (what I am using)
Step 2. Set shaders to outright ignore alpha values by default, drawing to RGB channels only
Step 3. Allow multiple passes in a single shader, but have texture samples in each subsequent pass refer to the newly rendered texture, complete with the previous passes' effects
Step 4. Make most (if not all) Post-Processing effects, including God Rays, require multiple render textures, not just one
Step 5. Ensure that updating a quad's position every frame to keep it in front of the camera causes issues with physics (for some reason)


Satisfied with its work, Unity wandered away, leaving me to weep over what remained of my plan.



So yes, there were issues.

Render textures not being available was the biggest one, since that is the core ingredient of any Post-Processing effect. I eventually settled on a CPU-based alternative, ReadPixels(), which uses the CPU to read every pixel currently on screen and dump the data into a 2D texture. If that sounds like an extremely expensive operation to you, that's because it is. I am essentially performing a screenshot after every frame, manipulating the screenshot, then re-rendering the edited screenshot into the game. Good God.

The second biggest one is the frankly labyrinthine system that is ShaderLab. In the spirit of all things Unity, this system attempts to be 'User Friendly' by automatically performing several functions for you. That's great, except that I now have no idea what is actually happening when the scene is rendered. It was only after hours of research that I discovered shaders only render RGB by default, completely ignoring the alpha channel. Any transparency can only be done via alpha blending, and alpha-inclusive renders can only be performed under certain settings with the right flags declared. Overall the thing is so unintuitive I feel like Theseus wandering the Minotaur's maze. Except I don't have Ariadne guiding me with a piece of string.
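
For the record, the sort of flags I mean look something like this (a fixed-function sketch, not my actual shader; without the Blend and ColorMask lines the alpha channel simply never survives):

Shader "Custom/AlphaAwareBlit"
{
    Properties { _MainTex ("Base (RGB)", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha   // opt in to alpha blending
            ColorMask RGBA                    // write the alpha channel too
            SetTexture [_MainTex] { combine texture }
        }
    }
}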


Still, I have managed to get somewhere. It's far from what I want, but I am currently up to step 4 in my once-foolproof plan. It's very slow going since I have no idea what will actually be rendered 90% of the time, but that's less to do with Unity and more to do with shader coding in general. I'm hoping that I can get to at least a passable rendition of the effect I want, at which point I will declare victory and look the other way.

And to think, all this could have been avoided if we had used Unity 5.



Some very incorrect (but nonetheless pretty) render results

 





Monday 13 April 2015

The glorious Godrays - shaders in Unity

 [http://s8.postimg.org/7e8xjn9gl/rays002.jpg]

When I first began to think about programming as a life choice waaay back, the thought of writing graphics code was not one that sprang to my mind. My idea of interesting, fun-to-write code was AI programming, and that's the kind of work I wanted to do. Now, I still do love me some AI (my latest blog posts about Alien AI and GOAP are dead giveaways) but I never imagined that my second best interest would be in shaders and graphics.

I usually detest the Games Industry's blind obsession with graphical fidelity so perfect you'll weep tears of joy. But once I started learning GLSL and the visual effects you can achieve with it, I was hooked. I did toon-shading, bump mapping, dynamic lighting and shadowcasting. And now, since I am in the process of making a game involving God, I'm writing a Godray shader.

Godrays, or to use the official term, crepuscular rays, are shafts of light that appear to radiate from a single point, usually behind an obstacle like clouds or a person. The light rays often darken the obstacle hiding the light source, and as a result it appears as though a bright glow is emanating from it.

 Neat little online demo of the effect in action






After a bit of research, I've managed to get a good idea of how you would simulate this effect in a shader. Simply render an occlusion map onto an FBO, render the lighting as normal, then overlay the occlusion onto it with blending to match the brightnesses of each. Overall this isn't that complex a concept, and if I were to write this in GLSL within an openGL framework it would be done in minutes. However, there are a few potential roadblocks:

1. Unity

I'm not sure how Unity will handle specific lighting effect shaders, especially if each lit object has to perform its own separate godray effect. I may have to manually add a light source behind major areas, and if so that will get annoying fast.

2. Unity

Unity uses its own shader language called ShaderLab. Why they didn't just use OpenGL, DirectX or even CG is beyond me. Especially since most ShaderLab shaders use tunnelling to access the underlying CG code anyway. I don't think it'll pose too much of an issue, but it's still possible that ShaderLab will try and trip me up every 20 seconds.

3. Unity

Unity's system of materials, lighting and shaders is confusing. There is no easy way to edit shader values and settings, and barely any notion of texture maps. According to the online Unity manual, there is a function concerned with render textures. But knowing Unity it won't be entirely code-handled, which means placing template materials in the scene, which means linking issues, which means...

Basically, I don't enjoy using Unity.

Monday 6 April 2015

Alien AI: The Perfect Algorithm



 [http://www.giantbomb.com/videos/quick-look-alien-isolation/2300-9543/]

Recently I purchased Alien: Isolation on Steam, which had been on my wish list for quite some time. 75% off, game + 3 DLC for $20. Praise Lord Gaben!

I'd heard that the game was very good at achieving its goal, which is to say, scaring the living crap out of you. I had also heard that a large part of its success came from the alien, whose AI algorithm gave it very believable, lifelike behaviour; it was (so I'd heard) like being hunted by a thinking, breathing, perfect organism.

So when I booted up Alien: Isolation, at about midnight for maximum immersion, I was feeling hopeful and excited. Firstly, since I am a long-time fan of Horror Games, it had to tick all the right boxes in regards to what a horror game should do. Secondly, I was interested to see if this Alien AI was really all that it had been talked up to be. Would it really think strategically, react to its environment dynamically, and generally behave in a lifelike manner?


*4 hours later*


WELL. That was... that was something.

The alien, to put it elegantly, is freaking terrifying. It appears at any moment, slithering out of ceiling vents like some great serpent. Once on the scene, the AI performs in a way that I feel is very unique among video game AI systems. Points I found particularly interesting about it are:

  • The AI seems to act on an FOV basis, meaning it cannot spot you if you are not visible. Hiding behind boxes and walls is a core mechanic used to avoid the alien, and if it cannot see you it may walk straight past you.

  • The alien hunts you based on sound, meaning noises such as footsteps and colliding objects attract it. I found that as long as I stayed completely still, it would often lose track of me. Conversely, make any sort of noise at all and the AI will pinpoint your exact location and come running.

  • Items such as flares, flashbangs and noisemakers are introduced, creating the gameplay mechanic of 'distract it while you run'. However, this seemed to lose effectiveness over time, as the alien learned from its past experiences.

  • The AI always seemed to know the general area of where I was, even if it couldn't find my exact location. Whether this is another clever AI aspect or simple rubber-banding, I'm unsure.

Overall the AI was pretty amazing, and as I played I began to wonder how the developers had accomplished this complex, lifelike behaviour. Unfortunately a Google search didn't come up with anything too useful, which is understandable. Since the game has only recently been released, the developers are not willing to unveil too many behind-the-scenes aspects.

I did, however, manage to find several interviews with the devs talking about how the AI system works. Based on this info, I can scrape together my theory on Alien: Isolation's AI system.

Interview A (skip to about half-way)
Interview B / Quotes
Interview C (no. 6 onwards)


My Theory / Concept


Generally, the AI system functions as a complex decision tree that acts upon outside stimuli. In this case, those stimuli are audio cues such as player footsteps, doors closing, and objects being knocked about. Once the AI 'hears' these noises, it will decide to investigate. How it does so is, I imagine, similar in approach to GOAP; it can simply walk over, use a vent, or even take a circuitous route to delay its arrival.

Next comes the more interesting aspect, reactionary decision-making. According to interviews, the system possesses a series of pre-defined behaviour patterns that are locked off from the main loop. These auxiliary instructions are unlocked in reaction to player behavior and remain as permanent options for the decision tree.

For example, if the player is using a lot of distraction devices, the AI will normally be obliged to investigate the noise in accordance with its basic behaviour. However, after a certain number of them the 'doubt sources of noise' flag is turned on, and an option to ignore that noise and look elsewhere is activated. From that point on, distractions become less effective since the AI can choose to ignore them and look elsewhere. Improving this even further would be to guess where the player is likely to be based on where the distraction was placed (i.e. the player wants me to go over there, so let's go in the opposite direction).
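
Sketched in code, my guess at the shape of that 'unlockable behaviour' idea is something like this (purely speculative, with invented names; not Creative Assembly's actual system):

using System;
using System.Collections.Generic;

public class AlienBrain
{
    // Behaviours start locked and are switched on by observed player habits.
    readonly HashSet<string> unlocked = new HashSet<string>();
    int distractionsHeard;

    public void OnNoiseHeard(bool fromDistractionDevice)
    {
        // After enough lures, permanently unlock the 'doubt noise' option.
        if (fromDistractionDevice && ++distractionsHeard >= 3)
            unlocked.Add("DoubtNoiseSources");

        if (fromDistractionDevice && unlocked.Contains("DoubtNoiseSources"))
            SearchAwayFromNoise();   // guess the player is somewhere else
        else
            InvestigateNoise();      // default obligation: go check it out
    }

    void InvestigateNoise()    { /* walk over, use a vent, or take a detour */ }
    void SearchAwayFromNoise() { /* search in the opposite direction to the lure */ }
}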

Of course, these are just theories, and I may be completely wrong. But I also think that, even if I am, the basis for a very clever AI system is in here somewhere, and the ideas I came up with will likely be designed and developed when the opportunity presents itself. I think I'll add this AI concept to my ideas notebook.

Monday 23 March 2015

The calm before the storm


This may be an odd and somewhat controversial statement, but I am mostly convinced I am actually a masochist. Now, before you all label me as some sick, twisted pervert and lock me up, allow me to explain.


Being a programmer, my days are mostly spent in an endless cycle of frustration and joy; spend 8 hours trying to fix a bug, banging my head against the wall, followed by 5 mins of yelling out in praise of whatever Gods saw fit to show me mercy. As you can see, this endless cycle has a ratio of about 80 pain to 20 pleasure. It's not something a normal person would choose to do as work, much less willingly in their own free time.

Yet here I am, plodding away for hours on end. Even now, as I write this mess of words resembling a blog post, my mind is planning a complete system overhaul and major redesign of a game I had previously developed. Titled 'Haunted Within', the project was forced to reach code lock and release prematurely. This meant a lot of rushed code, hacky workarounds for issues, and overall poor standards that I would not normally tolerate. I've regretted a lot of what I did, and now that I have some free time on my hands I figured I could revisit it.

Exhibit A: a work in progress


My plan to 'fix' the game is fairly straightforward; since most of my desired features are already in, I simply need to remove all the hacked-together bits, correct the underlying problems at their source, and polish the fundamentals of what already exists. In the cases where I do need to change major aspects, I'll simply scrap what I have and rework the system from the ground up. Obviously this will take a lot of work and time, so I'll try to avoid doing whole changes if I can. But I know for a fact that some systems really need the overhaul.



System A: Graphics

This one I don't think needs too much fiddling. I do want to touch up the draw order; currently the objects are being drawn in order of most recently to least recently spawned. By introducing a z-buffer, I will be able to rearrange the draw order so that objects closer to the foreground are drawn before others.
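
One way to get that ordering is to sort by a depth value each frame before drawing (a sketch with invented types; with a z-buffer in place, anything drawn later and deeper then fails the depth test instead of overdrawing the foreground):

using System.Collections.Generic;

public class Sprite { public float Depth; /* texture, position, etc. */ }

public static class DrawOrder
{
    public static void DrawScene(List<Sprite> sprites)
    {
        // Smallest depth = closest to the foreground = drawn first.
        sprites.Sort((a, b) => a.Depth.CompareTo(b.Depth));
        foreach (var s in sprites)
            DrawSprite(s);
    }

    static void DrawSprite(Sprite s) { /* engine-specific blit */ }
}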

System B: Game Logic

This system is where things get messy. When I say game logic, I'm referring to things such as object spawning, collision handling, object logic, etc. Overall, this system tries to handle far too much on its own and does so in a really disorganised way. I plan to split the game logic into 2 separate systems: one for object spawning / logic, and one for object interactions.
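
The split I have in mind looks roughly like this (interface names invented for illustration):

// One system owns object lifetimes and per-object logic...
public interface ISpawnLogicSystem
{
    int  Spawn(string objectType, float x, float y);   // returns an object id
    void Despawn(int objectId);
    void Update(float dt);                             // tick each object's logic
}

// ...and a second owns the interactions between objects.
public interface IInteractionSystem
{
    void ResolveCollisions();   // who touched whom this frame
    void HandleItemPickups();   // item and trigger interactions
}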

System C: UI

Ah, the UI system. The trouble with UI in this project is that it uses a third-party system named CGUI. Normally third-party systems aren't a bad thing and can make things significantly easier, but CGUI is... well, it's not exactly pretty. It seems to be designed around C# syntax, and this makes it rather tricky to integrate into my own code. To be honest I'm not sure what I can do here... I guess I can try to keep it as clean and organised as possible. Pretty much all I can do, really.

I'm going to be working my way through these systems and more (a LOT more) through the coming weeks, whenever I have time away from my other duties. My long-term goal is to have the game polished enough for sale via Steam Greenlight, so if I want to get there I need to get working.

Monday 9 March 2015

VR, AI and the graveyard shift


People, praise and rejoice! For my house is FINALLY hooked up to the World Wide Web. I should be grateful, but in all honesty I'm feeling a little depressed.

You see, children, it just so happens that our exchange area (South Brisbane) is ruled by the demonic overlord Telstra. They decided that our area, and our area alone, should possess a lovely alternative to the NBN called Fibre Access Broadband.

Go and see their site. Just look at it.

"Now you can enjoy all the speeds of NBN fibre, arbitrarily reduced to ADSL2 speeds for some reason! Oh and this stuff isn't cheap so we're also going to triple the connection costs. Don't you love us!"

Screw you Telstra. I hope all your children are eaten alive by spiders and any money you made seized by the government under suspicion of racketeering.

But enough dark muttering. Let's move onto a more positive topic: games and coding! (^_^)



GAMES:

The other week I managed to get my hands on a joystick controller, a second-hand Logitech Extreme 3D. I'm not usually a fan of flight sims and such, but when I received it, a stroke of genius struck me. Joystick + Oculus Rift + War Thunder.

War Thunder is a Free-To-Play war-vehicle combat sim, with multiple players battling in either tanks, planes or a combination of both. It is a very polished game considering it's F2P and still in development; there are future plans to implement a Naval Combat aspect with warships, and to integrate all three (air, land, sea) into a battle royale mode.

 It may be glorifying the horrors of war, but my god is it glorious.

I've been playing it for a while now, and it occurred to me that having an actual joystick to fly WWII war planes would be much better than the mouse + keyboard alternative they offer us.

But that's not the end of it, ladies and gentlemen. I also happen to own an Oculus Rift, which War Thunder supports. The plan was to plug in my Rift and joystick, boot up the game and experience what it's like to fly a Mitsubishi A6M Zero in all its virtual glory.

The result was everything I had hoped for. Controlling the plane felt authentic, the view from the cockpit in VR was incredible, and trying to perform Immelmann turns was exhilarating. Unfortunately I couldn't experience it for too long once the motion sickness set in; since I own the DK1 version of the Oculus Rift, motion sickness is pronounced as it is, even before the further mind-bending that comes from flying a plane in VR. It's also not very viable for actual combat, since orientation and spatial awareness take a nosedive when you don't have a tangible sense of the object in motion. I am definitely going to keep using the joystick, but the Oculus experience will have to remain for joy-flights only.


CODING:

My most immediate programming concern is the AI bot tournament, which is to be held this Wednesday. I was pleasantly surprised to see my bot doing so well in the last tournament, coming in 4th. This is an admirable result considering I wrote the whole thing in 2 days and didn't really bother with tweaking. This time, however, there is the added challenge of navigating a maze and the pathfinding that comes with it. Apparently, many of the others in the tournament placed great emphasis on pathfinding, and that's the only reason they lost in matchups against superior shooters.

Hmm. That's not good. Movement is possibly the weakest part of my bot code. So how to improve it? My original plan was to use a simplified form of Goal Oriented Action Planning to govern my bot's overall behaviour, with standard A* to figure out the actual pathing.

Unfortunately, the more I read about GOAP the more I realised that it wasn't suited to the task at hand. GOAP is designed to simplify large, nebulous goals by breaking them down into a list of possible actions and determining the most appropriate plan. When your goal is as simple as 'find bot, shoot, kill', the rigidly structured system and large overhead are wasted, and the result becomes more confusing than before.

GOAP readings. A surprising number of games use GOAP in their AI

In the end, I have simply stuck with standard A* pathfinding, with some plans for maze sector-based hierarchical A*. It does seem like Finite State Machines are best suited for smaller projects such as this, and in retrospect it makes sense that they would be.