Using Cookie Clicker to learn JavaScript?

What is Cookie Clicker?

For those who don’t know, Cookie Clicker is a game about clicking and collecting cookies. That’s it. I consider it to be in the same sort of vein as many other Cow Clicker style games, but not quite as bad. It’s a nice, simple waste of time and a good example of an HTML5 game that doesn’t use the canvas tag (at least not yet).

Cookie Clicker

What do you do in it?

The main thing you do in the game is click things to gain cookies or increase the rate at which you accumulate them. There’s no real-money store (thank god) and no real way of getting ahead of someone who’s started before you without cheating horribly and using the JavaScript console. And the fact that you can edit and fiddle with the game in such a way is the subject I’d like to discuss.

What’s Special About it?

Because it’s made using HTML and JavaScript (and isn’t minified or obfuscated in any way) you can take it apart and learn from it, devise the best strategy or analyse how the underlying systems work. In my eyes this could be a gold mine for teaching people JavaScript, HTML and some basic web programming. You could use it as a way of introducing concepts like variables and later methods to students.

For example, typing Game.cookies into the console will cause it to output the number of cookies you currently have in the game.
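A couple of other things to try (Game.cookies is the one mentioned above; the other two names are taken from the game’s source at the time of writing, so they may change between versions):

Game.cookies        // the number of cookies you currently have
Game.cookiesPs      // how many cookies you are gaining per second
Game.ClickCookie(); // the function run when you click the big cookie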

I use Firebug in Firefox

And that’s just one thing you can find out; the source code is littered with values that would be useful or interesting for players to see.

With access to the JavaScript and HTML you can introduce someone slowly to how clicking buttons affects the game’s inner state, and later actually get them to write some scripts to assist them in playing the game, or downright do everything for them.

I’ve written a small bot that can play the game just as well as I can. And I’d love to see others do the same, to see how much better or different they can make things.

In fact, that’s my challenge to those reading this: make a bot that can play Cookie Clicker.

Below is my bot (shared on GitHub) for inspiration. The bookmarklet is below too; simply drag it to your favourites and click it when you are on the Cookie Clicker game to see it in action:

Cookie Clicker Bot
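If you want a starting point for a bot of your own, here is a trivial sketch of just the auto-clicking part (this is not my bot, merely an illustration, and it assumes the Game.ClickCookie() function mentioned above still exists in the version you are playing):

var autoClicker = setInterval(function () {
	Game.ClickCookie(); // simulate a click on the big cookie
}, 100); // roughly 10 clicks a second

// and to stop it again:
// clearInterval(autoClicker);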

Sorry for the short post and the long delay between blog posts; more will likely be coming soon!

Making an analog clock using the HTML5 canvas tag

It’s been a long while since I have posted anything; this keeps happening, but ah well. Anyway, in this post I am going to show you how to create an analog clock using JavaScript and the HTML5 canvas tag.

The HTML

First let’s create a basic HTML5 page that will contain the canvas tag and other markup:
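Something like the following will do (the file name analogclock.js, the element id and the sizes here are illustrative, not the exact values I used):

<!DOCTYPE html>
<html>
<head>
	<title>HTML5 Analog Clock</title>
	<script type="text/javascript" src="analogclock.js"></script>
</head>
<body onload="setupAnalogClock(document.getElementById('clock'), 190);">
	<canvas id="clock" width="200" height="200">
		Your browser does not support the canvas tag.
	</canvas>
</body>
</html>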

The JavaScript

As you can see, I have made the page link to an external JavaScript file and call a set-up method; these will be explained below. I’ve done this so that it is easy to include the HTML5 clock on any page with minimum hassle. The JavaScript code itself makes use of no external libraries like jQuery, so it is portable.

Below you can see the contents of the JavaScript file:
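(What follows is a minimal sketch that matches the walkthrough below; the exact hand lengths, colours and mark sizes are illustrative.)

function setupAnalogClock(canvas, width) {
	var ctx = canvas.getContext("2d");
	// centre coordinates of the canvas
	var centerX = canvas.width / 2;
	var centerY = canvas.height / 2;
	var radius = width / 2;
	var toRadians = Math.PI / 180;

	// run once a second; updates and processes all the drawing of the clock
	function tick() {
		ctx.clearRect(0, 0, canvas.width, canvas.height);

		// draws the static parts: the face, the centre point and the hour lines
		function drawStatic() {
			ctx.strokeStyle = "black";
			ctx.fillStyle = "black";
			ctx.lineWidth = 4;
			ctx.beginPath();
			ctx.arc(centerX, centerY, radius, 0, 2 * Math.PI);
			ctx.stroke();
			ctx.beginPath();
			ctx.arc(centerX, centerY, 3, 0, 2 * Math.PI);
			ctx.fill();
			drawNumbers();
		}

		// draws the lines representing numbers, counting backwards from 12
		function drawNumbers() {
			ctx.lineWidth = 2;
			for (var hour = 12; hour > 0; hour--) {
				ctx.save();
				ctx.translate(centerX, centerY);
				ctx.rotate(hour * 30 * toRadians); // 30 degrees per hour
				ctx.beginPath();
				ctx.moveTo(0, -radius);
				ctx.lineTo(0, -radius + 8);
				ctx.stroke();
				ctx.restore();
				// ctx.fillText(hour, 0, -radius + 20); // numerals, commented out
			}
		}

		// draws a hand of the given length (0 to radius) at an angle in degrees
		function drawHand(length, angle) {
			ctx.save();
			ctx.translate(centerX, centerY); // 1. translate to the centre
			ctx.rotate(angle * toRadians);   // 2. rotate by the desired angle
			ctx.beginPath();                 // 3. draw up to the desired length
			ctx.moveTo(0, 0);
			ctx.lineTo(0, -length);
			ctx.stroke();
			ctx.restore();
		}

		drawStatic();

		var now = new Date();
		// set the colour and line width of each hand before calling drawHand()
		ctx.strokeStyle = "black";
		ctx.lineWidth = 4;
		drawHand(radius * 0.5, ((now.getHours() % 12) + now.getMinutes() / 60) * 30);
		ctx.lineWidth = 2;
		drawHand(radius * 0.75, (now.getMinutes() + now.getSeconds() / 60) * 6);
		ctx.strokeStyle = "red";
		ctx.lineWidth = 1;
		drawHand(radius * 0.9, now.getSeconds() * 6);
	}

	tick(); // draw onto the screen immediately
	setInterval(tick, 1000); // then once a second while we stay on this page
}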

This file consists entirely of the method setupAnalogClock(). I will now run through the file and how it works:

  • setupAnalogClock() takes two parameters: the first is the canvas element, the second is the width (diameter) of the clock.
  • Once called, we begin by getting the canvas context and the centre coordinates of the canvas.
  • After doing that I defined a method called tick() that will be run once a second and will update and process all the drawing of the clock.
    • Within tick() I defined two other functions called drawStatic(), and drawHand().
    • drawStatic() simply draws all the static parts of the clock; this includes the face, the centre point and the lines representing numbers around the face of the clock.
      • The lines representing numbers are drawn in their own function called drawNumbers() which counts backwards from 12 and draws each line in place.
    • drawHand() is a method that takes two parameters: the first is the length of the hand (this should be between 0 and the radius of the clock), the second is the angle at which to draw the hand.
    • The colour and line width of each hand is set before calling drawHand() in the tick() function.
  • At the end of setupAnalogClock() we call tick() once (to draw onto the screen) and then use setInterval() to call tick() once a second while we stay on this page.

The results look like this (for a live version click here):

Example of Analog Clock made using HTML5 Canvas

The canvas is slightly larger than the clock face to accommodate the line width of the face.

How the Hands Work

The hands (and the hour markings) work by using transforms supported by HTML5’s canvas tag and some basic mathematics.

Firstly I worked out a few equations based upon some facts:

  • There are 360 degrees in a circle (which in radians is 2 * pi).
  • There are 12 hours on a clock face.
  • There are 60 minutes in an hour.
  • There are 60 seconds in a minute.

From these you can work out the degrees a hand should rotate based upon what it is representing:

  • 360 / 12 = 30, which is the number of degrees the hour hand should rotate per hour. E.g. at 9 o’clock the hour hand should have rotated 9 * 30 = 270 degrees.
  • 360 / 60 = 6, which is the number of degrees the minute hand should rotate per minute AND the number of degrees the second hand should rotate per second.

Of course, the rotate methods for the canvas tag use radians instead of degrees (like most programming languages and libraries), so we then need to translate these degrees into radians by simply multiplying them by pi / 180.
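For example, using the 9 o’clock case from above:

var hours = 9;
var degrees = hours * 30;                // 270 degrees
var radians = degrees * (Math.PI / 180); // what the canvas rotate method expects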

So using these, the method for drawing the hands becomes simple:

  1. First we translate to the centre of the clock.
  2. Then we rotate by the desired angle (taking into account that the canvas origin is at the top left).
  3. Then we draw the line from where we are up to the desired length.

Hopefully this will help others to create a little HTML5 analog clock. Feel free to use the code, just don’t try to claim it as your own! I wrote this quickly while trying to make a nice HTML5 replacement for the analog clock on iGoogle (since iGoogle will be disappearing soon). Next I will try to create one that makes use of CSS3 transforms instead of the canvas tag.

You may have noticed the commented-out code for numbering. I didn’t have it active because I couldn’t get the numbers looking satisfactory, and I think the clock looks better without them anyway.

Happy Coding!

Planetside 2 API Experiments

Recently I have been playing Sony’s Planetside 2 game after having not played it for a while. As well as simply playing the game again, I’ve also begun experimenting with the Developer API that SOE has provided. All you do is make a request to the Planetside 2 API and get data returned in the form of a JSON object.

Below are a few of those experiments. Note that I am still relatively new to Python and am using this as a way of teaching myself more about it, so the code may not be the most efficient or follow correct code styles, but it works! I’ve made use of the requests Python module to make the API requests as it seemed easy to use.

Below is a simple example that will display the average number of certs you gain per minute from when you started the script. It can be modified to work on different intervals (e.g. hourly or every 10 minutes). Not being a great First Person Shooter player, I was surprised at one point to find that I had an average of 2 or 3:

And one of the first things I made was a bot that periodically told me, both on the console and via a text-to-speech engine called pyttsx, what the last event was that happened on the server I play on. Note: it doesn’t follow Python variable naming guidelines:

The next thing I hope to make is something that will record my progress each hour of the day and then plot it on a graph, to show me when I am active and how long it will take me to gain enough certification points to buy the weapons and equipment I want.

I would love to hear what other people have done with the API and what people would like to see done with the API so leave a comment if you know of anything interesting or have done something cool yourself!

Xbox One Thoughts/Rant

Note: This is my first attempt at reviewing a games console announcement like the Xbox One, or indeed at any kind of gaming article at all. For better, professionally written articles on this subject see Kotaku or another gaming news site.

After watching the Xbox One reveal after work I wasn’t too impressed with the console itself or with Microsoft’s apparent business strategy. I was watching it with a friend over Skype and at almost every turn we were disappointed. Now, we weren’t expecting to be enthralled by much of it, but we had high hopes that it would be better than the previous PlayStation 4 reveal. Quite frankly, it was worse.

I can sum up a few of the reasons it was worse with one of the top trending YouTube videos on the subject:

Summary and Thoughts on The Xbox One Announcement

If you can’t be bothered to watch that (and I don’t blame you if you don’t): the Xbox One seems to no longer be a games console; it looks like a TV set-top box that can play games. In almost every other sentence the word Television, or its abbreviation TV, was used. A great song and dance was made about its new TV-related features: watch TV on it, keep up to date with sports on it, there’s a TV guide on it. A lot of these things were touted as revolutionary, but I hate to break it to Microsoft: we’ve been able to do these things on our actual TVs for years. In fact, here in the UK all televisions now come with Freeview, and every one of those has a guide; a lot of them have reminders and record functionality. We have a TiVo box in my house, and it’s brilliant; it has all those functionalities and more. Why there is this obsession at Microsoft with being the all-in-one media centre is beyond me. There’s already an established market for these products, and it seems widening your focus from games to media box will have an adverse effect on your sales, not to mention drive up the price of manufacture and force you to compete in multiple markets.

Another gimmick they put emphasis on was the Kinect 2. From a hardware standpoint it is impressive, but marketing it as a replacement for a remote and controller seems a bold strategy. One thing I thought was slightly comical was the hand gesture used to return to the home screen; it seemed imprecise and looked like it could get very irritating, compared to controlling things with a controller that has very precise controls (this is why we still use keyboards when working and playing on a PC). Voice control may also seem cool, but my first thought when seeing the Xbox One turned on via voice was a wish for someone in the audience to yell “Xbox Off”. Constantly yelling at a machine seems very imprecise even if it supposedly recognises your voice (this is one reason I dislike Android’s and Apple’s voice recognisers). Perhaps these issues have been addressed in the Xbox One; we won’t know until it is actually out, and if they have been then there’s a small chance the gimmick might catch on.

There is an obsession that large companies have at the moment to do with social media. Not just the games industry; countless other industries have taken to Twitter, YouTube and Facebook to promote themselves and their products, as well as to add social features to said products. We’ve even done this where I work (I added social media buttons to one of our products). At Sony’s PS4 unveiling they showed how you could instantly share clips of yourself, stream, and interact with others whilst playing (to the scary degree of giving control of the game to someone else). At this event we had some similar things shown, although not many to do with games. They showed how you could see TV that was trending and popular, for example. This is a slightly personal complaint: even though I use social media, I don’t see the need to always be up to date and interacting with others constantly.

Speaking of a lack of game-related content, where were the games? We got shown trailers for a few titles but no actual gameplay at all. I felt disappointed after watching the Call of Duty: Ghosts trailer, as the spokesperson specifically said it was all done in engine but didn’t show anything that couldn’t have been a CGI movie. They spoke about exclusive games and Halo, then announced a live action Halo TV show produced by Steven Spielberg. That’s great and all, but what about games? People don’t buy a games console to watch TV on; surprisingly, you already have a television for that.

Another feature shown was the ability to multi-task; you can watch a movie or play a game whilst surfing the web in Internet Explorer. Granted, this is a good feature, and I see little bad in it other than the fact that most people have multiple devices nowadays, something the developers of the new Xbox realise, as they showed support for using a mobile as a remote control. So why wouldn’t someone more easily use their mobile or tablet to surf the web instead of the Xbox? Granted, the TV screen will be bigger, but the side-by-side view will squish the game or the movie and ruin your gaming and viewing experience, not to mention breaking your immersion. And that’s ignoring the problem of the lack of a keyboard.

At least they showed the console, I guess, even though that’s not important in the least; most people care about what it can do rather than what it looks like (one reason I was a little confused about people being upset at the PS4’s announcement). Personally I think it looks a bit ugly, but as I said it’s irrelevant; all it will do is sit on a shelf or under your TV. The Kinect, on the other hand, looked like it could be problematic to place.

People online pointed out that the cheering wasn’t from the press (that’s good) but from developers. This caused some people to be angry and say it was a misrepresentation of the excitement at the event. However, if you think about it rationally, of course the developers would be excited about the unveiling of something they’ve worked hard on, so why wouldn’t they clap and cheer? The way the microphones were set up may have intentionally focused on the cheering, but again that doesn’t really matter, since you shouldn’t care about the audience’s reaction when viewing a console announcement.

In summary, from this event I think that Microsoft are placing a large bet on their brand name, in the hope that they will be able to make sales on that alone and a few gimmicky features. Their new Xbox One will compete in two of Sony’s markets: smart TVs and games consoles.

It’s a rather risky gamble really, since Sony has a big advantage over them on the TV front and will always make money off of selling TVs that can be used with either console.

After the Event

After the event it was confirmed that there would be a fee to pay when wanting to install a pre-owned game onto an Xbox One, with some rumours saying this fee will be the full price of the game, which I highly doubt will be the case. This does, however, eliminate the ability to lend games to your friends, an advantage console gaming has had over PC gaming for a while now, since the proliferation of DRM systems like Steam and Origin. It may also require you to pay the fee to play multiplayer modes with friends on the same console under a different account, something I do frequently on my brother’s Xbox 360. This is bad. From a business point of view I can see why they would do this: it stops shops from reselling pre-owned games that give no revenue to Microsoft or the developers, but it also lessens the incentive for those stores to care about the consoles beyond stocking them. It also removes a very nice part of console gaming, a social aspect beyond the like, share and subscribe mentality of YouTube. If you put up fiscal barriers between players and their enjoyment, it will be a detriment to the console and the games. Imagine a group of 12-year-olds wanting to play the latest shooter game together on the same console, only to realise that they can’t because it requires them or their parents to pay, which they can’t afford to do after purchasing the console. It is also detrimental to rentals: why rent a game when you can buy it for a similar price? This was later claimed to be only a ‘potential scenario’, so we will have to see.

They also said that it wouldn’t be always online. However, it will need to connect for every new game installed, and possibly connect daily to keep you able to play your games. That’s fine if your only problem is a dodgy internet connection, but all these features they showed (the social interaction, internet browsing and TV) will require an active internet connection, so it might as well be always online to get the full experience.

There also won’t be any backwards compatibility, an issue some people won’t care about. Backwards compatibility in a console expands the potential library of games you can play on it, reducing the need for great exclusive titles and meaning that consumers don’t need to purchase additional games on top of a new console. The new Xbox One makes use of a different architecture to the Xbox 360, which makes the previous games difficult to port; however, I am confident you could emulate the Xbox 360’s architecture on the Xbox One’s hardware without too many drawbacks, seeing as the Xbox 360 has been emulated on PC, which makes use of the same architecture as the Xbox One. It doesn’t make sense with Microsoft’s idea of DRM, however, since with Xbox 360 games there would be no way of knowing if the disc has been used on another machine. So no backwards compatibility seems to be more a way of controlling the games market on the Xbox One than being technologically infeasible.

I think Angry Joe summed up a lot of people’s feelings towards Microsoft after this event. We can only hope that at E3 there will be some more announcements, clarifications and actual games shown; for now it seems like Microsoft weren’t targeting gamers in their event.

Minimum Distance between a Point and a Line

I haven’t published a blog post in quite a while! Whoops!

I have several in the works: one in particular, comparing the JavaScript engines Nashorn and Rhino, is near completion, and another is on something I’ve been doing with my Raspberry Pi.

For now, here’s a short web page I put together showing how to work out the minimum distance between a point and a line, based upon a solution posted on this website, which has some very useful formulas for working out problems in 2D and 3D space. I’ve only written an example for the first formula on that page; hopefully seeing it as an interactive visual representation might help anyone struggling to understand how to find the minimum distance between a point and a line.
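For reference, the calculation itself boils down to something like this in JavaScript (a sketch; the function name and parameter order are my own):

// Minimum distance from the point (px, py) to the infinite line through
// (x1, y1) and (x2, y2): twice the area of the triangle the three points
// form, divided by the length of the segment acting as its base.
function minDistanceToLine(px, py, x1, y1, x2, y2) {
	var dx = x2 - x1;
	var dy = y2 - y1;
	return Math.abs(dy * px - dx * py + x2 * y1 - y2 * x1) /
		Math.sqrt(dx * dx + dy * dy);
}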

You can view the example here: http://lyndonarmitage.com/html/points_and_lines/

I’d suggest viewing it in a web browser that supports the range input type (Opera, Chrome, IE10) as that lets you adjust the values more easily, although it works just as well in a browser that does not yet support the range input type.

GlassFish + Ant = Bug?

Note: This is a blog post detailing a perceived bug with the GlassFish application server. It has nothing to do with games programming or the Raspberry Pi, unlike the rest of this blog.

Recently at work I’ve been working on a PDF to Android App converter (link there for those interested). The application itself uses one of our already created and well maintained products and is not the focus of this blog post. What is the focus is the issue I ran into when attempting to add the option to our online PDF conversion service that uses Oracle’s GlassFish 3 as the application server behind it.

Our service itself is pretty simple: it lets a user upload their PDF file, the conversion is run on our GlassFish server, and the results are served back to the user (be they HTML, SVG or some other format). Adding the Android converter to this mix should have been as simple as adding any other mode, and it was, for the most part.

The problem I ran into was when I attempted to get the server to also build the converted file as an Android application, something relatively simple to do as it uses Apache Ant to build the apk file. What happens is I encounter an error to do with the classloader Ant is using, which looks like this:

C:\androidsdk\adt-bundle-windows-x86_64\sdk\tools\ant\build.xml:109: taskdef A class needed by class com.android.ant.GetTypeTask cannot be found: javax/xml/xpath/XPathExpressionException
using the classloader AntClassLoader

That’s the relevant part; I shortened it for clarity as there was also the call stack. After struggling to find a solution to this error for about a week, I turned to Stack Overflow in the hope of finding someone who knows more than I do about classloaders and the problem I am facing. My post can be found here.

I persisted further in my efforts to find a solution (balancing attempts to solve the issue against my other workload), and found that the same code worked fine when run using Apache Tomcat. Case closed, some of you might say: switch to Tomcat. Except we have used Tomcat before at work and had a few problems with it.

Now I am stuck for ideas, so I thought I’d create an example project to see if other people, including the GlassFish developers, would weigh in on what the issue could be and whether there’s a coding solution I am not seeing (hopefully there is), or if there is in actuality an issue with GlassFish.

The example can be downloaded here. It contains a NetBeans project with the appropriate code and a basic Android project with a valid build file.

The example NetBeans project contains some instructions in its JSP file on how to set it up, but I will reiterate them here:

  • Make sure to include the Apache Ant libraries, ant-launcher.jar and ant.jar, that come with the current version of Ant.
  • You also need to include the tools.jar from your JDK lib directory.
  • And you need to have the Android SDK installed, with the environment variable ANDROID_HOME set along with ANT_HOME and JAVA_HOME (see the example after this list).
  • Make sure the BuildingServlet points to the example Android Application.
    /**
     * Change this to the directory of the Android application you want to build.
     * In the project I have been working on this is a file uploaded by the user
     */
    private static final String pathToAndroidFolder = "C:\\Users\\Lyndon\\Desktop\\antBroken\\BlankAndroid";
  • You might also need to change the sdk.dir in the local.properties file in the Blank Android project.
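On Windows you can set those environment variables once from a command prompt like so (the paths are illustrative; point them at your own installs):

setx ANDROID_HOME C:\androidsdk\adt-bundle-windows-x86_64\sdk
setx ANT_HOME C:\apache-ant-1.9.0
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0"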

After making sure that it’s set up right you can run the JSP and will get a screen like this:

testingJSP

Upon pressing submit, the application server you’re using will attempt to build the Android project using Ant. For those interested, it uses this simple piece of Java code:

    // Requires java.io.File plus the Ant classes org.apache.tools.ant.Project,
    // org.apache.tools.ant.ProjectHelper and org.apache.tools.ant.DefaultLogger
    private void buildAndroidApp() {

        File dir = new File(pathToAndroidFolder);
        System.out.println("Current Android dir: " + dir.getAbsolutePath());
        File buildFile = new File(dir, "build.xml");

        Project project = new Project();
        project.setUserProperty("ant.file", buildFile.getAbsolutePath());
        project.setBaseDir(dir);

        DefaultLogger consoleLogger = new DefaultLogger();
        consoleLogger.setErrorPrintStream(System.err);
        consoleLogger.setOutputPrintStream(System.out);
        consoleLogger.setMessageOutputLevel(Project.MSG_INFO);
        project.addBuildListener(consoleLogger);
        project.init();

        ProjectHelper helper = ProjectHelper.getProjectHelper();
        project.addReference("ant.projectHelper", helper);
        helper.parse(project, buildFile);

        project.executeTarget("debug");
    }

If you were to run this outside of an application server, as part of a normal Java program, you would find no problems.

Upon running the JSP you will notice some errors in your GlassFish tab of NetBeans, or the build succeeding if you’re using Apache Tomcat.

You will then be presented with the Servlet’s response that will list any errors that occurred.

Ideally you would see this for both Tomcat and GlassFish:

tomcatSuccess

But sadly you will see this when run using GlassFish:

glassfishbug

If this article has been unclear I can potentially record a short video describing the problem. Hopefully it’s been informative for anyone wanting to help or experiencing the same issue; I know at least one other person has had it, as they contacted me asking if I ever found a solution.

If you know the answer to this problem feel free to comment here and/or on the Stack Overflow question I asked; I’d very much appreciate it.

Making my Raspberry Pi Tell Me the News

In my previous Raspberry Pi post I quickly told you how to use your Pi to watch movies in a pinch; today I am going to walk through how I got my Raspberry Pi to speak out the current trending news headlines from reddit.com/r/worldnews.

To begin with, I built a small Java application that made use of the FreeTTS text-to-speech library and a Java wrapper for the Reddit API called jReddit. This worked well, but the implementation seemed a bit too heavy to put on the Pi: I had to make sure to have all the library files present, make sure Java behaved itself, etc.

For those interested you can see my Java Source file here. Please note it’s rather messy code and makes use of another time library.

So I opted to recode it in Python, since that comes with the Pi (I am using a Linux-based OS). In order to do this I needed to install a few Python modules, specifically pyttsx (the text-to-speech module) and praw (the Reddit API module). Installing these was relatively easy using pip, the go-to tool for installing Python modules.

Just to make sure you have Python installed, and to check which version you are running, you can run the following command on your command line:

python --version

You should then see something along the lines of the following on your console output:

Python 2.7.3rc2

That is my version of Python on my Raspberry Pi; yours will likely be similar! This isn’t a tutorial on Python programming itself (I know very little about it myself), but it’s worth mentioning there are some big differences between Python 3.x and Python 2.x, so programs written in one do not always work in the other.

Now, in order to install modules easily I needed to install a module called pip. For this I found somebody had written a simple Python script to do it for you. The instructions I found recommended running the following on the command line:

curl https://bitbucket.org/pdubroy/pip/raw/tip/getpip.py | python

This should pipe the contents of that URL (it’s a pip installer someone made) into Python to execute. Of course, I ran into a problem doing this since I forgot to run it as the root user (by placing sudo in front of the command), so I resorted to downloading the script and installing it with the following commands:

wget https://bitbucket.org/pdubroy/pip/raw/tip/getpip.py
sudo python getpip.py

As far as I know this will do the same thing: it installs pip so you can easily install other packages/modules. And when I say easily, I mean easily. To install praw and pyttsx all I needed to do was the following:

sudo pip install praw
sudo pip install pyttsx

As easy as installing a Linux program using apt-get!

So after installing them I began my task of first learning Python. Having never coded in it before, I found it surprisingly easy to get started in, even with its strict whitespace rules. As a language it feels like it takes some of the best bits of C/C++ and JavaScript and rolls them together. I’ve yet to program anything object-oriented in it, but from the looks of it that’s easy to pick up too.

To start with I wanted to test both modules separately. So after starting up an instance of the python interpreter (by running the command python) I typed the following:

import pyttsx
engine = pyttsx.init()
engine.say("Hello World!")
engine.runAndWait()

And got a bunch of errors to do with the ALSA library (that’s the Linux sound system, I believe)! I then tried again and it worked through my HDMI lead. It also worked headless from an SSH connection, with speakers plugged into the correct socket.

So now I had my Pi talking, I needed to get it something interesting to say. I then wrote a simple Python program to get the top ten news articles from Reddit, based on the examples in praw’s documentation:

import praw
r = praw.Reddit(user_agent="Lyndon's news reader  by /u/LyndonArmitage")

subs = r.get_subreddit("worldnews").get_hot(limit=10)
for sub in subs:
    print sub.title

This printed out the titles of the headlines to my console. Success!

Now I knew both of them were working, I set out to translate my Java code to Python, dropping the object-oriented aspects of it to simplify the problem. My code looked something like this:

import praw
import pyttsx
__author__ = "Lyndon Armitage"

engine = pyttsx.init()
r = praw.Reddit(user_agent="Lyndon's news reader  by /u/LyndonArmitage")

def get_headlines(limit=10):
    subs = r.get_subreddit("worldnews").get_hot(limit=limit)
    headlines = []
    for sub in subs:
        headlines.append(sub.title)
    return headlines

def speak_headlines(headlines=[]):
    for s in headlines:
        print s
        engine.say(s)
        engine.runAndWait()

titles = get_headlines()
speak_headlines(titles)

Assuming the modules are installed correctly, it should also work for you! You might notice it’s pretty similar to the two simple tests above; that shows how simple it was to make!

What I have done here is create two functions (using the keyword def): one returns a list of headlines from Reddit using praw, and the other speaks them aloud using pyttsx. Very simple stuff.

And that’s it! I did create a modified version that will loop indefinitely (a bit like my Java version), only speaking at a set interval, but you can do similar things using cron on the Pi, which is my next step in playing with my Raspberry Pi!
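For example, a crontab entry along these lines would read out the headlines at the top of every hour (the script path is illustrative):

# added via: crontab -e
0 * * * * python /home/pi/news_reader.py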

What have you made with your Raspberry Pi? I’d love to get inspired by your ideas so please leave a comment below!

Blog Update & University Options

Blog Update

I haven’t posted in a while, annoyingly; sorry about that, I’ve been busy with other responsibilities and it seems there aren’t as many hours in the day as I’d like there to be!

You may have noticed that I changed my blog’s theme over to something simpler and brighter. I wasn’t too happy with the previous theme; although I did like the nice dark contrast, it didn’t seem professional enough, so I’ve opted for a slightly simpler theme. Hopefully it also makes reading the code I post a lot easier. At some point I hope to make my own theme, or at least change one to suit my needs a bit better.

University

I’ve now chosen my final year options for my Computer Games Programming course at De Montfort University. For those who don’t know (or keep forgetting), I started my course over 2 years ago and am now over half way through a placement year. I had quite a few modules to pick from, although not nearly as many as I’d have liked. They were:

  • Secure Web Application Development
  • Multi-Service Networks
  • Advanced Graphics
  • Mobile Robotics
  • Fuzzy Logic & Knowledge Based Systems (AI)
  • Systems Building: Methods and Management
  • Mobile Games
  • Audio Post-Production

I had to pick an amount that added up to 60 credits as I already have two compulsory modules worth 30 credits each (they all need to total 120).

  • My first choice was Mobile Games, because I have done some work on it before and enjoyed it. This is worth 30 credits.
  • My next choice was the Fuzzy Logic module, as I enjoy AI a lot and think it sounds interesting. This one is worth 15 credits.
  • I then only had 15 credits left to play with, which meant I could only pick between Secure Web Application Development and Mobile Robotics. While I am interested in web applications, the module description and presentation that we were given on it didn’t excite me, whereas the description and presentation for Mobile Robotics interested me, as it also involves a lot of Artificial Intelligence concepts. So I chose Mobile Robotics.

So now I have my options for next year I can sit back and relax right? Wrong! I’ve got to come up with a final year project before I start!

I have some ideas for this!

As you know if you’ve read some of my blog posts (including this one), I enjoy programming AI to do things; in fact, some of my first blog posts were on an AI I made at university as part of a group project. So one of my ideas for a final year project is to attempt to create a framework for AI in a village in a role-playing game, like the ones in the Elder Scrolls series: a system with NPCs who actively do things, perform daily tasks and follow routines, working with and against one another. Hopefully I’d be able to make it a bit more complex than that though, and include things like a simple economy and NPCs trading with one another.

If you have any thoughts on it or an idea that I might like feel free to leave a comment.

Other Stuff

Apart from all that I have still had time to create a few experiments: one being the start of a Fallout 3 hacking simulator, and the other a program that speaks out the top news stories from Reddit using Python on my Raspberry Pi (blog post coming soon). I’m also still working on the second part of my Boids tutorial.

Playing Video Files on the Raspberry Pi from the command line

For the past 2-3 hours I have been watching movies on the television in my living room, from a memory stick, through my Raspberry Pi, all without bothering with a GUI and only using the command line interface to do it. And here’s how!

First I made sure that there was a media player installed that could be run from the command line. Luckily for me I had one already installed, called omxplayer, and you probably do too! But in case you don’t, here is the command to install it:

sudo apt-get install omxplayer

Of course you will need to be connected to the internet to use this, but that’s the only time you will need to be connected for this tutorial, so after installing omxplayer you can take your Pi wherever you like without worrying about an internet connection. This is handy because my living room hasn’t got one yet, and I have had trouble using my wireless dongle in the past.

To make sure it’s installed correctly, try running it:

omxplayer

You should get output that looks something like this:

Usage: omxplayer [OPTIONS] [FILE]
Options :
         -h / --help                    print this help
         -n / --aidx  index             audio stream index    : e.g. 1
         -o / --adev  device            audio out device      : e.g. hdmi/local
         -i / --info                    dump stream format and exit
         -s / --stats                   pts and buffer stats
         -p / --passthrough             audio passthrough
         -d / --deinterlace             deinterlacing
         -w / --hw                      hw audio decoding
         -3 / --3d                      switch tv into 3d mode
         -y / --hdmiclocksync           adjust display refresh rate to match video
         -t / --sid index               show subtitle with index
         -r / --refresh                 adjust framerate/resolution to video
              --boost-on-downmix        boost volume when downmixing
              --subtitles path          external subtitles in UTF-8 srt format
              --font path               subtitle font
                                        (default: /usr/share/fonts/truetype/freefont/FreeSans.ttf)
              --font-size size          font size as thousandths of screen height
                                        (default: 55)
              --align left/center       subtitle alignment (default: left)
              --lines n                 number of lines to accommodate in the subtitle buffer
                                        (default: 3)

Now you have omxplayer installed on your Raspberry Pi, you can simply play media files on it and have the audio come out of the HDMI lead using the command:

omxplayer -o hdmi [Your-File-here]

But if, like me, you want to be able to play media off a memory stick plugged into your Raspberry Pi, you will also need to mount it once it’s been plugged in! I found a good guide on how to do this here.

Basically, the gist of it is that you need to create a directory in your /mnt/ folder for the device you want to mount. I went with the folder name usb:

sudo mkdir /mnt/usb

Then all you need to do is mount the drive to that directory using the command:

sudo mount /dev/sda1 /mnt/usb

NOTE: Your device might not be sda1! You will need to find out what your device is called! I found a good tutorial about this here. Again, the gist of it is as follows:

Run the command:

tail -f /var/log/messages

Then simply plug in your memory stick and you should see a few messages appear telling you what your device is called.

Now you can run your mount command using the right parameters! After doing this you should change directory to where you mounted the memory stick (in my case /mnt/usb) and run omxplayer on the media using the aforementioned command.
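For example (the file name here is made up; substitute one of your own):

cd /mnt/usb
omxplayer -o hdmi BigBuckBunny.mp4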

Something to note about controlling playback: from what I read on the omxplayer website, there doesn’t appear to be a way of specifying a time stamp to start watching from. The only controls are:

z           Show Info
1           Increase Speed
2           Decrease Speed
j           Previous Audio stream
k           Next Audio stream
i           Previous Chapter
o           Next Chapter
n           Previous Subtitle stream
m           Next Subtitle stream
s           Toggle subtitles
d           Subtitle delay -250 ms
f           Subtitle delay +250 ms
q           Exit OMXPlayer
Space or p  Pause/Resume
-           Decrease Volume
+           Increase Volume
Left Arrow  Seek -30
Right Arrow Seek +30
Down Arrow  Seek -600
Up Arrow    Seek +600

But those are adequate for the most part! Hopefully this will come in handy for anyone not wanting to install XBMC!

Boids, Flocking Behaviour Tutorial Part 1: The Engine

Flicking through my Java games book some more, I found a chapter on flocking, a subject a university tutor of mine only managed to mention in passing on our AI module due to time constraints. As I find AI quite interesting, I thought I’d look into the subject some more, so I decided to write my own 2D demo application of flocking behaviour, akin to the original proposed by Craig Reynolds in his paper and on his website.

The Java book itself contained its own version of a tutorial using AWT, but I decided to stick with a newer library that I have worked with before called Slick2D (see my tutorial on Game States as an example of my previous use of it). I also decided to stick with 2D, instead of the 3D the book demonstrates, to keep things simple.

This first post is going to be about setting up the basic game engine I will be using as the underlying driving force of the game/demo.

Step 1: Set up Slick

Slick needs a little bit of setting up before you can begin making and testing your creations with it. This is relatively simple: add the jar files as libraries and make sure to include the LWJGL native files on your library path.

See the Slick2D wiki for details if you’re unsure how to do this.

If you are working in IntelliJ then you need to make sure to add the natives to your java.library.path in your run configurations like so:

RunConfig

For those too lazy to type that’s: -Djava.library.path=path/to/natives

Step 2: Extend the BasicGame and Start Coding

The next step to take is to create a new class that extends Slick’s BasicGame class.

public class BoidsGame extends BasicGame {
	public BoidsGame(String title) {
		super(title);
	}

	@Override
	public void init(GameContainer gc) throws SlickException {
		// initialization code goes here; run once at start up (see Step 5)
	}

	@Override
	public void update(GameContainer gc, int delta) throws SlickException {
		// game logic goes here; run every frame
	}

	@Override
	public void render(GameContainer gc, Graphics g) throws SlickException {
		// drawing code goes here; run every frame
	}
}

The BasicGame class basically gets rid of a lot of the coding you need to do to create a game; in the words of its JavaDoc, it is “A basic implementation of a game to take out the boring bits”.

As you can see, it needs to have a few methods overridden: init() is run once when we start the game and should contain all the initialization, while update() and render() are both run each game frame. update() is where you should place game logic, and render() is reserved for drawing onto the screen. You also need to implement a constructor that takes the title of your game as a parameter.

Next we need a main method that will launch the game. In Slick we do this using an AppGameContainer object. I’ve also added a few constants and fields to my class that are used here:

	private static final int WIDTH = 800;
	private static final int HEIGHT = 600;
	private static int targetFrameRate = 30;

	private AppGameContainer container = null;

	public static void main(String args[]) {
		BoidsGame game = new BoidsGame("Boids - By Lyndon Armitage");
		AppGameContainer app;
		try {
			app = new AppGameContainer(game, WIDTH, HEIGHT, false);
			app.setShowFPS(false);
			app.setTargetFrameRate(targetFrameRate);
			app.setPaused(true);
			game.container = app;
			app.start();
		} catch (SlickException e) {
			e.printStackTrace();
		}
	}

When run, this will show an empty window that is 800 by 600 pixels in size.

Step 3: A Boid Skeleton

Next on the agenda is to make a start on the individual Boid code.

First, create a new class named Boid. Being a visual and non-static creature, we will need a way of showing and updating each Boid, so we will create two methods that will be called by those in the previous class, update() and render():

	public void render(GameContainer gc, Graphics g) {

	}

	public void update(GameContainer gc, int delta) {

	}

Now we need to decide what the Boid will look like on screen and what properties it needs that will be changing.

  • A Boid needs:
    • A position in space. I chose to make this 2D, so I opted for a class built into Slick called Vector2f
    • A velocity. Again this is in 2D so I used Vector2f
    • A colour. Slick has a class for this called Color
    • An angle of where it is looking.
    • Height and width.
    • A shape. In keeping with the examples given on Boids I made them triangular
    • A field of view. To keep things simple I am using a whole circle around the Boid. A better way of doing this would be to chop out a section of the circle behind the boid so it cannot see behind it.


So what do these all look like?

	private float angle = 0f;
	private Vector2f pos;
	private Vector2f vel;
	private Color color;

	private static final float width = 8;
	private static final float height = 10;
	private static final int lineWidth = 1; // how many pixels thick the lines are

	private static final float viewDistance = 50f;

I’ve used static final values for things that I will not be changing during the simulation/game. Onto the code using these!

Firstly, I implemented the basic code to update the position of the Boid based only on the velocity; this is quite simple:

		pos.x += vel.x / delta;
		pos.y += vel.y / delta;
		// wrap the play area round from left to right and top to bottom
		if (pos.x > gc.getWidth()) {
			pos.x -= gc.getWidth();
		}
		if (pos.y > gc.getHeight()) {
			pos.y -= gc.getHeight();
		}
		if (pos.x < 0) {
			pos.x += gc.getWidth();
		}
		if (pos.y < 0) {
			pos.y += gc.getHeight();
		}

Delta, for those who do not know, is the time between frames in milliseconds, and is used to ensure that everything is synced up and independent of the frame rate. In this example I also made the Boids’ play area loop round from top to bottom and left to right.

For this to work we need to initialize some of the objects we declared as variables. For this I created an init() method that I call from various constructors:

	public Boid() {
		init(0f, 0f, Color.white, 0f, 0f);
	}

	public Boid(float x, float y) {
		init(x, y, Color.white, 0f, 0f);
	}

	public Boid(float x, float y, Color color) {
		init(x, y, color, 0f, 0f);
	}

	public Boid(float x, float y, Color color, float velX, float velY) {
		init(x, y, color, velX, velY);
	}

	public Boid(float x, float y, float velX, float velY) {
		init(x, y, Color.white, velX, velY);
	}

	private void init(float x, float y, Color color, float velX, float velY) {
		pos = new Vector2f(x, y);
		vel = new Vector2f(velX, velY);
		this.color = color;
	}

These should cover all the ways I could want to create a Boid. We also need to add some getters for later use, as I made all of the variables we defined private.

	public Color getColor() {
		return color;
	}

	public Vector2f getPos() {
		return pos;
	}

	public Vector2f getVel() {
		return vel;
	}

But wait: we can now update each Boid’s position, but we still can’t see them! So we need to make render() do something.

Below is the code I used to render the Boids. Note that I made use of the graphics context given to us by Slick; a better way of drawing a triangle would have been to use LWJGL’s OpenGL methods directly, but I opted to use Slick’s methods to show them off:

	public void render(GameContainer gc, Graphics g) {
		//g.drawString("boid", pos.x, pos.y);
		g.rotate(pos.x, pos.y, angle);
		g.setLineWidth(lineWidth);
		g.setColor(color);
		g.drawLine(pos.x - (width / 2), pos.y - (height / 2), pos.x + (width / 2), pos.y - (height / 2)); // bottom line
		g.drawLine(pos.x + (width / 2), pos.y - (height / 2), pos.x, pos.y + (height / 2)); // right to top
		g.drawLine(pos.x, pos.y + (height / 2), pos.x - (width / 2), pos.y - (height / 2)); // top to left
		g.resetTransform();
	}

Hopefully these method calls are pretty self-explanatory: I first rotate the context, then I set the line width and colour, followed by drawing the lines, and finally I reset the rotation. The lines are each drawn separately: first the bottom line of the triangle, next the right hand side line and last the left hand side line.

Step 4: Some Intelligence for the Boid

Now we have our basic Boid skeleton code; we can see the Boids and they can move, but they have no intelligence. So now we need to give them a brain, just like the Scarecrow in Oz wanted!

The first thing a Boid needs to be able to do is know the angle between itself and other Boids. To do this we can use a mathematical function called the arctangent which, when supplied with two values derived from the difference between two points, will return the angle between them. In my code I implemented this function like this:

	private float getAngleToBoid(Boid target) {
		float deltaX = target.getPos().x - pos.x;
		float deltaY = target.getPos().y - pos.y;
		float angle = (float) (Math.atan2(deltaY, deltaX) * 180 / Math.PI);
		angle -= 90; // seems to be off by 90 degrees probably due to how the graphics are set up
		if (angle > 360f) {
			angle = 360f - angle;
		} else if (angle < 0f) {
			angle = 360f + angle;
		}
		return angle;
	}

Like most programming languages and libraries, Math.atan2() returns an angle in radians, so we need to convert it to degrees for our use; that’s what the multiplication by 180 and division by pi are for.

Next, the Boid also needs to be able to discern the distance between itself and other Boids. This is a very simple method to implement, since it uses the well known formula for measuring distance on a grid:

	private float getDistanceToBoid(Boid target) {
		Vector2f v = target.getPos();
		return (float) Math.sqrt(Math.pow(v.x - pos.x, 2) + Math.pow(v.y - pos.y, 2));
	}

And finally, a Boid should be able to figure out whether another Boid is within range of it or not. The method I made to do this uses Pythagoras to check if the other Boid is within a circle around this one:

	private boolean isBoidInView(Boid target) {
		float dx = Math.abs(target.getPos().x - pos.x);
		float dy = Math.abs(target.getPos().y - pos.y);
		float radius = viewDistance / 2;
		if (dx > radius) {
			return false;
		}
		if (dy > radius) {
			return false;
		}
		if (dx + dy <= radius) {
			return true;
		}
		// Pythagoras here
		if (Math.pow(dx, 2) + Math.pow(dy, 2) <= Math.pow(radius, 2)) {
			return true;
		} else {
			return false;
		}
	}

It also does some cheap checks beforehand to rule out Boids that are obviously out of range before resorting to Pythagoras.

Step 5: Add some Boids and test!

Up until now we haven’t added any Boids to the actual engine to test. In fact, the engine doesn’t have any logic as of yet for dealing with the Boids. What we now need to do is write the code for the update() and render() methods!

This code is quite simple, since the logic for the Boids will be contained within the Boids themselves. All I have done is add an ArrayList of Boids to the main game class and some code that loops through each Boid within it, calling their respective methods:

	private ArrayList<Boid> boids = null;

	// debug flags referenced in render() below (the defaults here are arbitrary);
	// renderArc() and drawGrid() are simple debug helpers not shown in this post
	private boolean drawGrid = false;
	private boolean drawArc = true;
	private boolean debugOn = true;

	@Override
	public void init(GameContainer gc) throws SlickException {
		Random rnd = new Random();
		boids = new ArrayList<Boid>();
		// add the boids
		Boid boid1 = new Boid(WIDTH / 2, HEIGHT / 2, rnd.nextInt(100), rnd.nextInt(100));
		boids.add(boid1);
		Boid boid2 = new Boid(WIDTH / 4, HEIGHT / 2, rnd.nextInt(100), rnd.nextInt(100));
		boids.add(boid2);
		gc.setPaused(false);
	}

	@Override
	public void update(GameContainer gc, int delta) throws SlickException {
		//System.out.println(delta);
		for (Boid b : boids) {
			b.update(gc, delta, boids);
		}
	}

	@Override
	public void render(GameContainer gc, Graphics g) throws SlickException {

		if (drawGrid) {
			drawGrid(g);
		}

		for (Boid b : boids) {
			b.render(gc, g);
			if (drawArc) {
				b.renderArc(g);
			}
		}

		if (debugOn) {
			g.setColor(Color.green);
			g.drawString("0,0", 10f, 0f);
			g.drawString(WIDTH + "," + HEIGHT, WIDTH - 65, HEIGHT - 18);
		}
	}

Notice that I have changed the method signature of the Boid method update() to include a reference to the ArrayList; this way we can tell each Boid where all the others are and develop behaviour accordingly. For example, this code will make a Boid look at the closest other Boid within range:

	public void update(GameContainer gc, int delta, ArrayList<Boid> boids) {

		// some look at the closest boid in view code
		Boid target = null;
		float dist = 0;
		for (Boid b : boids) {
			if (b.getPos().x == pos.x && b.getPos().y == pos.y || !isBoidInView(b)) continue;
			if (target == null || getDistanceToBoid(b) < dist) {
				dist = getDistanceToBoid(b);
				target = b;
			}
		}
		if (target != null) {
			angle = getAngleToBoid(target);
		}

		updatePos(gc, delta); // This contains the original update() code
	}

You will also notice references to the debug features I added, including drawing the view arc around each Boid, drawing a grid, and drawing the minimum and maximum coordinates in view.

When run at this stage, we should get something similar to this when the two Boids come within range of each other:

The green circle represents their field of view.

The source code for this tutorial can be found on GitHub here: https://github.com/LyndonArmitage/Boids

The next part will hopefully deal with adding the various flocking behaviours: Separation, Alignment and Cohesion. For more information on Boids, please visit Craig Reynolds’ website: http://www.red3d.com/cwr/boids/

Edit:

A flaw in the way I had my Boids reacting to each other was exposed by Amndeep7 on Reddit; see the original comment here.

I had overlooked the fact that each time I updated a Boid, its new values were then used by the next Boid in the loop, which meant that the results would not be valid. The original comment has a much better explanation than this, so I encourage you to have a look at it. As for the code in this tutorial, I will address the issue in the concluding post; for now I leave it as an exercise for the reader to figure out how to solve this problem. Feel free to leave suggestions and solutions in the comments.