SUVAT question application

One of the problems I keep facing is that, although I can write software pretty well, I keep struggling to find things to make. The advice always seems to be to come up with something useful, so this is my first attempt at doing just that.

Since I spent a few days looking for new SUVAT questions when I was practicing them throughout college, I figured I’d put together the application I’d like to have had at the time. The application presents the user with one SUVAT question from a back-end list of problems. The user can attempt to answer it, look through the solution, and request another problem once they’re done.

Since I want to be able to add new problems to the application without making any laborious changes, I decided that it would be best to store the problems in a back-end XML file. Luckily Java has a pretty good XML parser, albeit one that still took me too long to get working because I misunderstood how its read methods advance the parser’s position.

Sticking everything in XML makes editing easier, but it does leave me with a tiny technical problem, one I don’t think is quite worth fixing at the moment. Strings stored as Android string resources can be swapped according to the language of the target device; I couldn’t do the same thing with the XML file contents without some manual checking.

Despite it only being in English and looking very basic, I like to think that someone might have a genuine use for this application. I think I’ll publish it, maybe once it looks better and I’ve exhausted every way of avoiding paying for a developer account on the various app stores.

Code Editor: My Final Year Project

Since the academic year is over and I’ve finally graduated, I think I can now write about my final year project without it jeopardizing any marking.

My project was to create a code editor with syntax highlighting and autocomplete. In addition to these features required by the brief I also built error-feedback, project spaces, compiler integration, and simple source-control mechanisms (commit, revert, and update). The editor targeted the C language, specifically the ANSI C standard.

Figure 1. The main layout of the editor

The above figure shows the main layout of the editor that the user interacts with. It’s a fairly simple layout devoting the most screen space to the coding section and making the editor functionality available in the toolbar to the left.

The choice of toolbar was largely based on another editor’s interface, one made by Robert Crocombe a couple of years earlier. The difference is that he made a very elegant toolbar which expands to give greater information about each tool; in my toolbar I relied on familiarity with common icons to show the functionality of each button (this is something I’ll have to change according to the user feedback I got).

Overall I managed to meet the majority of the project’s goals, but couldn’t get all of the source-control features done in time.

Syntax highlighting

The syntax highlighting feature was made to highlight each different lexeme in the C language. This included all 32 standard keywords, the operators, other lexemes such as numbers and pointers, and any user-defined types.

Thankfully, a fair few people have undertaken the code editor project before me, so a pretty clear method for drawing the text on the screen had already been established: inherit from the standard text box and override the rendering method to highlight the text.

Like the people who did the project before me I managed to make a few optimisations to the rendering code: before generating any of the formatted text objects to be displayed, first check whether the line to be rendered would actually be on-screen. This way the same number of lines is rendered regardless of the length of the code, and rendering times barely differ because the visibility check is a relatively inexpensive mathematical comparison.
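A minimal sketch of the kind of visibility check I mean (the names and parameters here are illustrative rather than the editor’s actual code):

    //Only lines that overlap the scrolled viewport are worth formatting and drawing
    private bool LineIsVisible(int lineIndex, double scrollOffset, double lineHeight, double viewportHeight)
    {
        double lineTop = lineIndex * lineHeight;
        double lineBottom = lineTop + lineHeight;
        return lineBottom >= scrollOffset && lineTop <= scrollOffset + viewportHeight;
    }

    private void RenderVisibleLines(string[] lines, double scrollOffset, double lineHeight, double viewportHeight)
    {
        for (int i = 0; i < lines.Length; i++)
        {
            if (!LineIsVisible(i, scrollOffset, lineHeight, viewportHeight))
                continue; //Skip the expensive formatting work for off-screen lines
            //...build and draw the formatted text object for lines[i]...
        }
    }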

Autocomplete

Figure 2. The autocomplete feature suggesting two valid variables

Figure 3. The autocomplete feature suggesting method signatures alongside suggestions

Figure 4. The autocomplete feature ensuring the method signature remains displayed until the closing bracket of the method call has been completed.

An autocomplete feature is one which suggests any of the different possible tokens to be entered based on the partial string the user has entered. For example, if the user had entered the text ‘d’ the autocomplete feature would suggest the words ‘do’, ‘double’, and any variables starting with the string ‘d’.

One of the aspects of my editor that I’m most proud of is its ability to account for variable scope. When I was looking at the other editors I didn’t want one that suggested variable names regardless of scope (Notepad++ has the type of autocomplete I wanted to avoid (not that Notepad++ is a bad editor)). In order to accomplish this scope awareness I’ve built some classes that form a type of symbol table. A MethodScope object is created for each method; it contains the name of the method and its start and end points in the text. Each MethodScope object also contains a list of variable objects, each comprising the variable’s name, its type, and the point in the text beyond which it is applicable. Using these objects it is possible to see which variables are applicable by comparing the text’s caret position to each MethodScope’s start and end positions and to the applicable point of each variable within that scope.
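As a rough illustration, here is a sketch of the sort of classes I mean; the member layout is my best guess at conveying the idea rather than the project’s exact code (requires System.Collections.Generic and System.Linq):

    public class VariableEntry
    {
        public string Name;        //The variable's identifier
        public string Type;        //e.g. "int" or "char*"
        public int ApplicableFrom; //Caret position after which the variable may be suggested
    }

    public class MethodScope
    {
        public string MethodName;
        public int StartPosition;  //Start of the method's body in the text
        public int EndPosition;    //End of the method's body in the text
        public List<VariableEntry> Variables = new List<VariableEntry>();

        //A variable can be suggested if the caret is inside this method and past the declaration
        public IEnumerable<VariableEntry> VariablesInScope(int caretPosition)
        {
            if (caretPosition < StartPosition || caretPosition > EndPosition)
                return Enumerable.Empty<VariableEntry>();
            return Variables.Where(v => v.ApplicableFrom <= caretPosition);
        }
    }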

The autocomplete feature also tries to allow the user a degree of error before completely removing suggestions. To do this I’ve used the edit distance algorithm to check that the entered text is within a set number of changes of a variable before deciding that it cannot be suggested. For example, if the code contains a variable ‘LocalVariable’ and the user enters ‘localVariabl’, the variable would still be suggested because 2 edits are enough to reach the actual suggestion (2 was the acceptable number I decided on, but this could easily be changed). I should mention that this was an idea suggested by people who had previously undertaken the code editor project.
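A minimal sketch of that edit distance (Levenshtein) check, not the project’s exact implementation:

    private static int EditDistance(string a, string b)
    {
        int[,] d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;

        for (int i = 1; i <= a.Length; i++)
        {
            for (int j = 1; j <= b.Length; j++)
            {
                int substitution = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(
                    d[i - 1, j] + 1,                 //Deletion
                    d[i, j - 1] + 1),                //Insertion
                    d[i - 1, j - 1] + substitution); //Substitution
            }
        }
        return d[a.Length, b.Length];
    }

    //A variable stays in the suggestion list while the typed text is within 2 edits of its name
    private static bool ShouldSuggest(string typedText, string variableName, int tolerance = 2)
    {
        return EditDistance(typedText, variableName) <= tolerance;
    }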

Error-feedback

My code editor contains syntactic error-feedback. This is accomplished via the Irony .NET parsing library, which I chose because it allows the BNF grammar to be coded into the program and doesn’t require any kind of additional runtime environment. Incidentally, the autocomplete and syntax highlighting features could also have been accomplished via this parsing library by checking the end nodes of the generated parse tree for their token types and then colouring and storing them. This approach was not used because the first syntactic error causes the parser to stop parsing and I didn’t have the time to implement any error-recovery mechanism.

The parsing technology stops on the first syntactic error, but an ideal error-feedback feature will highlight as many errors as possible. Using the MethodScope objects it is possible to extract the text of each method and parse it individually; in this way my editor can get the first syntactic error in each method and highlight multiple errors throughout the code.

Project spaces

This is the most basic of the features I’ve implemented. Each file in the project should be located in the same place in secondary memory. Once a program is saved through the editor, both the back-end file and the XML project file are updated. The XML project file contains a list of each file in the project, the location of any SVN repository, and any project-specific compiler arguments that might have been entered.

The XML format was chosen because this back-end file has to be human-readable in the event that edits have to be made outside of the editor. JSON was another viable format option that’s quicker to parse, but XML was chosen because the speed of execution isn’t vital to this feature and because the .NET library comes with in-built XML tools, whereas a JSON approach would require 3rd-party tools, which I wanted to limit the use of for clarity’s sake.

Below is an example of a typical project XML file which is loaded and saved in order to store the necessary information about the project:

    <Project name="..." repository="..." compilerArguments="..." />
    <File>Main.c</File>
    <File>Person.h</File>
    <File>Person.c</File>

To conform with good standards that Project tag really should encompass all of the file tags, but this worked well enough.

Compiler integration

Figure 5. The compiler and run-program sections of the settings page.

Compilation is a necessary part of C development. Since I wanted my editor to provide all of the tools needed for writing C programs, the user has to be able to compile and run the output program through the editor. Above is the compiler and run-program section of the settings page, which the user feedback indicates could use some work.

The editor allows the user to specify any compiler on the system and any number of compiler directives. Once specified, a compiler setup is saved to secondary memory and can be reused on multiple programs. Having multiple applicable compilers was a feature I adapted from Code::Blocks; it seemed useful but didn’t appear to be readily accessible in most of the larger editors I’d looked at.

Ensuring the compiler setup can be applied to multiple projects means I’ve had to set up a mechanism to evaluate the setup arguments and produce the output string to be passed to the compiler. For example, the GCC compiler can output an executable with a specified name using the -o argument. If the user specifies this in my editor (as shown in the figure below), the output string (-o followed by whatever the project name is) is produced by the EvaluateArgument method.

Figure 6. The standard GCC argument for outputting the executable stored in the C editor.

        public string EvaluateArgument(Project PassedProject)
        {
            string ReturningString = CompilerArgumentSpecifier;

            switch(ChosenArgumentType)
            {
                case ArgumentType.UseAllFiles:
                    for(int i = 0; i < PassedProject.ReturnProjectFiles().Count; i++)
                    {
                        if(PassedProject.ReturnProjectFiles()[i].EndsWith(".c"))
                        {
                            ReturningString += ("\"" + PassedProject.ProjectLocation + @"\" + PassedProject.ReturnProjectFiles()[i] + "\"");
                            ReturningString += " ";
                        }
                    }
                    break;
                case ArgumentType.ProjectName:
                    ReturningString += "\"" + PassedProject.Name + "\"";
                    break;
                case ArgumentType.ProjectLocation:
                    ReturningString += "\"" + PassedProject.ProjectLocation + "\"";
                    break;
                case ArgumentType.EntireProjectAddress:
                    ReturningString += ("\"" + PassedProject.ProjectLocation + @"\" + PassedProject.Name + "\"");
                    break;
                case ArgumentType.ManualEntry:
                    ReturningString += "\"" + OptionalArgumentType + "\"";
                    break;
                case ArgumentType.Ended:
                    //Should be used when the argument is a single variable
                    break;
            }
            return ReturningString;
        }

One of the things I should have done, and might do in the future, is have each compiler include all of the standard directives by default. When I implemented the feature I wanted the user to be able to input the necessary directives in the (admittedly unlikely) event that someone was using an unconventional compiler or a variation of the standard, but including all standard directives by default would save significant time.

Running the compiler and the output program uses very similar code. The .NET framework’s System.Diagnostics namespace contains the classes needed to launch an external process and specify any input arguments. This works quite simply for individual program arguments, but the compiler arguments required me to build a mechanism to produce the output arguments based on their type.
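A minimal sketch of launching a compiler through System.Diagnostics, assuming the path and argument string come from the saved compiler setup (this is illustrative rather than the editor’s exact code):

    private void RunCompiler(string CompilerPath, string EvaluatedArguments)
    {
        System.Diagnostics.ProcessStartInfo StartInfo = new System.Diagnostics.ProcessStartInfo();
        StartInfo.FileName = CompilerPath;        //e.g. the path to gcc.exe chosen on the settings page
        StartInfo.Arguments = EvaluatedArguments; //The string produced by EvaluateArgument
        StartInfo.UseShellExecute = false;
        StartInfo.RedirectStandardOutput = true;  //Capture the compiler's output so it can be displayed
        StartInfo.RedirectStandardError = true;

        using (System.Diagnostics.Process CompilerProcess = System.Diagnostics.Process.Start(StartInfo))
        {
            string Output = CompilerProcess.StandardOutput.ReadToEnd();
            string Errors = CompilerProcess.StandardError.ReadToEnd();
            CompilerProcess.WaitForExit();
            //...display Output and Errors to the user...
        }
    }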

Source-control mechanisms

I wanted the editor to provide commit, update, and revert-to options to the user. Unfortunately, due to time constraints and a lack of familiarity with the API (SharpSVN, which was the best tool I could find for the job), it wasn’t possible to implement the revert-to feature within the time given for the project.

The editor allows the user to both commit and update their work via the TortoiseSVN source-control software, but there are some caveats. Unfortunately, I wasn’t able to implement the functionality to commit new folders to the repository, so if the user wants to commit to and update from a repository it is first necessary to check out a folder from the repository and then save the project in the checked-out folder.

I should mention that someone quite generously put up working code for the commit and update functions of the API, which meant I could adapt the code to implement these features quicker. These can be located here (Accessed 28 October 2016).

Overall

My editor works very well considering the functionality I tried to implement, the timespan I had, and the new tools I had to work with. If I’m going to return to this in the future then there are a fair few simple updates to be implemented:

  • Update the source-control mechanism to allow the user to create folders on the repository side
  • Include the revert-to functionality in the source-control functionality
  • Change the compiler directive settings to include all standard directives by default

There’s quite a bit more that could be done, however these are relatively simple changes (hopefully simple in the case of the SVN work) that could be made in the immediate future.

This blog post is simply a quick look at the project; it would take a great deal longer to talk about all of the project. While I’ve just done a quick run-through of the features here it’s fair to say that I made some unorthodox design choices and had to come up with strange solutions for obscure problems, which might make pretty good posts later on.

Creating a better management system

Figure 1 (left). The server application which services each client

Figure 2 (right). A simple client to communicate via the implemented protocol

This week I’ve been working on a better management system. This was an interesting project because it incorporates many different approaches to make it accessible and secure.

The basis for the project is very simple. A user can connect to and log into the server via the client application (this could be changed to a web-server or mobile application later with a little careful tweaking). Once logged in the client can place a new order or look at the list of their previous orders.

Once connected all interactions take place over a persistent connection. In this way, a user’s individual Id can be associated with the connection (incidentally it would be possible to extend this further later on in order to implement some kind of privilege mechanism to distinguish customers and staff in this hypothetical company).

The actual protocol implemented is very basic: the first line sent denotes the command and subsequent lines supply the information for that command.
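As a rough sketch of what handling such a protocol can look like (the command names here are made up for the example rather than the ones my server actually uses):

    private void HandleNextCommand(System.IO.StreamReader ConnectionInputStream)
    {
        string Command = ConnectionInputStream.ReadLine(); //First line names the command

        switch (Command)
        {
            case "LOGIN":
                string Username = ConnectionInputStream.ReadLine(); //Subsequent lines carry the data
                string Password = ConnectionInputStream.ReadLine();
                //...check the credentials and associate the user's Id with this connection...
                break;
            case "PLACE_ORDER":
                string OrderDetails = ConnectionInputStream.ReadLine();
                //...store the order against the logged-in user's Id...
                break;
            case "LIST_ORDERS":
                //...write the user's previous orders back down the connection...
                break;
        }
    }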

Multi-threading

A server has to handle many connections concurrently (or it should at least if it’s going to be considered any good), so my application hands each connection off to a separate thread. In my application the TcpListener object listens on a background thread (which is necessary because the AcceptTcpClient method is a blocking call) and launches a new thread, passing the accepted TcpClient to it.

            while (true)
            {
                TcpClient Connection = ConnectionListener.AcceptTcpClient();
                Thread t = new Thread(() => { ServiceConnection(Connection); });
                t.Start();
                //Notion of using delegate to pass object to thread provided at the following source
                //https://stackoverflow.com/questions/3988514/how-to-pass-refrence-and-get-return-in-thread
                //Accessed 26/06/2017
            }

The idea of using a delegate method to pass the TcpClient to the thread was provided here by user sumit_programmer (accessed 26/07/2017).

Salting hashes

While creating this application I’ve attempted to follow best practice when storing passwords, meaning that users with the same passwords should not have the same hashes associated with them. This is generally accomplished by appending random data to the password to make sure the resulting hash becomes vastly different.

private string GenerateSalt()
{
     string GeneratedSalt = "";

     //Build the salt one random character at a time; rnd is the .NET Random object mentioned below
     //Note that Next(0, 255) returns values from 0 to 254, and the ASCII encoding decodes bytes above 127 as '?'
     for(int i = 0; i < SaltLength; i++)
     {
         GeneratedSalt += System.Text.Encoding.ASCII.GetString(new byte[] { (byte)rnd.Next(0, 255) });
     }

     return GeneratedSalt;
}

In my system this is done by generating a random string of fixed length and appending it to the entered password. I’ve accomplished this by simply using the .NET Random object and using the resulting characters as the salt. This has worked pretty well so far, but because the Random object only draws from a pseudo-random sequence it is entirely possible that the same characters will be selected for each part of the salt if they are generated quickly enough. Fortunately, this is unlikely to happen throughout the salt and very unlikely to produce the same salts for the same passwords. As an example, users 1 and 2 both have the same password (‘password’) but vastly different hashes to be matched.

Figure 3. The difference in hashes for the same password
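For reference, a minimal sketch of a more robust alternative using the framework’s cryptographic random number generator instead of System.Random; this isn’t the code used above, just the direction I’d take it in:

    private string GenerateSecureSalt(int saltLength)
    {
        byte[] saltBytes = new byte[saltLength];
        using (var generator = System.Security.Cryptography.RandomNumberGenerator.Create())
        {
            generator.GetBytes(saltBytes); //Fill the buffer with cryptographically strong random bytes
        }
        return Convert.ToBase64String(saltBytes); //Base64 keeps the salt printable for storage
    }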

Hash-based check-sums

Another of the security measures I’ve implemented is to ensure that the message received hasn’t been tampered with en-route. To do this a hash of the concatenated message contents is sent along with the message. Then a new hash is recreated upon receipt; if the newly created hash and received hash are equal then there has been no tampering, but if they differ the message has been altered.
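A minimal sketch of the check-sum idea, assuming SHA-256 as the hash; my actual implementation sends the raw hash characters preceded by their length (as shown in the code further below), whereas this sketch uses Base64 purely to keep the example printable:

    private static string ComputeChecksum(string concatenatedMessage)
    {
        using (var sha = System.Security.Cryptography.SHA256.Create())
        {
            byte[] hashBytes = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(concatenatedMessage));
            return Convert.ToBase64String(hashBytes);
        }
    }

    private static bool MessageIsUntampered(string concatenatedMessage, string receivedChecksum)
    {
        //Recreate the hash on receipt; any change to the message produces a different value
        return ComputeChecksum(concatenatedMessage) == receivedChecksum;
    }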

This leaves me with something of a weird question: if the message has been intercepted and changed, do I want to inform the client of it? If I do, wouldn’t this alert the attacker to the fact that they’ve been discovered? Because this is unlikely to occur in my system I’ve simply resolved to make sure that nothing happens in the event that the two hashes don’t match, but this is something I’ll have to look into later.

      int ChecksumLength = int.Parse(ConnectionInputStream.ReadLine());

      string CheckSum = "";

      for(int i = 0; i < ChecksumLength; i++)
      {
           CheckSum += (char)ConnectionInputStream.Read();
      }
      ConnectionInputStream.ReadLine();

An interesting problem that I didn’t think of accounting for before the start of the project was that the hashes would inevitably include special characters (see above), potentially including ones that would be treated as line endings. This meant that the length of the hash has to be sent first so the receiver knows how many individual characters to read in; this is the downside of working with something as low-level as sockets.

Conclusion

This hasn’t been a bad project. I’ve made a reasonably useful protocol that could be extended further, and I’ve implemented some fairly sensible security measures throughout it. It may well have been a better idea to implement it with some higher-level communication technology, but overall I’m reasonably happy with it.

Creating my simple XML setup file

At the end of the last blog post I said that it’d be a good idea to create some kind of config file in order to change the functions that the program could provide without having to keep recompiling the program. To fulfill this hastily decided upon requirement I’ve created an XML config file with each of the queries encapsulated within its own tag (as seen in the example snippet immediately below) and a method which creates a XAML button for each stored query. Clicking a button runs the appropriate query.

<SQLQuery name="…" query="…" result="…"/>

There are a few problems to overcome for this to work. Since the number of queries isn’t known until run-time, every button click must run the same handler method; the code that creates the buttons doesn’t iterate a set number of times. And since each SQL query returns a different mix of columns (and consequently types), this has to be accounted for when retrieving the rows.

Upon successfully loading in an XML query element, a ‘StoredDynamicQuery’ object is created to store the query, its name, and the format of the results. To make things simpler the content of the button is set to the query name; once the button is clicked its content is used to retrieve the necessary ‘StoredDynamicQuery’ object.
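The shape of that object is roughly the following (a sketch based on how it’s used in the code below; the real class may hold more):

    public class StoredDynamicQuery
    {
        public string Name;          //Shown as the button content and used to look the query up again
        public string QueryCommand;  //The SQL text from the query attribute
        public string Results;       //The result-format string described below
    }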

result="CustomerID:int,Name:string,Age:int,Address:string,PostCode:string"

To account for the differences in the result sets I’ve come up with a results format that could generously be described as JSON-like (as seen above). Each pair is a column name and type separated by a colon, and the results string can contain any number of these pairs, split by commas. The code to account for this can be seen below.

StoredDynamicQuery RetrievedQuery = RetrieveStoredQuery(QueryName);

using (SqlConnection connection = new SqlConnection(DataString))
{
     using (SqlCommand command = new SqlCommand(RetrievedQuery.QueryCommand, connection))
     {
          List<string> IndividualResults = RetrievedQuery.Results.Split(',').ToList();
          connection.Open();

          using (SqlDataReader reader = command.ExecuteReader())
          {
               if (reader.HasRows)
               {
                    while(reader.Read())
                    {
                         string ConcatenatedString = "";
                         for (int i = 0; i < IndividualResults.Count; i++)
                         {
                              string QueryPart = IndividualResults[i].Split(':')[0];
                              string ResultType = IndividualResults[i].Split(':')[1];

                              switch (ResultType)
                              {
                                   case "string":
                                       ConcatenatedString += QueryPart + ": " + reader.GetString(reader.GetOrdinal(QueryPart)).TrimEnd() + " ";
                                       break;
                                   case "int":
                                       ConcatenatedString += QueryPart + ": " + reader.GetInt32(reader.GetOrdinal(QueryPart)) + " ";
                                       break;
                               }
                          }
                          OutputBox.Items.Add(ConcatenatedString);
                     }
                }
           }
      }
}

In the above code the QueryPart and the ResultType represent the column name and type from each key-value pair of the results string. In this way I can basically add in new queries without the need for recompilation. For examples, see the two images below.

Figure 1. The initial functionality.

Figure 2. The expanded functionality based upon the updated XML file.

Figure 1 (on the left) is created with the first XML layout below. The layout in figure 2 (on the right) is an extension of the first layout. In my implementation it was possible to add an entirely new piece of functionality by just adding an additional XML line.

More work would have to be put in to account for queries that insert and update data (some way of knowing which visual element provides which bit of data for the query would have to be agreed on), but overall it provides what I wanted.

Using SQL connections in C#

Last time I used an SQL database with C# it all went through the LINQ plugin. It’s useful, but it always takes a good while to set up and requires me to remember a good chunk of code (like where to put the NotifyChanging and NotifyChanged parameters, what attributes are needed, etc). One of the things I discovered courtesy of this page (accessed 13/06/2017) is that the .NET framework provides some APIs to link to a back-end database.

Using these I’ve created a simple database management system in order to query and update a database through a WPF GUI application. The database I’ve created is comprised of 3 tables: a customer table, an order table, and a product table. The application allows the user to query the data in the database, update the data, and insert new records.

The actual code involves first creating a data connection, then a command, and finally executing the command. It’s the ability to manually stick in the SQL string you want to execute that makes it the most useful to me, but since each row being returned is comprised of different columns it means having to come up with some alternate mechanism to account for the results of each row. This means that the finished program is a lot longer than a LINQ alternative.
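The basic pattern looks something like the following sketch, here inserting a record; DataString stands for the connection string and the table and column names are illustrative rather than my actual schema (requires System.Data.SqlClient):

    using (SqlConnection connection = new SqlConnection(DataString))
    {
        connection.Open();
        using (SqlCommand command = new SqlCommand(
            "INSERT INTO Customer (Name, Age) VALUES (@Name, @Age)", connection))
        {
            command.Parameters.AddWithValue("@Name", "Test Customer"); //Parameters avoid building the SQL by string concatenation
            command.Parameters.AddWithValue("@Age", 30);
            int RowsAffected = command.ExecuteNonQuery(); //Returns the number of rows inserted
        }
    }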

I think I like this approach over the LINQ option; it means writing the SQL queries in by hand, but I think this is a fair trade-off for having a simpler setup. I suppose if I’d known about this simple API beforehand then I’d probably have used back-end databases in some of my older projects.

One possible change to be made to this program would be to store the SQL queries in some external file so I don’t have to recompile the thing every time I want alternative functionality. The idea I’ve come up with is to use an external XML file and then use some kind of regular-expression replacement mechanism (this has become something of a go-to for string replacement lately; worrying maybe, but it works) in order to programmatically alter the subject of the search. This would be necessary as most of the queries in the project involve either data insertion or mild variations of the same query.
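A rough sketch of that replacement idea, assuming queries are stored with placeholder tokens like {CustomerName}; this is speculative rather than anything I’ve actually built yet (requires System.Text.RegularExpressions and System.Collections.Generic):

    private static string FillQueryTemplate(string storedQuery, Dictionary<string, string> values)
    {
        //Swap each {Placeholder} token for its supplied value, leaving unknown tokens untouched
        return Regex.Replace(storedQuery, @"\{(\w+)\}", match =>
        {
            string key = match.Groups[1].Value;
            return values.ContainsKey(key) ? values[key] : match.Value;
        });
    }

    //Example: FillQueryTemplate("SELECT * FROM Customer WHERE Name = '{CustomerName}'",
    //                           new Dictionary<string, string> { { "CustomerName", "Smith" } });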

Creating a simple Chrome extension

Image 1. The current popup frame for my extension

With the help of some online tutorials and pretty good documentation I’ve managed to make a simple Google Chrome extension. Built more to experiment with the kind of functionality that’s possible than anything else, my extension does some pretty basic stuff, but it’s interesting to know how these things can be made to target multiple web pages.

My extension currently has 4 bits of functionality: get the page to display a simple alert, remove all of the images on a page, replace all of the images on a page, and display all of the hypertext links on the page in an alert.

Image 2. The original image search.

Image 3. The image search with sources removed through my extension.

Image 4. The images replaced with a separate PNG image through my extension (original image here)

Image 5. The links retrieved from each <a> tag on the page

As it turns out, you can make the popup with a simple HTML file. This was a pretty nice surprise as HTML and CSS are simple enough and I can make a fairly nice interface using them. The problem is that an extension doesn’t support inline JavaScript (so StackOverflow says), meaning that you have to add event handlers to each different button.

One interesting mechanism I’ve had to learn about is the message passing mechanism Chrome extensions have to implement for communication between content scripts (ones that are embedded into the page) and the scripts running in the extension. The messaging code involves sending an appropriate code to the recipient; it is up to the recipient to pick an appropriate action based upon this code. It reminds me of Android’s intent mechanism.

I’ve used messaging in two places in this extension: to trigger the functionality necessary from the extension, and to initially update the extension with the number of images and links on the page.

This hasn’t been a bad experiment, now I just have to think of some kind of missing functionality to base an extension around.

I learnt a great deal about how to get started from this helpful tutorial (Getting Started, accessed June 6th 2017), without which I wouldn’t have been able to create the manifest file, and based most of my messaging on the tutorials found on the official messaging page (Message Passing, accessed June 6th 2017).

3 Thing Game 2017 Entry

Every year the University of Hull’s Computer Science department hosts a competition where teams get assigned three random words and then have to create a game around those words. This year team Error 303 (a holdover name from our first year) created Mine of Aether, which Kristian has made available here.

Why Mine of Aether? The three words we got were: miner, ethereal, and caution. Since the point of the contest is to base the game around the words, the overall concept was as follows: you play as a miner navigating his way through tunnels in an attempt to find jewels (or treasure chests in this case), all the while avoiding the enemies present in the mines and using the temporary ghost mode to reach areas that would be otherwise inaccessible.

The entire thing was created in Unity, which, thankfully, the other members of the team have far more experience with, and, miraculously, has its own kind of source-control mechanism so you can work across machines.

Unlike our usual 3 Thing Game performance, where we stay up most of the night and get too tired to make much progress at about 8 or 9am, this year we managed to last the full 24 hours. It would be fair to say that the game still needs work and we didn’t get to put in everything we wanted, but it’s definitely worth a play-through.

Android’s Gyroscope

Learning to use the sensors in an Android device is something I should have done a while back. One of the things I’m liking about the Android system is that it seems to provide access to what I assume is pretty complex technology through simple sensor classes (even if Android Studio still bugs me to a great extent).

The application I’ve made uses the change in the gyroscope’s angle in order to draw onto the canvas. Basically, a tilt of the tablet changes the direction in which the controller on screen moves and consequently the direction in which the line is painted.

Building this involved a couple of interesting little tricks: inheriting from the Android View class, overriding the onDraw method, and learning to register and unregister sensor listeners. Actually figuring out how to stick a compiled application directly onto a device was a bit of a pain; that ‘tap the build number 7 times’ trick to enable developer mode is very strange.

The main goal of this was to learn how to use the built-in Android sensors, so the drawing implementation is not exactly efficient. In order to correctly draw the line, the view I’ve implemented stores a list of previous positions and draws a circle the same size as the character at each point; at the moment there’s no limit on the size of the list, so it will inevitably cause a crash if there are too many changes in the screen’s orientation.

One of the things I didn’t consider when making this program was that I’ve set the speed of the character based upon the change in rotations (with the addition of some threshold values to make sure it can continue one direction and doesn’t just travel forward and back), but because of differences in the screen size I can turn on one axis a lot quicker. That’s something to consider for the future, should I ever put this to use.

The gyroscope sensor code I’ve used can be found in the videos below (neither of which is my own content). The first targets the gyroscope sensor explicitly; the second targets an accelerometer, but I think it gives a better insight into the event values array that the sensor events provide. These are the X, Y, and Z rotations in the case of the gyroscope, and that took a while to wrap my head around.

Video 1 – Last accessed 21/05/2017

Video 2 – Last accessed 20/05/2017

As an afterthought, it wouldn’t be a bad idea to monitor the change in angle to calculate acceleration. This is more physics work than anything sensor-based, but it’d be a cool way to improve the app.

GTK#

It probably isn’t a great idea to rely on Visual Studio and WPF for everything. That’s what I decided at the beginning of the week as a thin excuse to take another crack at trying GTK#. I’d considered using this as part of my final year project, but ultimately decided that it was too unfamiliar and that I could get a lot more done a lot sooner by using something I knew.

Now I’ve decided to take another look at it because I want to write something for the Linux OS I have on my computer. I’ve taken a crack at some little console-based things before, but I knew that eventually I’d like to make something with a GUI (I should probably take a look at installing Java on there sooner or later). Apparently you can run anything you make on the Windows OS on a Linux or Mac installation if you run it through the Mono Project software; it’d be nice to have some familiar libraries available when I get back to it.

It’ll be interesting if it works. Still, it’s a new framework, so baby steps. I’ve been writing everything in a new editor (Xamarin Studio) against an unfamiliar framework, so it’s not the most technically impressive thing I’ve ever made; so far I’ve got a simple box to store a list of strings.

Seems simple enough, but GTK# (a wrapping of GTK, according to the website) doesn’t seem to have the same kind of simple layout as other frameworks. I spent a good while yesterday trying to figure out how to programmatically add data to and retrieve it from the ComboBox; you can insert text, but try to get the text back and you find a CellView object (as I say, early days).

In fact I think it would be fair to say that most of the time it took me to build this (far longer than I expected for something that I could knock together in any framework I’m familiar with) was spent trying to puzzle out which of the strangely named features accessed the data I wanted. I’m a little disheartened at my progress to tell the truth. Still, I do intend to get it running on my Ubuntu installation, that was the point after all.

This is hardly the most interesting of posts, but, like with my recurring project, my reasoning (if that’s what we’re calling it) is that I should try and document as much of what I do as I can; it’ll stop me repeating the same projects over again. From this I think the take-home is that new frameworks will bug you if you don’t take the time to learn them properly (or if all the documentation for them is written for another language from a couple of versions back).

Simple Route Traversal

So a while ago, during a university module, we all had to design data structures that could support bi-directional travel. It was a fairly interesting task, and I’m happy to say I did very well.

The problem with this was that the implementation wasn’t very fun to look at; the purpose of the task was to build efficient data structures in C++. I’ve decided to build a similar sort of thing, but instead of just re-uploading my old code (which feels a bit like cheating) I’ve built a new system complete with a GUI.

Figure 1. My program displaying a simple network

The system currently works as follows: add nodes, create links between pairs of nodes, and find a path between two of them.

Figure 2. A highlighted route in my program based upon input criteria

The data object

One of the reasons I decided to build this is that I’ve recently been writing C programs with data structures. Since these only had to be tiny things to demonstrate functionality, I kept finding myself writing a simple list implementation that can be searched by recursively checking each node and accessing its ‘Next’ node. This project finally puts this structure to some use.

Each ‘node’ in this project contains an identifier, X and Y coordinates, and a list of other nodes that connect to it. Algorithms in this project work by recursively visiting each node connected to the current one, starting from the root node.
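A minimal sketch of what such a node might look like; the member names are mine rather than necessarily the project’s (requires System.Collections.Generic):

    public class RouteNode
    {
        public string Identifier; //e.g. "Route", "1", "2"
        public double X;          //Position used when drawing the network
        public double Y;
        public List<RouteNode> ConnectedNodes = new List<RouteNode>();
    }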

Infinite loops

Creating this the first time was infuriating! Create a system in which you can only visit a place once and you’re fine, but add the ability to travel backwards and you have to start keeping track of where you’ve already been.

Figure 3. A simple recursive loop that would cause a stack overflow if previously visited nodes were not taken into account

Take the above example. You’d think that a simple square route wouldn’t be too much of a hassle, but trust simple logic to outfox the lazy programmer. You start at Route, which goes to 1, then to 2, 3 follows, and then, since you don’t remember that you’ve already been to Route, you visit it again. This continues until your program throws a scary exception, chastising you for not thinking ahead.

Which isn’t to say that this is a particularly difficult problem to solve once you put some thought into it, but it does mean you have to pass a list of already-visited nodes to whatever function does the traversal to make sure you don’t visit them again. In the past coursework where I did this I created route objects and node objects; the route objects would hold the node objects they connected. This might be a better way to do it: have each route object carry a flag denoting whether it’s been traversed yet. It would be more complicated (something I’d rather avoid when an overly-complex spec isn’t provided), but it would be more elegant, and it would stop me iterating through the entire list of past nodes every time a new node is checked.
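A minimal sketch of the visited-node check, using the RouteNode sketch from earlier; this is the general idea rather than my exact path-finding code:

    //Returns true if a route to the target was found; 'path' then holds the nodes along it
    private bool FindPath(RouteNode current, RouteNode target, List<RouteNode> visited, List<RouteNode> path)
    {
        if (visited.Contains(current))
            return false;              //Already been here; stop before the recursion loops forever
        visited.Add(current);
        path.Add(current);

        if (current == target)
            return true;

        foreach (RouteNode next in current.ConnectedNodes)
        {
            if (FindPath(next, target, visited, path))
                return true;
        }

        path.RemoveAt(path.Count - 1); //Dead end; this node isn't part of the route
        return false;
    }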

Drawing the network

WPF doesn’t have an HTML-like canvas, but if you inherit from a ‘UserControl’ you can override the OnRender method and access the basic drawing methods: DrawLine, DrawRectangle, etc. Apparently this doesn’t do particularly well when the visual element has to refresh quickly, so my hope of making this scroll-able might have to wait a while (I’m sure I’ll try it, though I’ll feel a tad less proud if I wind up with a sluggish mess).
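A minimal sketch of that OnRender approach; the node positions and sizes here are hard-coded purely for illustration:

    public class NetworkView : System.Windows.Controls.UserControl
    {
        protected override void OnRender(System.Windows.Media.DrawingContext drawingContext)
        {
            base.OnRender(drawingContext);

            var pen = new System.Windows.Media.Pen(System.Windows.Media.Brushes.Black, 2);
            var nodeA = new System.Windows.Point(50, 50);
            var nodeB = new System.Windows.Point(150, 100);

            //Draw the link first, then the two nodes on top of it
            drawingContext.DrawLine(pen, nodeA, nodeB);
            drawingContext.DrawEllipse(System.Windows.Media.Brushes.LightBlue, pen, nodeA, 10, 10);
            drawingContext.DrawEllipse(System.Windows.Media.Brushes.LightBlue, pen, nodeB, 10, 10);
        }
    }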

Saving and loading

It never occurred to me that this would be so time-consuming. As with all other software I want to be able to save what I’ve made so I don’t feel like the 2 hours I’ve spent looking at a screen could have been spent lazing around on the couch for all the effect it’d have on my productivity.

The problem with saving the state of this program was that you have to load in links between nodes that might not have been loaded yet; you can save out a node with its list of connections, but when you load it in you can’t recreate a connection until you’ve recreated the necessary nodes. My solution: create a simple structure to store the node and its connection list in string format and then evaluate each one once they’ve all been created. I know, hardly splitting the atom is it? Nonetheless, I’m pretty happy that I’ve got an extensible solution that could be applied regardless of the size or structure of the route put in.
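A rough sketch of that two-pass loading, again using the RouteNode sketch from earlier; the saved-line format shown in the comment is illustrative rather than my actual file format (requires System.Collections.Generic):

    private List<RouteNode> LoadNetwork(string[] savedLines)
    {
        var nodesById = new Dictionary<string, RouteNode>();
        var pendingConnections = new Dictionary<string, string[]>();

        //Pass 1: create every node so later connections always have something to point at
        foreach (string line in savedLines)
        {
            string[] parts = line.Split(';'); //e.g. "Route;10;20;1,3"
            var node = new RouteNode { Identifier = parts[0], X = double.Parse(parts[1]), Y = double.Parse(parts[2]) };
            nodesById[node.Identifier] = node;
            pendingConnections[node.Identifier] = parts.Length > 3 ? parts[3].Split(',') : new string[0];
        }

        //Pass 2: every node now exists, so the stored connection lists can be resolved
        foreach (var entry in pendingConnections)
        {
            foreach (string connectedId in entry.Value)
            {
                if (connectedId.Length > 0)
                    nodesById[entry.Key].ConnectedNodes.Add(nodesById[connectedId]);
            }
        }

        return new List<RouteNode>(nodesById.Values);
    }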

Improving the program

Right now the route that the program suggests when prompted is based upon the order in which the nodes were attached to each other. This was all I wanted for the time being, but it’s not the best way of finding a route. There are many different algorithms, and I want my program to offer at least some variety. What would make this really great would be the ability to calculate the shortest path between nodes, or the route with the fewest individual hops.