Sunday, 5 December 2010

Working with disk-resident files in .Net

I don't often find myself working with flat or delimited files. I either interact with .config files or with very, very simple text files. Recently, however, I have found myself dealing with some rather complex delimited files.

When I started working on these files, I thought I would need to develop some code that would do everything I wanted in one fell swoop. However, there are a number of viable alternatives out there that will help you out a great deal.

Over the last week, tab-delimited files were my main adversary, and after some pointless efforts on my part to create a robust and useful file-handling API, I came across FileHelpers. For now, I am just interested in FileHelpers' ability to parse delimited files, though the whole API is now embedded in the code I am writing. It struck me that there is a lot on offer from this open source utility that I couldn't ignore.
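FileHelpers itself is a C# library that maps each line of a delimited file onto a typed record class. As a rough, language-agnostic sketch of that record-mapping idea (written in Python purely for illustration, with a completely made-up record layout):

```python
import csv
from dataclasses import dataclass

# Hypothetical record layout, for illustration only. FileHelpers maps each
# line of a delimited file onto a typed record class in much the same way.
@dataclass
class Trade:
    trade_id: str
    symbol: str
    quantity: int
    price: float

def read_trades(path):
    """Parse a tab-delimited file into a list of Trade records."""
    trades = []
    with open(path, newline="") as handle:
        reader = csv.reader(handle, delimiter="\t")
        for row in reader:
            trade_id, symbol, quantity, price = row
            trades.append(Trade(trade_id, symbol, int(quantity), float(price)))
    return trades
```

The nice part of the record-class approach is that type conversion happens once, at the edge, and the rest of your code works with proper objects rather than arrays of strings.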

Take a look at the examples in the documentation; there is so much there to make use of that it seemed natural to include the API in my work. One of the very cool features I will be looking into during the new year is the ability to diff delimited files. I know there are a lot of tools and patterns out there I could use, but why should I go and make something of my own when this is now on my doorstep?
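To give a flavour of what diffing delimited files means in practice, here is a very rough sketch (again in Python, and not how FileHelpers actually implements it - FileHelpers works on typed records): key each row on one column, then compare the two files' keyed rows.

```python
import csv

def diff_delimited(path_a, path_b, key_index=0, delimiter="\t"):
    """Compare two delimited files keyed on one column.

    Returns (added, removed, changed) sets of keys: rows only in B,
    rows only in A, and rows present in both but with different values.
    """
    def load(path):
        with open(path, newline="") as handle:
            reader = csv.reader(handle, delimiter=delimiter)
            return {row[key_index]: row for row in reader}

    rows_a, rows_b = load(path_a), load(path_b)
    added = set(rows_b) - set(rows_a)
    removed = set(rows_a) - set(rows_b)
    changed = {k for k in set(rows_a) & set(rows_b) if rows_a[k] != rows_b[k]}
    return added, removed, changed
```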

I urge anyone working in .Net who has to deal with complex delimited files to take a look at this library; you will not be wasting your time.

Friday, 5 November 2010

Retro Gaming Consoles

Time for something a little light-hearted, I think. I have loved video game consoles since I first saw a SNES in Currys when I was about 13 years old. My parents were a little hard up at the time, so I wasn't able to get something as swanky as the SNES or the Mega Drive. Instead I was given a Master System - this was my first introduction to console gaming.

I loved my Master System; it was the original model with both a cartridge and a card slot for games. The built-in game was Snail Maze, which I actually played regularly. After a few years I was able to get my hands on a Mega Drive, which to me at the time represented the pinnacle of modern video gaming. I was so pleased that my parents decided to get me these consoles; it was a rare treat for me when growing up to have something this fun to play with.

As time went by, I became a steady Sega fanboy. I was given a Mega-CD as a Christmas present once and, not long after that, after working for it relentlessly all summer as a barrow boy at the local Butlins, I was able to get the 32X. Of course, this was also the time when things like the PlayStation and Sega Saturn were being touted for release. This didn't faze me one bit; I could proudly say to my peers at school that I already had a next-gen, 32-bit games console in my house right now. Of course, as the 32X bombed out fairly quickly, a lot of my compatriots heaped scorn on me. This didn't really upset me that much; there was no way I could get a PlayStation or a Saturn, they were way too expensive for me. But it didn't stop me from reading all about the new machines that were on the way. After a while, my brother was given a PlayStation, for a birthday present I think, along with a copy of Metal Gear Solid (the limited edition one with the dog tags etc). I had started college by this time, so on one of my off days I picked it up and gave it a play. This was the moment I fell in love with Metal Gear Solid, as well as the PlayStation.

As the years went by and I started to earn more money as a post-graduate, I was able to indulge the schoolboy attraction to the consoles I had read about in my early teens. Even though I have a strong home computer background, I have always had, and always will have, a love for gaming consoles. Right now I have a fairly good collection going, starting with the Sega Master System (the same one I was given - it still works today) and going up to the PlayStation 3 (the one that can play PS2 games). Recently, I picked up a Goldstar 3DO. This is a magnificent console; I recall reading about it being launched when I was young, but then for some reason people stopped writing about it. There were, for a brief time, a number of articles about the 3DO M2, but it never seemed to get released at all.

The console itself is pretty amazing if you take into account that it was released in 1993. Mine came packaged with Starfighter, which is a truly wonderful game to play. I was attracted to this game because it had started off its development as a game for the Acorn Archimedes (and we know just how much I love Acorn kit). Strangely, the console only has one controller port on the box; additional controllers are plugged into each other, daisy-chaining them. I am not sure just how many can be connected. Another design feature of the controllers was that you could plug 3.5mm headphones into them; the controller also has its own volume control. I don't think that is something that was seen again on a console until the Xbox 360, though I may be wrong.

As it stands just now, I have games consoles in pretty much every room in my house apart from my bedroom and bathroom. I will also continue to pick up nice examples of gaming consoles; I would like to get my hands on a Neo Geo and a WonderMega, but these ones are pretty pricey. Which is odd, as you can pick up a Hyper Neo Geo arcade motherboard quite cheaply.

Right now though, I am reliving part of my youth by playing Ravenskull on my Acorn Electron which is hooked up to my LCD TV :)

Friday, 24 September 2010

My Lovely A7000+

Well, it took a massive effort on my part to get my venerable A7000+ up and running again. The machine itself is about thirteen years old and didn't like booting its original OS. One of the nice things about the A7000+ and pretty much all of its relatives is that the operating system itself is stored on ROM; you don't really need a hard drive in it at all (but it is better if you do). This box was running RISC OS 3.71 the last time I tried to get it up and running, but after years of being stored in the garage it got a little temperamental. After some careful refurbishment, I ended up having to get a new OS installed.

There isn't that much development happening for RISC OS machines at the moment, but I managed to find some RISC OS 4 ROMs on eBay, which I fitted myself. After a morning's worth of trying to remember how the network setup works in RISC OS, I finally managed to get it onto the Internet. I had planned to do this post from the A7000+ itself, but the supplied web browser just doesn't like Blogger, which I think is a shame. I have seen better browsers for RISC OS, but unlike those on mainstream OSes they are not free.

There are lots of cool things you can do with a RISC OS machine on a network, but I don't think there are any jobs going in the field just now. I used to work in a school, which at the time had hundreds of these machines. Unfortunately, by that point (2001) there was very little call for the use of RISC OS itself; instead we implemented a Citrix server to provide a Windows session directly to the box. It was a sad thing for me to see really. My first ever computer was an Acorn Electron (more posts to come on that one I think) and I grew up using the Acorn Archimedes and the A3000.

Tuesday, 7 September 2010

Old Tech Is Great

I have been struggling for a while now with the lack of a decent smartphone. I have been using Nokia phones since 2006; prior to that I used Ericsson devices. Now I am stuck with a pointlessly tiny touch-screen phone that has no wifi or decent calendar.

So, how am I to cope when I see all these grinning people with shiny new Android devices? Not to mention the fanboys exclaiming madly about how the iPhone 4 is the second coming...

What do I do then? Well, I turn to a gadget I bought around ten years ago to carry my calendar and contacts with me: the Psion Series 5. I have blogged about old tech in the past - I have an almost working Acorn A7000+, an Acorn Electron, as well as a few old consoles. My A7000+ is completely functional, running RISC OS 3.7 at the moment, but I am having trouble getting it connected to the Internet. Anyway, that's a post for another day...

So, this Psion device. I don't think many people will remember them all that much; as far as I can recall they were never massive hits with consumers, despite getting very good reviews. When I first got mine, they were still relatively expensive second hand, so I got myself two damaged ones and took them apart to make one working version. Back in those days, though, I didn't really need an organiser like that - it wasn't as if I needed to keep a calendar of meetings, things to do and people's contacts. Nowadays I do. On my present phone I can't do this very easily, and the only other device I can use for a calendar is my iPod Touch - but Apple seem to have nobbled first-gen owners who don't want to upgrade iOS (and why should I? It is mainly updates for the next-gen models - not mine). So, I have gone retro geek chic and press-ganged my old Series 5 into action for the first time in (almost) a decade.

When I first started playing with them all those years ago, I found it very hard to get one talking to Windows XP Pro. I am using Windows 7 Ultimate now, so I was a little concerned that the PsiWin suite that comes with the device would not work. However, after a very brief install it all works fine. Next on the list was the serial connection; the Series 5 doesn't use USB, it has either an IR port or a mini RS232 jack for connectivity. Again, back in the days of Windows XP this was troublesome for me, but on Windows 7 it works like a dream. Now I can sync my 10-year-old Psion Series 5 with Outlook 2010 with no problems at all!

Another interesting thing about this device is its heritage. The Psion Series 5 runs an operating system called EPOC - the direct ancestor of Symbian. If you look at older smartphones like the Ericsson R380 World Phone, you will notice that the interface is almost identical to that of a Psion Series 5. The same can be said about the Nokia 9210i: the UI is almost identical to EPOC, right down to the soft buttons on the side of the screen (on a Psion, these were either touch sensitive or a soft menu button).

In fact, Ericsson made their own version of the Psion Series 5MX (a slightly updated model) to work seamlessly with Ericsson phones, called the Ericsson MC218. I haven't seen any of these devices at all on eBay; if I ever did I would snap one up in an instant, as I have an Ericsson R320 smartphone (I couldn't afford the R380 when it came out).

Anyway, the software that Psion created, EPOC, gave way to Symbian. There is one interesting thing about this OS: although it is very similar to RISC OS, the two are not directly related. I know that a combined RISC OS/EPOC palmtop device was planned - it was called the Osiris - but I don't remember it ever getting released.

The only downsides I can see with this device these days are the monochrome screen and the lack of a decent messaging suite straight out of the box. I think the Series 5MX and Series 5MX Pro addressed these problems though.

So, let's take a look at my Series 5 in all its glory:

The connector at the back is the mini RS232 jack; on the left is a power cable (I have never seen an official Psion power cable for these, so I am actually using a Nokia ACP-12X charger). The Series 5 is pretty good on batteries, but if you are using cheap rechargeable ones like me, it will gobble them up in seconds.

Now for the device opened:

The Series 5 has a very clever way of providing a decent keyboard for typing: when you open it, the keyboard slides out towards you, pushing the screen back to a decent angle. I think this is one of the most ingenious designs ever for a PDA/netbook. Under the screen and to its left you can see a silver strip; this is touch sensitive and will open up any of the built-in apps. You can also control the screen and so on from the left-hand controls. On the right, there are some touch-sensitive soft menu items, such as the control panel.

Now, underneath the device:

I included this shot to show you the last little surprise this device has. You can just about make out a small black panel there - this clever little thing actually hides the controls for the dictation mode the Series 5 offers. With the device closed, you can slide this back and record whatever voice memo you need. If you open the device up, the recording is presented to you for playback. I am not sure how many people actually use voice memos these days, but you can see that the Series 5 definitely had a lot of thought and effort ploughed into its design.

One final thing about the Series 5: it even comes with its own development environment installed as standard. The device offers OPL, allowing you to create your own apps and scripts for it. OPL is actually supported on the Sony Ericsson P800/P900 as well as the Nokia Series 80 Communicators (although I do not remember seeing the development editor available on those devices).

So that is it, the Psion Series 5. I think it is as much of an oddity these days as it was when it was first released. It is a fantastic example of British engineering and was definitely ahead of its time on its release, considering the popularity of netbooks now on the market, not to mention our smartphones...

I will be using mine for a little while yet, until I can track down the last Psion netbook to be released.

Wednesday, 11 August 2010

Lego is Great

Ever since I was a kid I have been interested in building things. I would use anything to hand, building blocks, plasticine and of course Lego!

I think that for most people who are the same age as me and were also Lego fanatics, there are always a few kits that stick in your mind. For me, it was the Mega Core Magnetizer, part of Lego's Space line-up.

At the time, there was no way I could afford it myself, and I didn't have similar pieces to make my own version. Funnily enough, it was a recession then, just like it is now. However, I am older now and we have eBay!

I got mine off eBay, and for something that's almost fourteen years old, it is still in pretty good nick. It really brought my childhood back to me; the only difference is that it took me four and a half hours to build it! I am certain that I would have been able to do it much, much faster when I was small.

Anyway, it was a fun diversion, I even documented the build to prove to others I am not just spouting off.

Now that I think about it a little more, I am pretty sure this kit came out closer to twenty years ago, not fourteen... poor memory must be a sign of old age...
The chassis:

You don't really get a good idea of how complex this thing actually is. It is essentially a rear-wheel-steering vehicle using split axles - a bit like the current Batmobile, but with added six-wheel drive.

There is also a clever crane mechanism that sits on top of the thing when it is built. It is one of the rare occasions where a mainstream Lego kit gets to use pieces from Lego Technic. When the build is complete, it is about 17 cm long and about 15 cm high; if you take the crane mechanism into account as well, it ends up as a fairly stout model:

The canopy is a little different to most other Lego Space kits from that era, as it flips up from the front. If memory serves me correctly, most kits from that time had canopies that swung upwards.

Either way, I am pleased as punch to finally get one of these kits, I think I waited long enough!!

Monday, 5 July 2010

Continuous Build and Test - Achieving automated deployment

A few days ago, I posted about how I developed a CI system for an ambitious financial software developer. We had gotten to the point where we had real experience with creating a CI environment, but we had also outgrown our build/integration software. Cruisecontrol.Net had performed admirably; it offered us a level of control that we had never seen before. Through Cruisecontrol.Net, we were able to ensure that all builds took place the same way, based on the requirements of the team working on the code. There was no-one manually running a build and forgetting steps halfway through the process. Development teams could see the status of the build they were all working on and received reports and notifications about their code. Management could look at high-level reports to see how any given project was going. By implementing unit testing during the build process, the testing carried out by analysts uncovered fewer and fewer issues and bugs. When it came to deployment, we were also able to run a suite of UI tests post-deployment, all based on the spec provided by the development team that owned the code.

However, there was still a significant amount of manual work required from the test and deployment team at this point. We had a complete hands-off build and test system: we could build code and test its logic in one process, and we could test post-build using UI test tools such as Watin and Selenium. By this point, we were also on the cusp of bringing Quick Test Pro test scripts into the process. There was really only one manual piece of work left to complete in order to have a completely hands-off build/test/deployment pipeline.

Now, we had a great number of NAnt scripts that could do pretty much anything we wanted. We were building and running multiple tests; the only thing we could not do at the time was automatically release on a successful build. We didn't want this to happen in production, but it would be acceptable for any other environment (including DR). We had a shiny new SCM in place that was fully supported by the build scripts we had created. Right about then, we were also churning out the same amount of code as the main development teams. We worked in C# as well as some other languages like Python and Ruby, whereas the development teams worked exclusively in VB.Net. The last hurdle for us to cross was automated deployment after a good build.

It wasn't just the code that the backroom teams were developing that was growing. The company had expanded its customer base, and we were now providing solutions to other global players in the finance sector. The test and deployment teams had already demonstrated that we could work with the development teams to create a build process exclusively for them. In reality, we just had a bunch of NAnt scripts and tasks that we used dynamically. The only thing we needed to do next was to appraise our CI software.

Cruisecontrol.Net is a fantastic tool, but if you work in a company that is providing different solutions to different clients, it can be a bit tricky. As the business at large was my team's customer, we quickly found that although we could put together a CI system for any given team, catering for each spec would mean implementing a Cruisecontrol.Net server per development team. We also found that automated deployments would be a bit of a nightmare to configure in Cruisecontrol.Net. Even though Cruisecontrol.Net could make use of .Net Remoting, the head of infrastructure pointed out (and rightly so) that achieving automated deployment this way would be a security disaster.

At the time, Cruisecontrol.Net was a complete CI tool in a box; implementing automated deployment with it would have meant deploying a Cruisecontrol.Net server to each environment and its selection of application servers. The obvious problem here is that if we had done that, a malicious user could trigger a build out in the environment itself and possibly substitute our code for theirs. We looked for a way around this, but we could not come up with a solution that infrastructure could support. We argued that we could get away with having NAnt out in the environment to handle deployment; again, and rightly so, infrastructure pointed out that the same problem regarding malicious builds would still exist - basically, we were not to have any build tools out in the environments at all. At this juncture in our brainstorming, I pointed out that MSBUILD is already out there, is comparable to NAnt, and really couldn't be removed as it is needed by .Net! Infrastructure was thankful for this update, but still forbade us from using it for deployments.

Right about now we had two problems on our hands. First up we needed a new CI build tool that could cater for the direction the company was moving towards and secondly, we needed to come up with an automated deployment tool.

We dealt with the CI element first.

We took a look around to see what was on offer. Team Foundation Server could do what we wanted build-wise, but was simply not up to the task as far as our development was concerned. Cruisecontrol.Net was getting left behind with regards to features, and the way in which it worked had also started to become obsolete for us. We looked at some of the other CI tools available; both the head of development and I thought we should go for a best-of-breed solution. This had already been highlighted in our existing setup - none of the tools or frameworks we were using was entirely dependent on another. From the outset, I wanted each element of the overall system to be as agnostic as possible, so that when the time came to change something there would be little impact on our ability to carry out testing and deployments. We looked at other tools out there, things like Anthill Pro. We ended up going for a tool called TeamCity. It pretty much ticked all the boxes: it was free (up to a certain point) and worked on a server/agent architecture - and it was this feature that decided it for us.

On adopting TeamCity as our main CI tool, we asked for one extra server in the build environment (infrastructure did ask why we needed a fifth machine when we were only using one of our existing four...). The new addition was pretty important. Cruisecontrol.Net is a complete build server in a box, which means you need to have it installed on each machine you want to build from (or at least, this is how it worked when we were using it). TeamCity is a little more subtle. It makes use of what are termed 'build agents'. With a server constantly listening for build agents, you can add as many build servers as you want to cater for each of your projects. For instance, I could have (and have) run a Linux box to build Mono assemblies; this box would be configured for that type of build only. My TeamCity server runs on a Windows 2008 machine on the network and has been configured to only allow Mono builds to take place on the box running Linux. With Cruisecontrol.Net, you would have had to configure an instance of Cruisecontrol.Net on each machine you wanted to build apps from. The TeamCity agent/server design is much, much more favourable. It means you can have a relatively cheap box sitting somewhere that is configured to do only one type of build - or in fact be configured to represent a machine out in an environment. You begin to ensure that the code you are building and testing conforms to the spec laid out for that piece of build, with one server controlling it all and providing valuable MI. When we got to see this in action, we were very pleased. Now we could build for any spec and deliver information on those builds to whoever needed feedback.
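The server/agent idea boils down to capability matching: the server knows what each agent can do and only routes a build configuration to an agent that satisfies its requirements. A toy illustration of the concept (this is not TeamCity's actual API; all names here are made up):

```python
# Toy model of a CI server routing builds to capable agents, in the spirit
# of TeamCity's server/agent architecture. Purely illustrative.
class BuildAgent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class BuildServer:
    def __init__(self):
        self.agents = []

    def register(self, agent):
        """An agent announces itself and what it can build."""
        self.agents.append(agent)

    def dispatch(self, build_name, requirements):
        """Return the name of the first agent satisfying the build's requirements."""
        for agent in self.agents:
            if set(requirements) <= agent.capabilities:
                return agent.name
        raise RuntimeError(f"no agent can run {build_name}")
```

The payoff is exactly what the post describes: a cheap Linux box registered with only a "mono" capability will never be handed a Windows build, and vice versa, while one server keeps the overall picture.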

This again made people happy. It made me happy because it meant that all my guys could simply get on with what they needed to do. The guys writing and maintaining the code were not bogged down manually checking that everything had built correctly and matched the spec for that piece of build. It enabled us to simply agree on a build spec with each development team and implement it. In essence, this was a win all over.

As TeamCity uses the server/agent architecture, infrastructure was not averse to us placing a build agent out in the environments to start working on a delivery system if we needed to. There was a lot of work that needed to take place prior to this though; the workstream that would enable a build to be deployed to an environment needed to be bashed out first. For instance, one of our requirements was that we wouldn't release a build to an environment until it had:
  • Been built successfully.
  • Passed its unit tests.
  • Passed its UI tests on the build box.
These were pretty simple requirements for us. The majority of the work left was to develop the NAnt tasks needed to carry out environmental testing and to create a deployment platform.
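The release gate above is simple enough to sketch in a few lines. This is only an illustration of the rule, not any real tool's code; the stage names are invented:

```python
# A minimal release gate: a build is eligible for deployment only once
# every required stage has passed. Stage names are illustrative.
REQUIRED_STAGES = ("build", "unit_tests", "ui_tests")

def ready_to_release(stage_results):
    """stage_results maps stage name -> bool. Missing stages count as failed."""
    return all(stage_results.get(stage, False) for stage in REQUIRED_STAGES)
```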

There was a pseudo deployment platform in place already; it worked as part of the existing application. It was primarily used to handle some functions out in an environment - things like pushing assemblies out to application servers and controlling web servers. I took the decision to move away from this system: I didn't want the build/test/deployment process to be dependent on the actual product we were making.

So, we came up with a prototype platform that implemented .Net Remoting and MSBUILD. There were a few other staging areas as well, like TFTP servers and small services to control things like web-server availability and application servers. As the automated deployment project went on, we encountered new tools like PowerShell. To me, this was a very useful tool; it meant that I could create C# code that could be accessed directly from within a PowerShell script (we already knew we could script .Net code in NAnt, but we didn't like doing this). So, MSBUILD gave way to PowerShell; we liked it, and so did infrastructure. With these tools in place, we needed to define the automated release process. It went a little like this:
  • Build machine is selected from server farm.
  • Build machine clears a workspace.
  • Latest code is updated to the build machine.
  • Latest code is built.
  • If the build was successful, unit testing takes place.
  • On successful unit testing the assemblies are packaged for deployment.
  • Selected environment would begin a countdown to unavailability.
  • Once build is packaged and the environment is down for deployment, existing code is uninstalled on target machines.
  • Latest build is deployed and activated on target environment.
  • Tests are then carried out in the environment, mainly UI testing.
  • On successful test completion, all relevant schema and SQL is deployed to that environment's DB servers.
  • If the new code fails at all, everything is rolled back and the DB is not touched.
And that was pretty much it. All we needed to do was come up with some simple .Net classes to handle some of the deployment steps via .Net Remoting. Our PowerShell scripts would then handle the process out in the environment. If anything failed, everything was rolled back before the DB release and the team responsible for the build would be notified.
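The rollback behaviour described above can be sketched generically: treat each deployment step as a do/undo pair, run them in order, and if any step fails, undo the completed steps in reverse so the database step is never reached. This is only a sketch of the pattern (in Python for illustration), not the actual PowerShell/.Net Remoting implementation:

```python
# Generic deploy-with-rollback skeleton. Each step is a (do, undo) pair of
# callables; a failure anywhere unwinds everything already done.
def run_deployment(steps):
    """Run steps in order. On any exception, undo completed steps in
    reverse order and return False; return True if all steps succeed."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):
            undo()
        return False
    return True
```

Ordering the DB schema/SQL release as the final step means a failed UI test rolls the application servers back while the database is never touched, which matches the process in the list above.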

We could have run a test prior to committal on each developer's machine. However, the head of development and I thought that this could be a little tricky to carry out, as there were so many developers working on so many work-streams in so many environments.

When it came to deploying to production, the same scripts and classes were used, but with intervention from someone in the release team to guide it all. The same process was followed, only with the steps carried out manually.

Just prior to my leaving that company, we had created a complete build-to-release system in just under a year. The system I designed and created with the other guys in the release team (and ultimately the test team, when I was promoted) was a world away from the VB scripts we used when I first joined. Now, each customer of the company has its very own build and test specification as well as an automated deployment specification.

Doing all of this would not have been possible without the tools at our disposal. There are earlier posts on my blog on how NAnt can be used to create new tasks. Systems like TeamCity allow you to concisely manage all aspects of a development work-stream from end to end. The only thing I never got around to doing for that company was implementing FitNesse.

I continue to praise CI and Agile to my colleagues, and whenever I start a new contract, I often ask what build process they have in place. Directly after working for that company, I joined a small team within a huge organisation that needed to create a mini version of my last CI project. I now get asked by developers and employers to explain just how easy it is to implement for their own projects. It astonishes me sometimes to see very complex pieces of software not getting even the simplest unit test - and having to go through a manual build phase with a set specification.

Saturday, 3 July 2010

Hyper Neo Geo 64

As a lot of people who know me will attest, I am a fan of video games. Pretty much wherever there is a TV in my house, there will be a video game console attached to it.

If you are a guy around the same age as me, it is possible that you will remember some of the great video game consoles of the 1990s. One that may stick out a bit was the Neo Geo, made by SNK. I remember this console having a near-mythological impact on video gaming; the games themselves cost almost as much as the console itself, and chances are you never met anyone who owned a Neo Geo.

The console itself was pretty much a stripped-down version of an arcade machine, which is why the games cost so much money. The AES, the home version of the console, provided arcade-perfect graphics at a time when people were arguing about whether the SNES was superior to the Mega Drive. There were a number of attempts by SNK to create newer, cheaper hardware for home use - they came up with the Neo Geo CD as well as two handheld consoles.

I think one of the most interesting machines they came up with was the Hyper Neo Geo 64. From what I can make out, this was SNK's attempt to flirt with 3D graphics. It never made it to the home market and failed as an arcade machine, with only a few games made for it.

Given my interest in consoles, I have embarked on a search to pick up a Hyper Neo Geo 64 and see if I can get it firing. I love old tech like this, so I am hoping that it won't be too expensive to get one.

Continuous Integration and SCM

Last month, I went on a bit about Continuous Integration and how I have developed myself and my skills in relation to it.

I left off by saying that the needs of the company had expanded somewhat, as had the needs of individual development teams. My team was slightly different to the main production and new-build teams. The work and code we produced was made to facilitate the development of the applications we offered to our customers, so for us there was very little in the way of UI development. The code that my team developed was primarily to support the rest of the development teams, enabling them to get a bird's-eye view of development at any one time and across all projects. We didn't restrict this to management - it would be a bit daft to do that. Pre-CI, we often heard the complaint "we just do not know what is happening". In the build team we were responsible for maintaining all of the build and test environments; infrastructure took care of the guts of them, and we maintained each environment based on a development team's specification.

So, to start developing version 2.0 of the CI system, we needed to appraise a few things. First up was our SCM tool. Our current tool was just not geared up for what we wanted it to do. We had been using it for quite some time before I joined the company; it was fine in the pre-.Net days when there was only one customer, but as the company grew we needed something more robust, extensible and modern. So one of the first things we did was move away from Surround as our SCM.

There were a number of options available to us at this point. Team Foundation Server could have catered for us at version 1.0 - it had an SCM system, it could carry out builds on more than one server, and so on. The only trouble was that it was v1.0 software; even though it was a production release, it was a pain to get working. Thankfully, the head of development had looked into this issue some time before, with a view to migrating from Surround to a newer tool. His initial efforts had been met with apathy - there was no need for a new tool as the one they had was fine. So, when the time came, the head of development pointed me in the direction of a package called Accurev.

By this point I had been made up to head of deployment and testing, and my main aim was a complete CI system: from build, to testing logic, to deployment to an environment, and then further testing (UI stuff - this was a web app). I took a look at Accurev and liked it pretty much straight away. It was a very modern tool: it let you work under the Agile umbrella of software development, it allowed my team to quickly configure code and builds, and at the same time it gave the releasers time to make sure the code we were about to release was the correct code (there was a time, prior to Accurev, when code hadn't been checked in or rebased properly).

To fit it into our CI system, all we needed to do was create a number of NAnt tasks to work with Accurev - nice little tasks to make sure that the code was being updated on the build server. Integrating Accurev, Cruisecontrol.Net and NAnt was pretty simple in the long run, and it let management chill out when we were able to demonstrate how a fairly significant migration could take place without a lot of drama.
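We never published those tasks, but the shape of the integration is easy to sketch. The fragment below is illustrative only - the target name and workspace path are made up - and it uses NAnt's built-in exec task to shell out to the Accurev CLI rather than a dedicated custom task:

```xml
<!-- Illustrative only: target name and paths are hypothetical. -->
<target name="scm-update" description="Bring the build workspace up to date from Accurev">
  <!-- "accurev update" syncs a workspace with its backing stream; a dedicated
       custom NAnt task would wrap this same call with proper error handling. -->
  <exec program="accurev" workingdir="C:\builds\workspace">
    <arg value="update" />
  </exec>
</target>
```

A custom task earns its keep once you want to parse the command output and fail the build cleanly, but the exec version is where everyone starts.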

With our new SCM in place, and all the developers and other people who needed access to code trained up on how to use it, we were ready for take-off. It took a weekend to export the code we needed from our old SCM into our new one; scripting the whole event didn't take long, but getting the code from A to B did. Once it was done, we had a nice new shiny SCM that everyone knew how to work, and we were ready to re-task the deployment/release team with a new method of CI.

Of course, there were some hiccups along the way. The scariest was just after migration, when one of my colleagues accidentally deleted some very important code a few days before a major release. There was a bit of panic at the beginning, but we averted a major catastrophe fairly quickly. One of the good things about Accurev is that it never, ever forgets code - it stores everything ad infinitum - so after a quick look through the manual and a few commands on the command line, we were able to bring back all the code we needed.

With this migration complete, we were only a few steps away from providing a major update to our CI system.

Thursday, 10 June 2010

Unit Testing .Net Automatically

When I am working on a project, I try to keep the tools we use for building and testing in the same language as any production code uses. So, if I am working on a .Net project, I will endeavour to use .Net wherever possible. The same can be said for things like Java, although I rarely develop in this language, I will try and make sure that the tools used to build, test and deploy the code are based in Java.

Recently though, I have been working on pure C# projects. Since becoming a contract developer, I have encountered a number of lead or senior developers who simply turn around and say "no unit testing, just make it work". It makes me a little sad when I hear comments like this. As far as I am concerned, unit testing is one of the most useful practices a developer can exploit: it cuts down the number of logic failures in testing and in production. For a relatively small amount of work prior to development, both analysts and developers can agree the unit tests that support the specification - and once the test for each unit of code is defined, it rarely has to change again. This cuts down the number of failures picked up in testing. Some lead developers I have worked with recently have turned down unit testing flat simply because they do not understand what a unit test actually is. You find this situation in teams that are under-resourced and over time and budget: the embattled lead developer, possibly dealing with a bit of cynicism, just wants to make the deadline. But one thing I have seen is that if you take the time to explain the benefits of unit testing, they normally come round and embrace the concept.

These days, though, Visual Studio comes with built-in unit testing support. I think it is about time this happened, even though I am not a fan of what is bundled. My unit testing tool of choice is MbUnit, which these days is part of a much larger test automation platform called Gallio. I like MbUnit for one reason in particular: you can test against a DB and roll back any changes once the test is complete, pass or fail. I know this arguably isn't how unit tests should typically be used with a DB, but for small things it's fine.

Most unit testing tools come with a test runner that you can use to run your tests, and there are also VS add-ins that will do this for you - but I am not going to teach anyone to suck eggs here. Instead, I am going to examine how unit testing can be automated as part of a larger build process. Some unit testing frameworks come with an example NAnt or MSBUILD task that can be slotted straight into your build process. There are some out there that will not do this, and/or you may need to execute tests with a test runner outside of NAnt or MSBUILD. Another point in Gallio's favour is that it has very good support for NAnt. Gallio provides a task that lets you drive the test framework automatically, and there is a really good example of how this can be done here. The fragment of xml on that site is enough to demonstrate how Gallio can be used with NAnt; in practice, however, you may find it too simplistic, especially for larger builds.
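For the sake of completeness, the Gallio task usage looks roughly like this. The attribute names are from memory, so check the Gallio documentation before copying, and the assembly path and report directory are placeholders:

```xml
<!-- Rough sketch from memory; verify task and attribute names against the Gallio docs. -->
<loadtasks assembly="tools\Gallio\Gallio.NAntTasks.dll" />
<target name="unit-test">
  <gallio report-types="Html" report-directory="build\reports" failonerror="true">
    <files>
      <include name="build\bin\foo.tests.mbunit.dll" />
    </files>
  </gallio>
</target>
```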

The trick comes down to maintaining your unit tests. IMHO, a naming convention for unit test assemblies is a must; calling your unit tests something like foo.tests.mbunit is very important for managing the fixtures that will carry out your tests. Personally, I like to keep unit tests within the project they test. You could, I suppose, create a separate solution containing all of the test fixtures, but I am of the mind that if you have a project then the fixtures should go in that project to maintain encapsulation.

Anyway, regardless of where you keep your fixtures, the naming convention is pretty important, as it lets NAnt loop through all assemblies that match it. You could, of course, use NAnt to run an external tool to do this for you instead.
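A sketch of what I mean, assuming the foo.tests.mbunit convention from above - NAnt's foreach task can walk every assembly matching the pattern:

```xml
<target name="find-test-assemblies">
  <!-- Walk every built assembly that matches the test naming convention. -->
  <foreach item="File" property="test.assembly">
    <in>
      <items>
        <include name="**\bin\**\*.tests.mbunit.dll" />
      </items>
    </in>
    <do>
      <echo message="Found test assembly: ${test.assembly}" />
      <!-- Hand ${test.assembly} to your runner of choice here. -->
    </do>
  </foreach>
</target>
```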

One complaint I used to hear from developers just getting used to unit testing was that they needed to run a command on the CLI, or use a test runner, to prove their code. Again, there is a simple solution to this problem called TestDriven.Net. It is a fantastic little tool that allows you to run test fixtures from within VS itself. I know this may seem like teaching some people to suck eggs, but there are many developers new to .Net, or to development itself, who may not know about some of these tools. I am not going to walk through a full NAnt setup here - there are lots of NAnt scripts I have written for other bits and pieces in the past, and these, coupled with the support docs provided by Gallio, NAnt et al., will show you how it can be done.

Wednesday, 14 April 2010

Back once again with the renegade master

Been a while since I last posted on here. My aim of putting up at least one worthwhile post every month seems to have fallen by the wayside.

I still haven't set up my old Acorn machine correctly yet, and cunningly my main desktop has decided it doesn't want to speak to my HDD any more.

Good news though: I have a new job that is very interesting, to say the least, and I have moved house.

Bad news: I totalled my bike, and I have suffered from bad TV and travel karma over the last few weeks. But onwards and upwards! Over the next few weeks I intend to put up at least one post concerning the Levenshtein Distance. There are lots of interesting proofs of concept out in the wild at the moment, but none concerning an actually useful example. I know that not everyone will be able to implement a really good piece of code to do this (and I doubt anything I produce will be class-leading), but this is something I have to work on in my new job, so writing about it will help me figure it out in my head :)

Saturday, 3 April 2010

Continuous Integration

It's about time I started blogging about CI again, given that I have developed and worked with some of the best CI systems I have had the pleasure to encounter.

When I first got involved with CI, it was with Java development. At the start we simply used Ant to carry out our builds and tests on remote servers. As time progressed, we started to look at some of the CI build tools available to us and came across Cruisecontrol, which I think was the tool du jour when it came to build servers. I was naturally impressed with the features Cruisecontrol offered at the time, and we were able to put together a simple CI environment to carry out our builds and/or churns. As time went on, we looked for ways to integrate Cruisecontrol with our main repository - the company I was working for at the time used Perforce as its SCM, and we also had a tool called the repository that held all of our application metadata (we made mobile phone apps, so the metadata was a collection of definitions per handset and carrier). By the time I left that company, we had almost complete integration of our build environment with our repository and SCM.

From there, I moved to a .Net software house, bringing with me the concept of Cruisecontrol and CI. My new employers could best be described as a group of energetic younger people who worked on the bleeding edge. This time round it wasn't mobile phone apps; instead it was the ultra-serious realm of financial software. When I joined, the guys had just put the finishing touches to a migration to .Net - easy to implement, but unforeseen issues cropped up; that is a story for another day ;). These guys were up for anything when it came to CI and automated testing. I introduced them to Cruisecontrol and Ant for creating a CI build environment, and the head of development (my boss at the time) was very interested in implementing a unit testing regime for the newly migrated .Net code. I remember working with an intern who had been tasked with creating a set of guidelines for unit testing within the integration code; this was about the time I started looking at Cruisecontrol.Net and Nant.

Given that my new place was a .Net house, it made little sense to implement a suite of Java-based tools when there were viable .Net versions available. With this said and done, the company branched out into the new realm of .Net CI! It was a success from the get-go: upper management liked the idea of seeing nice graphs and reports on build status, and the developers liked being told when a specific build passed testing - it allowed pretty much anyone in the developer and analyst pool to see what was happening with their build and others.

I feel that good CI is often based on some simple foundations.

First off would have to be good code. It doesn't matter if new-build code occasionally fails at a lower level - teams should expect production code to go through some teething problems, and this is something CI would come to reduce further down the line.

Next, we needed a decent environment in which to build our code. At the start we used Cruisecontrol.Net on one build server, but we had another three servers available should we need them. As unit testing was rolled out across all of development, everyone could see problems before a build made it out onto a test environment. Naturally, this annoyed some developers who didn't see the point in unit testing their code, but being in the build development team I could see the importance of unit testing, and so could my colleagues.

The next thing that makes up a good CI environment is Software Configuration Management. I have worked with some excellent SCMs and some truly awful ones (read: Visual SourceSafe). At this point the company was using a tool called Surround (made by Seapine). It was a fairly good SCM, if a little clunky and with some performance issues, but we needed to move away from it: we had to reach a point where we could run multiple projects from a common library of assemblies, and Surround wasn't really up to this.

Being a .Net house, we had access to all of the latest tools from MS, and naturally Visual Studio was in there. At around this point in time, MS was starting to promote a tool called Team Foundation Server, essentially MS's take on CI. It came with all the bells and whistles that a good CI system should offer: built-in SCM, built-in reporting, built-in data collection and the ability to carry out Team Builds (MS's take on a Cruisecontrol-style tool). It also came with a tool called MSBUILD, which I like very much.
MSBUILD was the replacement for NMake. I am going to assume that most people reading this article are familiar with Apache Ant and/or Nant; MSBUILD borrows a lot from these tools. Essentially, it is an XML-based script that lets you automate a great deal of pre-build, build and post-build activities. At its simplest, it allows you to carry out a build on the command line.
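To give a flavour of what that looks like, here is a minimal MSBUILD script (the solution name is a placeholder) that you would invoke with "msbuild build.proj" from a Visual Studio command prompt:

```xml
<!-- build.proj : minimal MSBUILD example; MySolution.sln is a placeholder. -->
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration>Release</Configuration>
  </PropertyGroup>
  <Target Name="Build">
    <!-- Delegates to the solution; the property flows through to every project in it. -->
    <MSBuild Projects="MySolution.sln" Properties="Configuration=$(Configuration)" />
  </Target>
</Project>
```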

Now, when it came to the CI system that I built, I didn't want to use MSBUILD - at the time it seemed too constricting to me - so instead I turned to what is probably MSBUILD's biggest rival, Nant. At the present time both tools are pretty much identical in what they can do and how build files are written and maintained, but back in the day Nant offered slightly more flexibility than MSBUILD.

So, to the system that I built. One of the strong points of Nant (and MSBUILD) is that you can write your own tasks for your build/test configuration. This is great when you need to add something small to the config - it could be as simple as plugging in a third-party tool. Both tools offer a lot of functionality from the get-go, so all I had to write for the first CI system there was the integration with our SCM: I needed to create some Nant tasks that would work with it.
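Wiring a custom task into a build file is simple enough: compile your task into an assembly, pull it in with Nant's loadtasks task, and use it like any built-in task. The assembly and task names below are made up to show the shape of it:

```xml
<!-- Hypothetical names: the assembly and the <scm-sync> task stand in for our real ones. -->
<loadtasks assembly="tools\MyCompany.ScmTasks.dll" />
<target name="get-latest">
  <scm-sync stream="main" workspace="C:\builds\workspace" />
</target>
```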

When we first went live with this system, there were some complaints. These mainly came from the developers working on new build, who would now be informed of breaks very early in the build process, automatically - Cruisecontrol.Net allowed you to do this. There was also some resistance to unit testing at this time; some developers simply did not want to see their code fail at such an early stage. Both the head of development and I held the same view: unit tests are a must-have, as they find issues well in advance of a build making it out into test, where an analyst may or may not find the issue. The build development team was also subject to unit tests, so across the entire company unit testing became mandatory (especially once management started looking at lovely reports showing them test coverage and pass rates). A lot of the developers who complained were of the "just get it to run" frame of mind; thankfully, these developers were vastly in the minority.

As time progressed, we found that Cruisecontrol.Net was beginning to feel a little constrictive. We were using it for build and test for the entire company (read: multiple projects) on one build server. By this time, upper management was extremely impressed with what had been achieved with the setup - it had sped up build and test time across the board.

When companies are successful, they often start branching out to get more clients onto their systems, and the one I was working for was no different. This was financial software, so the money to be made from efficient development was astronomical. As the shape of overall development changed, so did our attitude to building it. With the current setup we couldn't handle these sorts of builds on a daily basis: we were running CI for one workstream only, and the system could not cater for different builds for different projects. We were also under-using the build environment we had, with only one machine in use out of the four available to us.

We knew we needed to change our build tools, and management was concerned that we would lose development time migrating from one system to another. We quickly put that fear down by explaining that the build scripts we were using were not dependent on the CI system - in other words, the Nant scripts we used for all of the builds would work with any CI tool. This brought a sigh of relief to many a manager and team lead. Right up until this point, we had been running a complete build and test of the new build for just one customer, automatically. This took place at scheduled times throughout the day, meaning developers could work away to their hearts' content pretty much 24/7 (and some did come close).
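For anyone curious what the scheduling side looked like, a cut-down ccnet.config project along these lines does the job. The project name, times and paths here are invented, and it is the nant task in the middle that keeps the real build logic portable between CI tools:

```xml
<!-- Illustrative ccnet.config fragment; project name, times and paths are invented. -->
<project name="NewBuild">
  <triggers>
    <!-- Kick off a full build and test run at fixed times through the day. -->
    <scheduleTrigger time="07:00" buildCondition="ForceBuild" />
    <scheduleTrigger time="13:00" buildCondition="ForceBuild" />
  </triggers>
  <tasks>
    <!-- Because the real work lives in the Nant script, swapping CI tool later is painless. -->
    <nant>
      <baseDirectory>C:\builds</baseDirectory>
      <buildFile>master.build</buildFile>
      <targetList>
        <target>build-and-test</target>
      </targetList>
    </nant>
  </tasks>
</project>
```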

This made moving to a new CI system fairly painless. It also allowed us to see what more we could do: not just build and test, but hopefully build, test, deployment to an environment and further testing - a complete lights-out system that would roll back to a working build should the current code fail.