def ant = new AntBuilder()

def command = "./myScript.sh"

ant.sshexec(host: "myHost", username: "myUsername", password: "myPassword", command: command, trust: "true")

For the above you will need the following library (the Maven coordinates below were flattened in the original):

<dependency>
    <groupId>org.apache.ant</groupId>
    <artifactId>ant-jsch</artifactId>
    <version>1.8.2</version>
</dependency>

Code Snippet: Execute shell scripts across ssh using Ant (Groovy)


Promote your Automation

Once you have an established method for producing automated functional tests, one of the most valuable things you can do next is to start promoting them within your development team.

Start off small by just talking about their existence with other developers in your project team. Explain the technologies behind the tests and also their purpose.

Hopefully you will be using a framework such as Cucumber and writing scripts in a programming language rather than relying on record-and-playback. If so, you’ll be able to talk tech with the developers about tools and techniques they are already familiar with.

This approach usually ends up with the developers wanting to take a look at the code. This is an excellent opportunity to ask them to check it out from source control on their own machines.

Having the tests checked out onto a developer’s machine increases the chances of them being run before code is committed. This has obvious benefits for the quality of your software.

This also enables the developers to understand your requirements when it comes to designing with testability in mind. For example, in projects that I’ve worked on, by showing developers my code I’ve been able to explain why I need certain identifiable elements to be added to the markup of a page. This has greatly improved my ability to write accurate and robust tests.

The next step is to get buy-in to integrate the tests into your continuous integration (CI) system. Once they are integrated into CI, usually as part of the nightly build, you can start to shift the responsibility of checking the results towards the developers.

Doing this reduces the feedback time between the tests finding an issue and it being resolved. Typically I have found that, once they get used to it, the developers start checking the test results first thing in the morning. If they find any issues they fix them straight away before continuing with their day’s work.

As a side effect to the above, promoting your automation can also improve the awareness within the company of your and your team’s technical ability. This tends to lead to a somewhat increased level of respect from fellow developers along with being included in technical discussions that you may currently be excluded from.

So all in all, if you have automated functional tests then start spreading the word. There’s nothing to lose by doing so!


def expectedElements = []
expectedElements << "//div[@id='1']"
expectedElements << "//div[@id='2']"

expectedElements.each {
    driver.findElement(By.xpath(it))
}

Code Snippet: Iterate through a collection of xpaths (WebDriver/Groovy)


Lean Test Plans

Throughout my career I’ve been asked to write many a test plan. When I started writing them I usually ended up with a 15-page document that I thought laid down the law on how testing was going to be done on my new project. In reality, however, the test plan usually ended up being emailed to the project team and that was that: I never heard anything back.

After doing this a couple of times I started to wonder how what I had written in my test plans matched up with how testing was actually carried out on the project. I started to do post-project reviews comparing the two. Unsurprisingly, I found that what actually happened was nothing like what had been planned. More crucially, no one in the project team had any idea what I was actually doing during the project in terms of approach.

I came to a simple conclusion after these reviews: no one reads large documents. There was simply too much information in the test plans. They became stale very quickly and were too prescriptive; there was no room for reaction. They were therefore useless and a waste of my time.

I tried to work out what information is actually relevant and what information is there ‘just because’. The method I came up with was to think of a test plan not as a prescription for testing but as a tool for handing the project over to another colleague.

I asked myself the question ‘What would I need to know if I were to take on this project?’. In answer to this question I came up with the following sections:

  • Project purpose – a brief description of the project and its purpose
  • Areas of responsibility – the components that make up the project and who’s responsible for them
  • Areas to be tested – as above but from a testing point of view
  • Methodologies and Technologies – a list of the methods (e.g. exploratory, automated) and their respective technologies (e.g. Cucumber, JIRA)
  • Issue Management – How issues are logged, who to assign things to etc.
  • Test Deliverables – The actual deliverables from the testing effort (e.g. automated test scripts, performance test results)
  • Entry & Exit criteria – Typical entry and exit criteria for the project

I also made a shift from writing the test plans as documents to writing them directly into the wiki alongside all other project documentation. This way they can always be kept up-to-date and can reflect the true status of how testing is being carried out on a project at any given time. It also means that you don’t duplicate information contained in other project documentation (e.g. the test schedule is usually also part of the project schedule).

After implementing these changes I’ve seen an increase in the readership of test plans and also an understanding within the team of how the project will be tested. Even clients are happy to receive such lightweight test plans; they too are tired of sifting through 15 pages of stale information.

So if you are also struggling with communicating your test approach despite your ring-bound test plan, I recommend you ask yourself: ‘What would I need to know if I were to take on this project?’.


// Locate the first node matching the XPath expression
var result = document.evaluate("YOUR_XPATH",
                               document,
                               null,
                               XPathResult.FIRST_ORDERED_NODE_TYPE,
                               null);

// Synthesise a mouse click event
var evt = document.createEvent("MouseEvents");
evt.initMouseEvent("click", true, true, window,
                   0, 0, 0, 0, 0, false, false, false, false, 0, null);

// Dispatch the click against the matched element
var element = result.singleNodeValue;
element.dispatchEvent(evt);

Code Snippet: Find an element by xpath and click on it (Javascript)


Familiarise yourself with Source Control

It is highly likely that within your development team a source control system such as Subversion is currently in use. If you don’t know then I recommend finding out about it as soon as possible.

There are many benefits for a QA in gaining knowledge of these systems and how they are being used. One of the major benefits is that of being able to actually see what code is a part of the build you’ve been asked to test.

By looking through the commit messages since the previous build you can work out what files have changed. This leads to being able to understand what areas of the application have changed and also to work out, at a low level, what other areas might have been affected. This knowledge can become invaluable when it comes to debugging issues or deciding what to regression test.
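As a rough illustration, the changed-file list can be pulled out of the verbose log mechanically. This is only a sketch: the revision range, repository URL, and `changed_files` helper are assumptions of mine, not a standard tool.

```shell
# Sketch: extract the files changed between two builds from `svn log -v`
# output. The revision numbers and repository URL below are examples only.
changed_files() {
    # Verbose svn log lists changed paths as lines like "   M /trunk/src/Foo.java"
    grep -E '^   [AMDR] ' | awk '{print $2}' | sort -u
}

# Real usage would look something like:
#   svn log -v -r 100:120 http://svn.example.com/myProject | changed_files
```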

Once you start to read the team’s commit messages you can gain an understanding of how the team operates in terms of how often they commit and how they think about issues. Don’t be surprised if you see commits of several files in one go with a generic message such as ‘updated for latest build’. Commits like these would suggest that the bigger picture of how this code gets deployed isn’t being taken into consideration.

Improving a team’s use of source control can be as simple as reading a commit message and asking the relevant developer what it means or what has actually been done. More often than not they then realise that people do actually read their messages and work to improve them.

By working towards the goal of using source control not only as code backup but as a communication tool, the team will invariably improve its ability to quickly track down when defects were introduced and to identify the culprit lines of code.

A side effect of taking time to understand source control is that you will, if interested, start to gain a better understanding of how the code of your applications works. By using file differential tools to compare code revisions you can work out how something was coded and how bugs were fixed.

Finally, by trying to understand something that is typically seen as a developer only zone you should gain a bit more respect within the team. You are not only showing that you are interested in what the developers are doing but that you are also trying to improve quality at the lowest levels. Being able to provide relevant and precise information back to development is something that no one can argue against.


Deployment Process

I’m often asked “how can you prove that what has been tested on your QA Environment is the same code that will be released to Production?”. In this post I hope to be able to answer that question based upon my experiences from working in a Java/Web environment.

This process begins with a source control system such as Subversion. Source control systems are crucial for professional software development and are the most basic requirement for any development team. They allow for safe collaboration on a code base and the ability to track changes at a code level. Ideally both the development code and the automated tests for a project should be committed under the same location in the source control system.

The next step is to set up a Continuous Integration system such as Jenkins. These systems allow you to automate a large variety of development tasks but they are especially good at automating code builds. In a system like Jenkins you can create a series of ‘jobs’, each with a specific task. These jobs can then be scheduled or even triggered by the success or failure of other jobs.

The first job that should be created is the ‘Frequent Build’. This job monitors your source control system and will perform a code build on either every commit or on a schedule (e.g. every 5 minutes). This code build will check out the latest code from the source control system, build it using the project’s build tool (e.g. Maven), and then run the unit tests.

The last step of the Frequent job should be to archive the built artefact (e.g. WAR file) into an artefact repository such as Nexus. Ideally these will be ‘Snapshot’ builds which have their own version numbers. The convention I’ve used in the past is ‘projectName-projectType-projectVersionNumber-DateTimeStamp-BuildID’ (e.g. myProject-webApp-1.0.0-20111108.130000-01).
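That naming convention can be sketched as a tiny helper; the argument values below are just the example from the text.

```shell
# Compose a snapshot artefact name following the convention
# projectName-projectType-projectVersionNumber-DateTimeStamp-BuildID.
artefact_name() {
    echo "$1-$2-$3-$4-$5"
}

artefact_name myProject webApp 1.0.0 20111108.130000 01
# prints: myProject-webApp-1.0.0-20111108.130000-01
```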

The second set of jobs that should be created in your Continuous Integration system are Deployment jobs. There should be one job per environment. For example you might typically have:

  • DEV Deployment
  • QA Deployment
  • Showcase Deployment

Each of these jobs should be manually started and should take, as an input parameter, the version number of the required snapshot. When the job runs it should download the specified artefact from the artefact repository and deploy it to the specified environment.
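Sketched out, a Deployment job boils down to resolving the requested artefact in the repository and fetching it. The repository URL, group path, and deploy script below are illustrative assumptions, not a real setup.

```shell
# Build the download URL for a given snapshot version (all paths assumed).
NEXUS_BASE="http://nexus.example.com/content/repositories/snapshots"
GROUP_PATH="com/example/myProject"

snapshot_url() {
    # $1 = snapshot version, e.g. 1.0.0-20111108.130000-01
    echo "${NEXUS_BASE}/${GROUP_PATH}/myProject-webApp-${1}.war"
}

# The job would then do something like:
#   curl -s -o artefact.war "$(snapshot_url "$VERSION")"
#   ./deploy.sh artefact.war qa
```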

This process allows for linking a commit in source control to an artefact deployed on an environment. It also allows for the deployment of older artefacts onto any environment at any time which enables easier debugging of issues found.

The last job in this process is the ‘Release’ job. This job is similar to the ‘Frequent’ job with some small but significant differences. Firstly, it is a manually triggered job which should only be used when an actual release of code is required. Usually this would be at the end of a Sprint or Iteration when a release to the Showcase environment is required.

The release job will checkout a fresh copy of the latest code from source control, build it, and run the unit tests. It will then increment the version number of the artefact and archive it into a release repository. This artefact can then be deployed to the required environment in the same way that a snapshot is.
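The version increment can be pictured as a simple patch-level bump; in a real Maven project this step would normally be handled by the Maven Release Plugin rather than by hand. A minimal sketch, assuming plain three-part version numbers:

```shell
# Bump the patch component of a MAJOR.MINOR.PATCH version string.
bump_patch() {
    major=${1%%.*}
    rest=${1#*.}
    minor=${rest%%.*}
    patch=${rest#*.}
    echo "$major.$minor.$((patch + 1))"
}

bump_patch 1.0.0
# prints: 1.0.1
```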

Before the release artefact is deployed to a showcase or other customer-facing environment it should be deployed to the QA environment. It should then be tested as per the relevant processes whether they be automated or manual. Once it has been OK’d it can then be deployed to the next environment.

By deploying an archived and versioned artefact to different environments through an automated process you can be sure that what has been tested on the QA environment is what is being released to Production.


Issues vs. Defects

When working in an agile environment, more specifically in a Scrum environment, I believe that it’s important to make a distinction between issues and defects. By having two distinct categories for bugs you can communicate their importance more easily and ensure that quality is built in task by task.

Issues

I take issues to be bugs found on a task during a sprint. Each user story should be broken down into several tasks covering areas such as back-end development, documentation, front-end development, automation etc. Each one of these tasks should be passed to QA when complete. It should then be the QA’s responsibility to decide whether it can be tested or not.

When a task has been tested a decision can then be made as to whether it has passed or failed. If it has passed then it can be closed. If it has failed then the task should be passed back to the relevant party (the back-end developer for example) with the required information i.e. steps to reproduce etc. This information is usually supplied as a comment attached to the task.

By keeping these bugs relative to their respective tasks you can easily track how many times tickets are moved between Development and QA. You are also ensuring that the smallest parts of the product are being produced to the highest quality possible.

If a task has failed testing at this point then I wouldn’t classify it as a defect but rather say that the task is not complete. Once all tasks have been deemed complete then, by rights, the user story can be closed. This should ensure that your ‘Definition of Done’ is met for every user story.

Defects

To me defects are unexpected bugs found on a user story that has been previously completed. The key part here is making a distinction between bugs found during and bugs found outside of a sprint. If a bug has been found outside of a sprint then it must be raised in its own ticket. This way it can be added to either the sprint backlog or the product backlog and re-estimated and planned.

Whilst development is being done on the defect ticket it should be treated like a user story’s task. That is, any bugs found during re-testing should be treated as issues.


Setting up Eclipse for Arduino Development

Eclipse Download & Plug-ins

First off you will need to download Eclipse for C++ Development (the Eclipse CDT package).

If you already have Eclipse then you can just install the C++ plugin using this update site: http://download.eclipse.org/tools/cdt/releases/galileo

Next you need to get the AVR plug-ins; you’ll need to download them and then install them manually in Eclipse (‘Install New Software’ >> ‘Add’ >> ‘Local’).

Configure AVR

Open the Eclipse Preferences dialogue and expand the AVR section, then select ‘Paths’. Verify that the correct AVR Toolkit has been selected (CrossPack-AVR for OS X).

You’ll now need to create an Arduino configuration so select AVRDude and click ‘Add’.

  • Give the configuration a sensible name e.g. Duemilanove.
  • Select ‘Arduino’ under Programmer Hardware.
  • Enter the port that your Arduino will connect to. To find this out you can open Terminal (whilst the Arduino is plugged in) and type ‘cd /dev/tty.usbserial’. Then press Tab, it should look something like ‘/dev/tty.usbserial-A700dXeQ’.
  • Set the Baud Rate to 57600
  • Click ‘OK’
  • Click ‘Apply’

Retrieve core library from Arduino IDE

You’ll need the core Arduino library to build your projects. The best way to do this is as follows:

  • Open the Arduino IDE
  • Open a ‘Sketch’ e.g. Blink (from examples)
  • Click ‘Verify’
  • With the Arduino IDE still running, open a Terminal
  • cd to ‘/private/var/folders’
  • Run ‘sudo find . -name core.a’
  • cd into the resulting directory (should be something like ‘….build6403021802689133568.tmp/’)
  • Copy ‘core.a’ to your Eclipse workspace and rename to ‘libcore.a’

Create and Configure your C project

  • Select File >> New >> C Project
  • Select ‘Empty Project’ under ‘AVR Cross Target Application’
  • Give the project a sensible name e.g. Blink (we will use the ‘Blink’ example from the Arduino IDE as a template)
  • Click ‘Finish’
  • Copy ‘core.a’ from the Eclipse workspace into this project’s root directory (e.g. ../workspace/Blink/)
  • Right-click the project in the Project Explorer and select ‘Properties’
  • Expand the AVR section and select AVRDude
  • Select your Arduino configuration from the drop-down and click ‘Apply’
  • Select ‘Target Hardware’
  • With the Arduino plugged in, click ‘Load from MCU’
  • If this fails then select the correct MCU Type from the drop-down (ATmega328p for Duemilanove or ATmega168 for Diecimila)
  • Set the frequency to ‘16000000’ and click ‘Apply’
  • Select ‘C/C++ Build’
  • Verify that ‘Builder type’ is set to ‘External builder’
  • Expand the ‘C/C++ Build’ section
  • Select ‘Settings’
  • Select ‘Additional Tools in Toolchain’
  • Ensure that only ‘Generate HEX file for Flash memory’, ‘Generate HEX file for EEPROM memory’, ‘Print Size’, and ‘AVRDude’ are ticked
  • Select ‘AVR Assembler’ and verify that the command is ‘avr-gcc’
  • Select AVR Assembler >> Debugging
  • Select ‘No debugging info’
  • Select ‘AVR Compiler’ and verify that the command is ‘avr-gcc’
  • Select AVR Compiler >> Directories
  • Add this directory path: ‘/Applications/Arduino.app/Contents/Resources/Java/hardware/arduino/cores/arduino’ (modify to reflect where your Arduino IDE is installed)
  • Select AVR Compiler >> Debugging
  • Select ‘No debugging info’
  • Select AVR Compiler >> Optimization
  • Select ‘Size Optimizations (-Os)’ from the drop-down
  • Select ‘AVR C Linker’ and verify that the command is ‘avr-gcc’
  • Select AVR C Linker >> Libraries
  • Add ‘core’ to the Libraries
  • Add ‘${workspace_loc:/Blink}’ to the Libraries Path (where ‘Blink’ is the project name)
  • Click ‘OK’

Add source code to your project

Select File >> New >> C Source File and call it ‘main.c’. Copy the following code into ‘main.c’:

#include "WProgram.h"

extern void __cxa_pure_virtual() {
    while (1);
}

int ledPin = 13;

void setup() {
    pinMode(ledPin, OUTPUT);
}

void blink(int n);

void blink(int n) {

    for (int i = 0; i < n; i++) {
        digitalWrite(ledPin, HIGH);
        delay(500);
        digitalWrite(ledPin, LOW);
        delay(500);
    }
}

void loop() {

    blink(3);
    delay(1000);
    blink(3);
}

int main(void) {

    init();

    setup();

    for (;;) {
        loop();
    }

    return 0;
}

Send to Arduino

Now click the Hammer icon on the toolbar (or select Project >> Build Project from the main menu). This will automatically transfer the program to your Arduino.
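For reference, the AVRDude step of the build runs something equivalent to the command below; the serial port and hex file name are examples, so substitute your own.

```shell
# Compose the avrdude upload command for a Duemilanove (ATmega328p at 57600 baud).
avrdude_cmd() {
    # $1 = serial port, $2 = built hex file
    echo "avrdude -p atmega328p -c arduino -P $1 -b 57600 -U flash:w:$2"
}

avrdude_cmd /dev/tty.usbserial-A700dXeQ Blink.hex
```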

You can now copy this project and use it as a template for all future Arduino projects, saving you the hassle of setting up all of the above again!

Hope this helps you and happy coding!
