Monday, January 13, 2014

Software Testing: Performance Testing

A performance test measures, or determines, the performance of an application or an application component under a defined workload.  Of all non-functional testing, it is probably the most commonly executed type of test.
 
The overall purpose of a performance test is to determine whether the application will remain functionally correct and responsive even at high workloads.
The objectives of a performance test would be something along the lines of:
    * Determine if the application can support the expected workload
    * Find and resolve any bottlenecks
It is very difficult (i.e., time-consuming and expensive) to build and replicate in a test environment an exact simulation of the workload that the application will be expected to process in production.  It is much easier (i.e., quicker and cheaper) to build an approximation of the workload.  Often the 80:20 rule is used to persuade project managers that an approximation makes more sense: that is, 80% of the workload is generated by 20% of the functionality.  Of course, no two applications are the same; in some we can easily achieve 90:10, in others it is more like 70:30.  Careful analysis by the performance tester will help determine the volumetrics for the application and therefore which functions should be included in a performance test.
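As an illustration of this kind of volumetric analysis, the sketch below (in Python, with hypothetical per-function transaction counts standing in for real production figures) picks the smallest set of functions whose combined volume reaches an agreed coverage target:

```python
from operator import itemgetter

def select_workload(volumes, coverage=0.80):
    """Pick the smallest set of functions whose combined transaction
    volume reaches the target share of the total workload."""
    total = sum(volumes.values())
    selected, covered = [], 0
    # Take the busiest functions first until the target is reached.
    for name, count in sorted(volumes.items(), key=itemgetter(1), reverse=True):
        selected.append(name)
        covered += count
        if covered / total >= coverage:
            break
    return selected

# Hypothetical daily transaction counts for an example application.
volumes = {
    "login": 50000,
    "search": 30000,
    "checkout": 12000,
    "update_profile": 5000,
    "export_report": 3000,
}
print(select_workload(volumes))  # → ['login', 'search']
```

Here two of the five functions carry 80% of the volume; the remaining functions would be candidates for the manual and functional-environment checks discussed below.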

Using the 80:20 rule is, in essence, a compromise of the testing effort.  While some or most performance issues will be detected, performance issues associated with functionality not included in the performance test could still cause problems on release to production.  Further steps can be taken to minimise this possibility, including:
    * Manually exercising functions not covered by the automation while a performance test is executing
    * Observing and measuring performance, especially database performance, in functional test environments
Once an approximation of the production workload has been determined and agreed, the performance tester works towards building the automation into a workload that can be executed in an orderly and controlled fashion.  The work early on in the performance testing process becomes a good foundation on which to analyse and publish results, ultimately determining if the application can or cannot meet the specified objectives.
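To show the general shape of an orderly, controlled workload, here is a minimal load-driver sketch.  The transaction() function is a hypothetical stand-in for one scripted business transaction; real tools such as JMeter or LoadRunner layer pacing, ramp-up and richer result collection on top of the same idea.

```python
import threading
import time

def transaction(results, lock):
    """Hypothetical stand-in for one scripted business transaction."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real request/response
    elapsed = time.perf_counter() - start
    with lock:
        results.append(elapsed)  # record the response time

def run_workload(virtual_users=5, iterations=3):
    """Run the transaction concurrently for a fixed number of
    virtual users, each performing a fixed number of iterations."""
    results, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=lambda: [transaction(results, lock)
                                         for _ in range(iterations)])
        for _ in range(virtual_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

timings = run_workload()
print(f"{len(timings)} transactions, "
      f"avg {sum(timings) / len(timings) * 1000:.1f} ms")
```

Because every response time is captured, the same run produces the raw data needed for the analysis and publishing of results mentioned above.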

Performance tests usually need to be run multiple times as part of a series of test-tune cycles.  Where a performance bottleneck is detected, further tests are run with an ever-increasing amount of tracing, logging or monitoring in place.  When the cause of the problem is identified, a solution is devised and implemented.  The performance test is then re-run to confirm the bottleneck has been removed.
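The pass/fail decision at the end of each test-tune cycle can be sketched as a simple check of measured response times against an agreed objective.  The 95th-percentile target and the sample timings below are invented for illustration:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest value with at least
    pct percent of the samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

def meets_objective(samples, target_ms, pct=95):
    """True when the chosen percentile is within the target."""
    return percentile(samples, pct) <= target_ms

# Hypothetical response times (ms) from one performance test run.
run = [120, 130, 110, 500, 125, 140, 135, 115, 128, 132]
print(meets_objective(run, target_ms=200))  # → False
```

The single 500 ms outlier is enough to fail a 95th-percentile objective of 200 ms, which is exactly the kind of result that triggers another tune cycle with more tracing enabled.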

Saturday, January 11, 2014

Software Testing: Agile Software Testing

Nowadays, software development teams frequently use the term “agile”. Unless you have been trekking in the Andes for the past 5 years, you will no doubt have heard somebody in your organisation talking about “agile” software development, or read about some aspect of “agile” on any number of software development and technology related web sites.


Adoption of agile-based methods has increased significantly.
Agile software development methodologies appeared in the early 1990s, and since then a variety of agile methodologies such as XP, SCRUM, DSDM, FDD and Crystal, to name but a few, have been developed.
The creators of many of these processes came together in 2001 and created the “Agile Manifesto” which summarised their views on a better way of building software.


Agile software development methodologies have turned the traditional view, in which higher levels of testing such as acceptance testing wait for a fully built system to be available, on its head.
Testing from the very start of the project, and continuing to test throughout the project lifecycle, is the foundation on which agile testing is built. Every practice, technique or method is focused on this one clear goal.
So what does a tester now need to know and do to work effectively within a team delivering a system using an agile method?
The concept of the whole team being responsible for quality (“the whole team concept”), rather than just the testing team, is a key value of agile methods.
Agile methods require the development team to write unit tests and/or follow Test-First Design practices. The goal is to get as much feedback on code and build quality as early as possible.
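As a sketch of how test-first works (the function and test names here are illustrative, not from any particular project), the unit test below is written before the production code it exercises, and the code is then written to make it pass:

```python
import unittest

def apply_discount(price, percent):
    """Production code, written to satisfy the tests below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # In test-first style, these tests existed before apply_discount.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
unittest.TextTestRunner(verbosity=2).run(suite)
```

Running such tests on every change is what provides the early feedback on code and build quality described above.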

The key challenges for a tester on an agile project are:
• No traditional-style business requirements or functional specification documents. Instead there are small documents (story cards developed from the 4×4 inch cards) which detail only one feature; any additional details about the feature are captured via collaborative meetings and discussions.
• You will be testing as early as practical and continuously throughout the lifecycle, so expect that the code won’t be complete and is probably still being written.
• Your acceptance test cases are part of the requirements analysis process, as you are developing them before the software is built.
• The development team has a responsibility to create automated unit tests which can be run against the code every time a build is performed.
• With multiple code deliveries during an iteration, your regression testing requirements increase significantly, and without test automation support your ability to maintain a consistent level of regression coverage will significantly decrease.
The role of a tester in an Agile project requires a wider variety of skills:
• Domain knowledge about the system under test
• The ability to understand the technology being used
• A level of technical competency to be able to interact effectively with the development team