Monday, January 13, 2014

Software Testing: Performance Testing

Performance testing measures, or determines, the performance of an application or an application component.  Of all non-functional testing, it is probably the most commonly executed type of test.
 
The overall purpose of a performance test is to determine whether the application will remain functionally correct and responsive even at high workloads.
The objectives of a performance test are typically along the lines of:
    * Determine if the application can support the expected workload
    * Find and resolve any bottlenecks
It is very difficult (i.e., time-consuming and expensive) to build and replicate in a test environment an exact simulation of the workload that the application will be expected to process in production.  It is much easier (i.e., quicker and cheaper) to build an approximation of the workload.  Often the 80:20 rule is used to persuade project managers that an approximation makes more sense: 80% of the workload is generated by 20% of the functionality.  Of course, no two applications are the same; in some we can easily achieve 90:10, in others it is more like 70:30.  Careful analysis by the performance tester will help determine the volumetrics for the application and therefore which functions should be included in a performance test.

Using the 80:20 rule is, in essence, a compromise of the testing effort.  While some or most performance issues will be detected, performance issues associated with functionality not included in the performance test could still cause problems on release to production.  Further steps can be taken to minimise this possibility, including:
    * Manually exercising key functions while a performance test is executing
    * Observing and measuring performance, especially database performance, in functional test environments
Once an approximation of the production workload has been determined and agreed, the performance tester works towards building the automation into a workload that can be executed in an orderly and controlled fashion.  The work early on in the performance testing process becomes a good foundation on which to analyse and publish results, ultimately determining if the application can or cannot meet the specified objectives.

Performance tests usually need to be run multiple times as part of a series of test-tune cycles.  Where a performance bottleneck is detected, further tests are run with an ever-increasing amount of tracing, logging or monitoring in place.  When the cause of the problem is identified, a solution is devised and implemented.  The performance test is then re-run to confirm the bottleneck has been removed.
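As a small sketch of how such a workload can be driven and measured, the example below uses Python threads to simulate concurrent virtual users against a stand-in `transaction()` function (here just a short sleep; in a real test it would be an HTTP request or database call) and reports response-time statistics.  All names and numbers are illustrative, not from any specific tool.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one unit of application work (e.g. an HTTP request)."""
    time.sleep(0.01)  # simulate ~10 ms of processing

def run_load_test(users=20, iterations=5):
    """Drive `users` concurrent virtual users, each executing `iterations`
    transactions, and record each transaction's response time."""
    timings = []

    def virtual_user():
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(virtual_user)
    # The `with` block waits for all virtual users to finish.
    return timings

timings = run_load_test()
print(f"transactions executed: {len(timings)}")
print(f"median response:  {statistics.median(timings) * 1000:.1f} ms")
print(f"95th percentile:  {statistics.quantiles(timings, n=20)[-1] * 1000:.1f} ms")
```

Re-running the same script after each tuning change gives the comparable before/after numbers the test-tune cycle depends on.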

Saturday, January 11, 2014

Software Testing: Agile Software Testing

Nowadays software development teams frequently use the term “agile”. Unless you have been trekking in the Andes for the past 5 years, you will no doubt have heard somebody in your organisation talking about “agile” software development, or read about some aspect of “agile” on any number of software development and technology related web sites.


The adoption of agile-based methods has increased significantly.
Agile software development methodologies appeared in the early 1990s, and since then a variety of agile methodologies such as XP, SCRUM, DSDM, FDD and Crystal, to name but a few, have been developed.
The creators of many of these processes came together in 2001 and created the “Agile Manifesto”, which summarised their views on a better way of building software.


Agile software development methodologies have flipped the traditional view on its head: no longer do we wait for a fully built system to be available before higher levels of testing, such as acceptance testing, can be performed.
Testing from the very start of the project, and continuing to test throughout the project lifecycle, is the foundation on which agile testing is built. Every practice, technique or method is focused on this one clear goal.
So what does a tester now need to know and do to work effectively within a team delivering a system using an agile method?
The concept of the whole team being responsible for quality, i.e. “the whole team concept”, and not just the testing team, is a key value of agile methods.
Agile methods require the development team to write unit tests and/or follow Test-First Design practices. The goal here is to get as much feedback on code and build quality as early as possible.

The key challenges for a tester on an agile project are:
• No traditional-style business requirements or functional specification documents. We have small documents (story cards developed from 4×4 inch cards) which detail only one feature. Any additional details about the feature are captured via collaborative meetings and discussions.
• You will be testing as early as practical and continuously throughout the lifecycle, so expect that the code won’t be complete and is probably still being written.
• Your acceptance test cases are part of the requirements analysis process, as you develop them before the software is built.
• The development team has a responsibility to create automated unit tests which can be run against the code every time a build is performed.
• With multiple code deliveries during the iteration, your regression testing requirements have now significantly increased; without test automation support, your ability to maintain a consistent level of regression coverage will significantly decrease.
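To illustrate the kind of automated unit tests a development team might run on every build, here is a minimal sketch using Python's `unittest` module.  The `apply_discount` function is hypothetical, invented for this example, not taken from any specific project.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical production function: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the tests; exit=False so a build script can carry on afterwards.
unittest.main(argv=["build-check"], exit=False, verbosity=0)
```

Wiring a command like `python -m unittest` into the build means every code delivery gets this feedback automatically.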
The role of a tester in an Agile project requires a wider variety of skills:
• Domain knowledge about the system under test
• The ability to understand the technology being used
• A level of technical competency to interact effectively with the development team

Saturday, December 28, 2013

Software Testing: Is it a Bug or Not?

Many times we see something that seems unexpected given our understanding of the product. This usually happens in large projects where you work on only a specific component but, for testing purposes, have to exercise other components' workflows. You may land in a situation of "I see something new... weird... it may or may not be a bug".

What can we do in such situations?

If you think that it's not your component and not your responsibility to test, you are losing an opportunity to help, to explore, to make yourself accountable across teams and, importantly, to understand the complete flow of the product you test. In my view, there are two approaches that will help in this situation.

1) Debug and find the root cause yourself: Ask yourself a few questions, such as:
  • Why do you think it is a bug?
  • What negative versus positive impact does this behavior have on the application?
  • Can you design a scenario that will break things, clearly making this behavior a bug, so it gets proper attention and is resolved later?
  • Investigate the change history in source code for the component showing the new behavior you observed.
  • Can this behavior be reproduced consistently? If yes, note down the steps.
  • Is the behavior the same across all the test environments?
  • If possible, can you provide a comparison between the old and new behavior?
  • If the behavior turns out to be intended functionality (not a bug), what other changes need to be made so that others do not mistake it for a bug?
2) Report and follow up: Find the owner of the component (Dev/Test/PM), report the behavior you observed to them, and give all the necessary details they need to see this behavior. Follow up with them to understand the behavior and investigate together whether or not it is a bug.
In this way you make sure that you understand the system (product) as a whole, and not only your component. It also gives you experience of cross-team collaboration.
If it turns out to be a bug, you win the trust of other teams in your testing ability. Make sure you learn why it was a bug, so that next time the same thing occurs in regression you report it as a bug.
If it turns out not to be a bug, make sure you understand the answer you get from the other team for this behavior. Note it down, as you have learned something new about the product workflow, and list any improvements required from the cross-team to make this behavior not look like a bug.

Happy bug finding, friends!


Thursday, December 26, 2013

Software Testing: Fundamentals



What is Software Testing?

Executing a program or application with the intent of finding bugs in it is called software testing. It is a process of validating and verifying that a software program, application or product meets the business and technical requirements.



Methods and Types
 
Many times we get confused about the difference between testing methods and testing types.
  • Testing Method is a particular form of procedure for accomplishing or approaching testing goals.
  • Testing Type is a category of testing having specific goals or characteristics.



Testing methods

Static Testing: It is a form of software testing where the software is tested without executing the code.
Examples: Code reviews, inspections and software walkthroughs
 
Dynamic Testing: Testing the software by executing it, i.e. giving input values and checking if the output is as expected; this can be done manually or with the use of an automated tool.
Examples: Integration tests, System tests and Acceptance tests
 
White Box testing: It is a method of testing software that tests the internal structures or workings of an application.
Examples: Control flow testing, Data flow testing, Branch testing, Path testing
 
Black Box testing: It is the method of testing software that examines the functionality without knowing the internal structure of the application.
Examples: Equivalence partitioning, Boundary value analysis, Cause effect Graph, Error Guessing
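As a minimal sketch of two of these black-box techniques, the example below applies equivalence partitioning and boundary value analysis to a hypothetical `is_eligible` function that accepts ages 18 to 65 inclusive; the function, names and ranges are all invented for illustration.

```python
def is_eligible(age):
    """Hypothetical function under test: accepts ages 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below range (invalid)": (10, False),
    "within range (valid)":  (40, True),
    "above range (invalid)": (80, False),
}

# Boundary value analysis: values at, and on either side of, each boundary.
boundaries = {
    17: False, 18: True, 19: True,   # lower boundary
    64: True, 65: True, 66: False,   # upper boundary
}

for name, (value, expected) in partitions.items():
    assert is_eligible(value) == expected, name

for value, expected in boundaries.items():
    assert is_eligible(value) == expected, value

print("all partition and boundary cases passed")
```

Note that the technique derives the test values purely from the specified range; no knowledge of the function's internals is needed, which is what makes it black-box.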


Testing Types

  • Functional testing :  Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered.
  • Regression testing: The intent of regression testing is to ensure that changes such as enhancements, patches or configuration changes have not introduced new faults.
  • Performance testing: Testing the performance of software, such as response time, resource usage and stability
  • Security testing: Testing the software to verify it protects data and maintains functionality as intended
  • Compatibility testing: It tests whether the application or the software product built is compatible with the hardware, operating system, database or other system software
  • Smoke and sanity testing: Tests run against a system to determine whether it is ready for more robust testing
  • Accessibility testing: Test with reference to users with disabilities that affect how they use the software.
  • Acceptance testing:  Test conducted to determine if the requirements of a software are met as per the specifications provided.


Testing levels

  • Unit testing: Test the individual units of source code.
  • Integration testing: Individual software modules are combined and tested as a group.
  • System testing: Testing the software as a whole (complete integrated system).
  • Acceptance testing: Tests conducted to determine if the requirements of the software are met.
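To illustrate the difference between the unit and integration levels, here is a small sketch with two hypothetical functions: each is first checked in isolation (unit level), then combined and checked as a group (integration level). Both functions are invented for this example.

```python
def parse_order(line):
    """Hypothetical unit 1: parse 'item,quantity,unit_price' into a dict."""
    item, qty, price = line.split(",")
    return {"item": item, "quantity": int(qty), "unit_price": float(price)}

def order_total(order):
    """Hypothetical unit 2: compute the total cost of a parsed order."""
    return order["quantity"] * order["unit_price"]

# Unit level: each function is tested in isolation.
assert parse_order("widget,3,2.50")["quantity"] == 3
assert order_total({"item": "widget", "quantity": 3, "unit_price": 2.5}) == 7.5

# Integration level: the two units are combined and tested as a group,
# catching mismatches (e.g. field names) that unit tests alone would miss.
assert order_total(parse_order("widget,3,2.50")) == 7.5

print("unit and integration checks passed")
```

A defect such as `parse_order` emitting a `"price"` key while `order_total` expects `"unit_price"` would pass both unit tests yet fail the integration check, which is exactly why both levels exist.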
  
Communication:
 As test team members we think from all perspectives when designing test cases; the tests are executed with the intent of finding bugs. If we find a bug (which means a good test), we report it to the developer and the manager. But at the same time we need to be very careful about how we react to, and report, defects and failures to the developers. We are pleased because we found a good bug, but how will the requirements analyst, the designer, the developer, the project manager and the customer react? Hence polite, effective communication is very necessary.

With regards, Vishal and friends