Thursday, November 8, 2007

Manual Testing

What is testing?
Testing is the process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet-undiscovered error; a successful test is one that uncovers such an error.

Why testing?
The development of software systems involves a series of production activities in which the opportunities for injecting human error are enormous. Errors may begin to occur at the very inception of the process, where the requirements may be erroneously or imperfectly specified. Because of the human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity.

Testing is the core competence of any testing organization. A tester's typical responsibilities are:

  • Understand the Application Under Test
  • Understand the Requirements (Business & Functional)
  • Prepare Test Strategy and Test Plan
  • Develop Test Scenarios and Test Cases
  • Understand the data involved (prepare sample test data)
  • Execute all assigned Test cases
  • Automate test scripts (if required) using Automation Tool
  • Record defects in the defect tracking system (e.g., Bugzilla, TestDirector)
  • Retest defects after fixes are delivered
  • Assist the test leader with his/her duties
  • Perform various types of testing, such as Functional, System (End-to-End), Performance and Security testing
  • Use SQL for database testing
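To make the "execute test cases" and "automate test scripts" responsibilities above concrete, here is a minimal sketch using Python's standard unittest framework. The `login` function and its rules are hypothetical stand-ins for the application under test, not a real API:

```python
import unittest

def login(username, password):
    # Hypothetical application code under test: accepts one known account.
    return username == "admin" and password == "secret"

class LoginTests(unittest.TestCase):
    # Each method corresponds to one manually designed test case.
    def test_valid_credentials(self):
        self.assertTrue(login("admin", "secret"))

    def test_wrong_password(self):
        self.assertFalse(login("admin", "wrong"))

    def test_blank_username(self):
        self.assertFalse(login("", "secret"))
```

The suite can be run with `python -m unittest`; any failure would then be recorded as a defect in the tracking system.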
Software Test Life Cycle:
Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle (SDLC).
Design Analysis: During the design phase, testers work with developers to determine which aspects of a design are testable and under what parameters those tests will work.
Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.
Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Retesting the Defects: Re-test fixed defects to confirm the fixes, and perform regression testing to ensure the fixes have not broken existing functionality.
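A test case produced in the Test Development phase typically records an ID, preconditions, steps, and an expected result, which the Test Execution phase then turns into a Pass/Fail status. A minimal sketch follows; the field names are illustrative, not a fixed industry standard:

```python
# Illustrative structure for a written test case; the fields are an
# assumption for this sketch, not a formal standard.
test_case = {
    "id": "TC-001",
    "title": "Login succeeds with valid credentials",
    "preconditions": "User 'admin' exists and is active",
    "steps": [
        "Open the login page",
        "Enter username 'admin' and password 'secret'",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "status": "Not Run",  # Updated during Test Execution
}

def execute(case, actual_matches_expected):
    # During Test Execution the tester compares actual vs. expected
    # behavior and records Pass or Fail for the reporting phase.
    case["status"] = "Pass" if actual_matches_expected else "Fail"
    return case["status"]
```

The recorded statuses across all cases feed the metrics generated in the Test Reporting phase.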

Software Testing Methodologies
These are some commonly used software testing methodologies:
Waterfall model
V model
Spiral model
RUP
Agile model
RAD
The waterfall model adopts a 'top down' approach regardless of whether it is being used for software development or testing. The basic steps involved in this software testing methodology are:
Requirement analysis
Test case design
Test case implementation
Testing, debugging and validating the code or product
Deployment and maintenance
In this methodology, you move on to the next step only after you have completed the present step. There is no scope for jumping backward or forward, or for performing two steps simultaneously; the model follows a non-iterative approach. Its main benefit is its simple, systematic and orthodox approach. However, it has a serious shortcoming: bugs and errors in the code are not discovered until the testing stage is reached, which can lead to wasted time, money and valuable resources.
The V model gets its name from the fact that the graphical representation of the different test process activities involved in this methodology resembles the letter 'V'. The basic steps involved in this methodology are more or less the same as those in the waterfall model. However, this model follows both a 'top-down' as well as a 'bottom-up' approach (you can visualize them forming the letter 'V'). The benefit of this methodology is that in this case, both the development and testing activities go hand-in-hand. For example, as the development team goes about its requirement analysis activities, the testing team simultaneously begins with its acceptance testing activities. By following this approach, time delays are minimized and optimum utilization of resources is assured.
Spiral Model
As the name implies, the spiral model follows an approach in which there are a number of cycles (or spirals) of all the sequential steps of the waterfall model. Once the initial cycle is completed, a thorough analysis and review of the achieved product or output is performed. If it is not as per the specified requirements or expected standards, a second cycle follows, and so on. This methodology follows an iterative approach and is generally suited for very large projects having complex and constantly changing requirements.
Rational Unified Process (RUP)
The RUP methodology is similar to the spiral model in that the entire process is broken up into multiple cycles. Each cycle consists of four phases: inception, elaboration, construction and transition. At the end of each cycle, the product or output is reviewed, and a further cycle (made up of the same four phases) follows if necessary. Today, certain organizations and companies adopt a slightly modified version of RUP, which goes by the name of Enterprise Unified Process (EUP).
Agile Model
This methodology follows neither a purely sequential approach nor a purely iterative one; it is a selective mix of both, along with quite a few newer development methods. Fast, incremental development is one of its key principles. The focus is on obtaining quick, practical and visible outputs and results, rather than merely following theoretical processes. Continuous customer interaction and participation is an integral part of the entire development process.
Rapid Application Development (RAD)
The name says it all. In this case, the methodology adopts a rapid development approach by using the principle of component-based construction. After understanding the various requirements, a rapid prototype is prepared and is then compared with the expected set of output conditions and standards. Necessary changes and modifications are made after joint discussions with the customer or the development team (in the context of software testing). Though this approach does have its share of advantages, it can be unsuitable if the project is large, complex and extremely dynamic in nature, with constantly changing requirements.

V-Model: Verification and Validation
Software testing is too important to leave to the end of the project, and the V-Model of testing incorporates testing into the entire software development life cycle.

In a diagram of the V-Model, the V proceeds down and then up, from left to right, depicting the basic sequence of development and testing activities. The model highlights the existence of different levels of testing and depicts the way each relates to a different development phase.

This model does have a number of good points, such as:

• It defines tangible phases of the process, and proposes a logical sequence in which these phases should be approached.

• It also defines logical relationships between the phases.

• It demands that testing documentation be written as soon as possible; for example, the integration tests are written when the high-level design is finished, and the unit tests are written when the detailed specifications are finished.

• It gives equal weight to development and testing.

• It provides a simple, easy-to-follow map of the software development process.
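The point about writing tests from the specification can be made concrete: once the detailed specification of a function is finished, its unit test can be written immediately, even before the implementation exists. A small sketch, using a made-up specification (orders of 100 or more get a 10% discount):

```python
# Made-up detailed specification: order totals of 100 or more
# receive a 10% discount; smaller totals are unchanged.
def discounted_total(amount):
    return amount * 0.9 if amount >= 100 else amount

# Unit test written directly from the specification, as the V-Model
# recommends. The boundary value (100) gets an explicit check.
def test_discounted_total():
    assert discounted_total(50) == 50        # below threshold: no discount
    assert discounted_total(100) == 90.0     # at threshold: 10% off
    assert discounted_total(200) == 180.0
```

If the test is written first, it serves as an executable restatement of the specification against which the later implementation is verified.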

For more information, refer to: http://en.wikipedia.org/wiki/V-Modell

Understanding Software Defects
The 13 major categories of software defects are described below:
User interface errors – the system provides something different from the specified interface.
Error handling – the way the errors are recognized and treated may be in error.
Boundary-related errors – the treatment of values at the edges of their ranges may be incorrect.
Calculation errors – arithmetic and logic calculations may be incorrect.
Initial and later states – the function fails the first time it is used but not later, or vice versa.
Control flow errors – the choice of what is done next is not appropriate for the current state.
Errors in handling or interpreting data – passing and converting data between systems (and even separate components of the system) may introduce errors.
Race conditions – when two events could be processed, one is usually accepted before the other and things work fine; however, eventually the other event may be processed first, producing unexpected or incorrect results.
Load conditions – as the system is pushed to maximum limits problems start to occur, e.g. arrays overflow, disks full.
Hardware – interfacing with devices may not operate correctly under certain conditions, e.g. device unavailable.
Source and version control - out-of-date programs may be used where correct revisions are available.
Documentation – the system's observed behavior does not match the operation described in the manuals.
Testing errors – the tester makes mistakes during testing and thinks the system is behaving incorrectly.
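Boundary-related errors in particular are easy to demonstrate. In the contrived sketch below, the buggy function uses `>` where the (made-up) specification calls for `>=`, so it fails exactly at the edge of the range:

```python
# Contrived specification: ages 18 and above count as adult.
def is_adult_buggy(age):
    return age > 18    # boundary defect: '>' should be '>='

def is_adult(age):
    return age >= 18   # correct treatment of the boundary value

# Boundary-value analysis tests just below, at, and just above the edge.
assert is_adult(17) is False
assert is_adult(18) is True      # the buggy version fails exactly here
assert is_adult(19) is True
```

A test suite that only probes values well inside the range (say, 10 and 30) would pass both versions and miss the defect, which is why boundary values deserve their own test cases.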
