Friday, November 9, 2007

Manual Testing: Test Cases

A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test.

Test Case Design Techniques
The preceding section of this paper has provided a "recipe" for developing a unit test specification as a set of individual test cases. In this section a range of techniques which can be used to help define test cases are described. Test case design techniques can be broadly split into two main categories. Black box techniques use the interface to a unit and a description of its functionality, but do not need to know how the inside of the unit is built. White box techniques make use of information about how the inside of a unit works. There are also some techniques which do not fit into either of the above categories; error guessing falls into this group.

These three techniques are based on input and output data, and an expectation of the system’s behavior.
Boundary Value Analysis (BVA) examines those data elements that can take a continuous range of values, using the requirements and design to predict boundaries where the system’s behavior changes.
The idea is to produce three values – one on the boundary itself, and the other two on either side of it (as close as quantization permits). If the boundary is between valid and invalid ranges, the test case that uses the invalid value will be a negative test – for instance, using 66 in an age field that only accepts values from 18-65.
Equivalence Class Partitioning (ECP) looks at the range between the boundaries. Each member of a given equivalence class should, in the context of a known test, make the system do the same thing – so the tester does not have to test every value in an equivalence class. Ranges of invalid input data can be seen as negative tests – for instance, an age field may be expected to reject all negative numbers in the same way.

ECP is commonly extended to include sets of non-continuous values, rather than ranges of continuous values. Be aware that some inputs may look equivalent, but may actually show very different behavior. For example, the input to a simple web form may be rejected if empty or too long, but the correct combination of control characters may compromise the security of the underlying web server.

1) Boundary Value Analysis
In this method the tester concentrates on the boundaries of the input values.
Ex: Consider an edit field which accepts values between 18-35.
Here Min=18 and Max=35; we need to consider the values around these range boundaries.

Test cases: Min=Pass, Min+1=Pass, Min-1=Fail, Max=Pass, Max-1=Pass, Max+1=Fail
According to the BVA method the test inputs are:
17, 18, 19, 34, 35 & 36 (17 and 36 are the invalid inputs, giving negative tests).
In general, if the range is 'a' to 'b',
then the test inputs are: a-1, a, a+1, b-1, b & b+1.
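The rule above can be sketched as a small helper; the function name `bva_values` is just an illustration, not part of any standard library.

```python
def bva_values(min_val, max_val):
    """Return the six boundary value analysis test inputs for a range.

    For a field accepting values from min_val to max_val, BVA suggests
    testing each boundary plus the values immediately on either side:
    a-1, a, a+1, b-1, b, b+1.
    """
    return [min_val - 1, min_val, min_val + 1,
            max_val - 1, max_val, max_val + 1]

# For the edit field accepting 18-35:
print(bva_values(18, 35))  # [17, 18, 19, 34, 35, 36]
```

The first and last values (17 and 36) fall outside the valid range, so they form the negative test cases.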
2) Equivalence Partitioning
In this method the given input range is divided into a number of equivalence classes.
From each equivalence class one input value is chosen for testing.
Ex: Consider the above edit field which accepts input values between 18-35.
From the given range of input values we can form 3 equivalence classes:
i) Less than 18.
ii) Between 18-35.
iii) Greater than 35.
Take one value from each of the three equivalence classes and test them.
Ex: 6, 25, 100
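A minimal sketch of this idea, assuming a hypothetical validator `accepts_age` for the 18-35 field: one representative stands in for every member of its class, because all members are expected to behave the same way.

```python
def accepts_age(value):
    """Hypothetical validator for an edit field that accepts 18-35."""
    return 18 <= value <= 35

# One representative per equivalence class (6, 25, 100 from the example).
classes = {"less than 18": 6, "between 18-35": 25, "greater than 35": 100}
for name, representative in classes.items():
    print(name, "->", accepts_age(representative))
```

Only the middle class should be accepted; the other two representatives are negative tests.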
3) Error Guessing
This depends on the tester's experience, domain knowledge, and common sense.
Ex: For the same edit field above:
i) 23+, -34, 5*, 00, blank, 5o (letter o, not zero), etc.
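These guessed inputs attack the raw text a user might type rather than clean numeric values. A rough sketch, again assuming a hypothetical text-level validator for the same field:

```python
def accepts_age_text(text):
    """Hypothetical validator for the raw text typed into the age field."""
    try:
        value = int(text.strip())
    except ValueError:
        return False  # non-numeric input such as "23+", "", "5o"
    return 18 <= value <= 35

# Inputs a tester might guess from experience: trailing operators,
# negatives, leading zeros, blanks, and look-alike characters (o vs 0).
guesses = ["23+", "-34", "5*", "00", "", "5o"]
for g in guesses:
    print(repr(g), "->", accepts_age_text(g))
```

Every one of these guesses should be rejected; any that slips through reveals a gap in the field's input handling.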

Test case templates are blank documents that describe inputs, actions, or events, and their expected results, in order to determine whether a feature of an application is working correctly.
Test case templates contain all particulars of test cases like:
Test ID
Description
Expected Results
Actual Results

Where:
· Test ID is a unique identifier for the test case. The unique identifier should relate back to the particular requirement the test case is verifying. For example, if your naming scheme for requirements is numbers, test cases for requirement 3 could have test IDs 3.1, 3.2, etc. Acceptance test cases must end the Test ID with a *.
· Description should clearly document the steps that need to be done in order to run the test case. Write the description specifically, such that any team member can run the test case, even if the author of the test case is not present.
· Expected results is a statement of what should happen when the test case is run.
· Actual results are an indication of whether the test case is currently passing or failing when it is run. The actual results could be recorded simply as “Pass” or “Fail.” However, it is also helpful to describe what happened in cases where a test case fails.

A test case consists of the following fields:
1. TC name
2. TC number (ID)
3. Requirement Id or Use Case scenario (main success scenario, flow etc.)
4. Type of Testing. (Positive or Negative)
5. Objectives
6. Initial conditions or preconditions
7. Valid or invalid conditions (use the word "Verify" for valid conditions and "Attempt to" for test cases with invalid data; this helps simplify verification and maintenance)
8. Input data
9. Test steps
10. Expected result
11. Comments (Bug ID if the test case fails during execution)
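The field list above can be sketched as a simple record type; the class and field names below are illustrative, not a standard format.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of a test case template, mirroring the fields listed above."""
    tc_id: str                     # unique ID, traceable to a requirement
    name: str
    requirement_id: str
    test_type: str                 # "Positive" or "Negative"
    objective: str
    preconditions: str
    input_data: str
    steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""        # "Pass" / "Fail" once executed
    comments: str = ""             # bug ID if the test case fails

# A test case for requirement 3, using the ID scheme described earlier (3.1):
tc = TestCase(
    tc_id="3.1",
    name="Age field accepts lower boundary",
    requirement_id="3",
    test_type="Positive",
    objective="Verify the age field accepts the minimum valid value",
    preconditions="Registration form is open",
    input_data="18",
    steps=["Enter 18 in the age field", "Submit the form"],
    expected_result="Form is accepted",
)
print(tc.tc_id, "-", tc.name)
```

Keeping `actual_result` and `comments` empty until execution matches the usual workflow: the template is written first, and pass/fail status and bug IDs are filled in when the test is run.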
Points to remember:
Every requirement must have a minimum of one test case. Considering equivalence class partitioning, boundary value analysis, and diabolical test cases, it is likely that each requirement should have several test cases.


References for more on Test Cases and Templates:
http://www.kaner.com/pdfs/GoodTest.pdf
http://www.rexblackconsulting.com/publications/BasicTemplates/Test%20Case%20Templates.xls
http://www.stickyminds.com/testandevaluation.asp?ObjectType=tem&function=Search&newtopcat=SWTST
