Friday, November 9, 2007

Manual Testing - Test Plan, Testing Types, Bug Report

Manual testing is the oldest and most rigorous type of software testing. Manual testing requires a tester to perform manual test operations on the test software without the help of test automation. Manual testing is a laborious activity that requires the tester to possess a certain set of qualities: to be patient, observant, speculative, creative, innovative, open-minded, resourceful, unopinionated, and skillful.
As a tester, it is always advisable to use manual white-box and black-box testing techniques on the test software. Manual testing helps discover and record any software bugs or discrepancies related to the functionality of the product.

Manual testing can, to an extent, be replaced by test automation: it is possible to record and play back manual steps and to write automated test scripts using test automation tools. However, test automation tools only help execute test scripts written to exercise a particular specification and functionality; they lack the ability to make decisions or to record any unscripted discrepancies during program execution. It is recommended to perform manual testing of the entire product at least a couple of times before deciding to automate the more mundane activities of the product.
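For illustration, here is a minimal sketch of the kind of scripted check a test automation tool ends up executing. It assumes Selenium WebDriver with a Chrome driver is installed; the URL, element IDs, and credentials are hypothetical placeholders. Note that the script only verifies what it was explicitly told to verify, which is exactly the limitation described above.

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_succeeds():
    # Hypothetical application under test; real IDs would come from the spec.
    driver = webdriver.Chrome()
    try:
        driver.get("http://example.com/login")
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_pass")
        driver.find_element(By.ID, "ok-button").click()
        # Only the scripted expectation is checked; an unexpected layout
        # glitch elsewhere on the page would go unnoticed.
        assert "Welcome" in driver.title
    finally:
        driver.quit()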

Manual testing helps discover defects related to usability and GUI testing. While performing manual tests, the software application can be checked against the various standards defined for effective and efficient usage and accessibility. For example, the standard location of the OK button on a screen is on the left and that of the CANCEL button is on the right. During manual testing you might discover that on some screen it is not. This is a new defect related to the usability of the screen. In addition, there could be many cases where the GUI is not displayed correctly even though the basic functionality of the program is correct. Such bugs are not detectable using test automation tools.
Repetitive manual testing can be difficult to perform on large software applications or applications having very large dataset coverage. This drawback is compensated for by using manual black-box testing techniques, including equivalence partitioning and boundary value analysis, which divide the vast dataset specifications into a more manageable and achievable set of test suites.
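As a small sketch of how those two techniques cut down the data, assume a hypothetical input field that accepts ages from 18 to 60 inclusive:

# Equivalence partitioning: one representative value per class
# (below range, in range, above range) instead of every possible age.
LOWER, UPPER = 18, 60
equivalence_values = [LOWER - 5, (LOWER + UPPER) // 2, UPPER + 5]

# Boundary value analysis: the values at and just around each boundary,
# where off-by-one defects tend to hide.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

def is_valid_age(age):
    # The (assumed) rule under test.
    return LOWER <= age <= UPPER

for value in sorted(set(equivalence_values + boundary_values)):
    print(value, "->", "accept" if is_valid_age(value) else "reject")

A handful of values like these gives roughly the same confidence as trying every age one by one.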

A manual tester would typically perform the following steps for manual testing:
1. Understand the functionality of the program
2. Prepare a test environment
3. Execute test case(s) manually
4. Verify the actual result
5. Record the result as Pass or Fail
6. Make a summary report of the Pass and Fail test cases (a minimal sketch follows this list)
7. Publish the report
8. Record any new defects uncovered during the test case execution
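As a minimal sketch of steps 5 through 7, the pass/fail tally and summary report can be as simple as the following; the test case names are hypothetical.

# Record each executed test case as Pass or Fail, then summarize.
results = {
    "TC-001 Login with valid credentials": "Pass",
    "TC-002 Login with wrong password": "Pass",
    "TC-003 OK/CANCEL button placement": "Fail",
}

passed = sum(1 for outcome in results.values() if outcome == "Pass")
failed = len(results) - passed

print(f"Executed: {len(results)}  Passed: {passed}  Failed: {failed}")
for name, outcome in results.items():
    print(f"  {outcome}: {name}")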
There is no complete substitute for manual testing. Manual testing is crucial for testing software applications more thoroughly. Test automation has become a necessity mainly due to shorter deadlines for performing test activities, such as
regression testing, performance testing, and load testing.

What steps are needed to develop and run software tests?
The following are some of the steps to consider:
Obtain requirements, functional design, and internal design specifications and other necessary documents
Obtain budget and schedule requirements
Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
Determine test environment requirements (hardware, software, communications, etc.)
Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
Determine test input data requirements
Identify tasks, those responsible for tasks, and labor requirements
Set schedule estimates, timelines, milestones
Determine input equivalence classes, boundary value analyses, error classes
Prepare test plan document and have needed reviews/approvals
Write test cases
Have needed reviews/inspections/approvals of test cases
Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
Obtain and install software releases
Perform tests
Evaluate and report results
Track problems/bugs and fixes
Retest as needed
Maintain and update test plans, test cases, test environment, and testware through the life cycle
What is verification? Validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. Validation typically involves actual testing and takes place after verifications are completed.

What's a 'test case'?
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
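One way to capture those particulars is as a simple record; this is only a sketch, and the field names and sample values are illustrative rather than any standard template.

from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_data: dict
    steps: list
    expected_result: str

tc = TestCase(
    identifier="TC-001",
    name="Login with valid credentials",
    objective="Verify a registered user can log in",
    conditions_setup="User 'demo_user' exists and is active",
    input_data={"username": "demo_user", "password": "demo_pass"},
    steps=["Open the login page", "Enter the credentials", "Click OK"],
    expected_result="User lands on the home page",
)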

What is a test plan?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product.

Elements of test planning:
Establish objectives for each test phase
Establish schedules for each test activity
Determine the availability of tools, resources
Establish the standards and procedures to be used for planning and conducting the tests and reporting test results
Set the criteria for test completion as well as for the success of each test
The Structured Approach to Testing
Test Planning
Define what to test
Identify Functions to be tested
Test conditions
Manual or Automated
Prioritize to identify Most Important Tests
Record Document References
Test Design
Define how to test
Identify Test Specifications
Build detailed test scripts
Quick Script generation
Documents

Test Execution
Define when to test
Build test execution schedule
Record test results

What is the goal of Software Testing?
Demonstrate That Faults Are Not Present
Find Errors
Ensure That All The Functionality Is Implemented
Ensure The Customer Will Be Able To Get His Work Done

Modes of Testing
Static analysis doesn't involve actual program execution; the code is examined and tested without being executed. Example: reviews.
In dynamic testing, the code is executed. Example: unit testing.

There are four types of testing, with each test process giving you different information:
Unit Testing: Unit testing happens at the development level. When a developer builds a piece of code that delivers a set of functionality, they must test it to make sure it works and that it delivers the required functionality. A developer tests by running the code in their own environment. A piece of code (be it a web page or a function) should never go into a systems integration environment until it has been unit tested.
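A minimal unit test sketch using Python's built-in unittest module is shown below; apply_discount is a hypothetical function standing in for the developer's own code.

import unittest

def apply_discount(price, percent):
    # Code under test (assumed behaviour: reduce price by the given percentage).
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()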

System integration testing (SIT): A systems integration environment is a test environment where code (web pages, classes, databases) is placed to ensure the application as a whole works together. Usually there's more than one developer building an application or site. Each one unit tests their individual functions and pages, and on a regular basis their code is deployed into the SIT environment and tested together. This ensures one developer's code doesn't break the others'. Usually test cases and test scripts are developed based on the functional requirements and tested here, which provides a more integrated view of the application. This is also the environment that mirrors the production environment. Most applications live with other applications in production, so this is the first chance to ensure that the new application/site doesn't break, and isn't broken by, other sites or applications in the same environment.
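As a sketch of the idea, the check below exercises two hypothetical units written separately (a name formatter and a repository standing in for a database layer) to confirm they still behave when combined; both pieces are made up for illustration.

import unittest

def format_username(first, last):
    # Unit A (assumed): normalizes a display name.
    return f"{first.strip().title()} {last.strip().title()}"

class InMemoryUserRepository:
    # Unit B (assumed): stands in for a real database layer.
    def __init__(self):
        self._users = {}

    def save(self, user_id, display_name):
        self._users[user_id] = display_name

    def get(self, user_id):
        return self._users[user_id]

class UserIntegrationTest(unittest.TestCase):
    def test_saved_name_is_normalized(self):
        repo = InMemoryUserRepository()
        repo.save(1, format_username("  alice ", "SMITH"))
        self.assertEqual(repo.get(1), "Alice Smith")

if __name__ == "__main__":
    unittest.main()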
Stress and Performance Testing (S&P): Stress and performance testing is the process of ensuring that the site or application handles well under load, meaning it can support the expected volumes of users and offer an acceptable level of performance for those users. What are the expected user volumes and acceptable performance levels? If the business doesn't know, look at what comparable sites/applications provide and work from there. Generally you don't do much S&P testing until most of the SIT testing is complete.
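A very rough load-test sketch follows: fire a batch of concurrent requests and check that the slowest response stays under a threshold. It assumes the third-party requests package is installed; the URL, user count, and acceptable response time are made-up numbers standing in for the business's real expectations.

import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://example.com/"   # hypothetical application under test
USERS = 20                    # assumed concurrent user count
MAX_SECONDS = 2.0             # assumed acceptable response time

def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return time.perf_counter() - start, response.status_code

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(timed_request, range(USERS)))

durations = [duration for duration, _ in results]
print(f"requests: {len(durations)}  slowest: {max(durations):.2f}s")
assert all(code == 200 for _, code in results)
assert max(durations) <= MAX_SECONDS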
User Acceptance Testing (UAT): This is exactly what it sounds like: the actual users test the application to ensure it does what they expect it to do. The test scripts and test cases executed in SIT testing above can be reused, but they must be run by actual users of the system. Once the users test the application/site and approve it, it can be moved into production and go live.
All four of these testing processes need to be completed to ensure your application/site will be a success. It may sound like a lot of extra work, but it leads to a better-quality product the first time around.

What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. The following are items to consider in the tracking process (a minimal record sketch appears below the list):
Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Released for Retest', 'New', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics
Test case name/number/identifier
One-line bug description
Full bug description
Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
Names and/or descriptions of file/data/messages/etc. used in test
File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
Was the bug reproducible?
Tester name, test date, bug reporting date
Name of developer/group/organization the problem is assigned to
Description of problem cause
Description of fix
Code section/file/module/class/method that was fixed
Date of fix
Application version that contains the fix
Tester responsible for retest
Retest date
Retest results
Regression testing requirements
Tester responsible for regression tests
Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
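To make the list above concrete, here is an illustrative sketch of a bug record carrying many of those fields; the field names and sample values are assumptions for illustration, not the schema of any particular tracking tool.

from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str
    status: str
    application: str
    version: str
    module: str
    environment: str
    test_case_id: str
    summary: str
    description: str
    steps_to_reproduce: list
    severity: str
    reproducible: bool
    tester: str
    reported_on: str
    assigned_to: str

bug = BugReport(
    bug_id="BUG-0142",
    status="New",
    application="Order Entry",
    version="2.3.1",
    module="Checkout screen",
    environment="Windows XP, IE 6",
    test_case_id="TC-003",
    summary="CANCEL button appears to the left of OK on the payment screen",
    description="Button order violates the screen layout standard.",
    steps_to_reproduce=["Add an item to the cart", "Proceed to payment"],
    severity="Medium",
    reproducible=True,
    tester="A. Tester",
    reported_on="2007-11-09",
    assigned_to="UI team",
)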
