Monday, December 3, 2007

Bug Triage Meeting – Severity & Priority

Bug Triage Meetings (sometimes called Bug Councils) are project meetings in which open bugs are divided into categories. The most important distinction is between bugs that will not be fixed in this release and those that will be.

As in medical triage, software bugs fall into three categories: bugs to fix now, bugs to fix later, and bugs we'll never fix.

Triaging a bug involves:
Making sure the bug has enough information for the developers and makes sense
Making sure the bug is filed in the correct place
Making sure the bug has sensible "Severity" and "Priority" fields

Let us see what Priority and Severity mean:
Priority is Business;
Severity is Technical

In triage, the team assigns the priority of a fix from the business perspective, asking "How important is it to the business that we fix this bug?" Most of the time a high-severity bug becomes a high-priority bug, but not always: there are cases where high-severity bugs get low priority and low-severity bugs get high priority.

In most of the projects I have worked on, as the schedule drew closer to release, a bug could be given low priority even when its severity was high from a technical perspective, because the functionality affected by the bug was not critical to the business.

Priority and Severity provide excellent metrics for the overall health of a project. Severity is customer-focused while priority is business-focused. Assigning severity to a bug is straightforward: testers assign it using general guidelines for the project. Assigning priority is much more of a juggling act. The bug's severity is one factor; other considerations include how much time is left in the schedule, who is available for the fix, how important the fix is to the business, the impact of the bug, its probability of occurrence, and the degree of its side effects.

Many organizations mandate that bugs of a certain severity must be at least a certain priority. For example: crashes must be P1; data loss must be P1. Even so, a severe bug that crashed the system only once and is not reliably reproducible may not be P1, whereas an error condition that forces every user to re-enter a portion of their input will be P1. (A small sketch of such a mandate-style rule appears after the scales below.)

Microsoft uses a four-point scale to describe the severity of bugs and a three-point scale for the priority of a bug. They are as follows:

Severity
---------------
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
3. Bug causes minor functionality problems; may affect "fit and finish".
4. Bug contains typos, unclear wording or error messages in low visibility fields.

Priority
---------------
1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time; somewhat trivial. May be postponed.
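
As a small illustration of such a mandate, here is a minimal sketch in C (my own example, not from the post) of a triage rule that forces severity-1 bugs to P1 while leaving everything else to the business call:

    #include <stdio.h>

    /* Severity: 1 = crash/data loss ... 4 = typo.  Priority: 1 = must fix ... 3 = fix if time. */
    static int apply_triage_policy(int severity, int proposed_priority)
    {
        if (severity == 1)            /* "crashes must be P1, data loss must be P1" */
            return 1;
        return proposed_priority;     /* otherwise the business decision stands */
    }

    int main(void)
    {
        printf("Sev 1, proposed P3 -> P%d\n", apply_triage_policy(1, 3)); /* forced to P1 */
        printf("Sev 3, proposed P2 -> P%d\n", apply_triage_policy(3, 2)); /* stays P2 */
        return 0;
    }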

Wednesday, November 28, 2007

Localization Testing & Internationalization Testing

Globalization
Globalization is the process of developing, manufacturing and marketing software products that are intended for worldwide distribution. An important feature of these products is that they support multiple languages and locales. In a globalized product, code is separated from the messages or text that it uses. This enables the software to be used with different languages without having to rebuild the complete software. Globalization is achieved through internationalization and localization. It is important to understand locale, as supporting different locales is more than supporting different languages. In addition to language and geographical location, a locale incorporates cultural information such as time, date, font and currency conventions. Differences in spelling, currency and other conventions make testing with different locales necessary.

Internationalization
In I18N testing, the first step is to identify all the textual information in the system. This includes all the text present on the application's GUI and any text or messages the application produces, including error messages, warnings and help/documentation. The main focus of I18N testing is not to find functional defects but to make sure that the product is ready for the global market. As in other non-functional testing, it is assumed that functional testing has been completed and all functionality-related defects have been identified and removed.

I18N testing can be divided into two parts. The first is to make sure that the application's GUI or functionality will not be broken by the translated text. The second is to make sure that all the strings have been translated properly; this activity is called Translation Verification Testing and is normally conducted by a person who knows the language very well.

To make sure that the application's functionality or GUI will not be broken after translation, a popular technique known as pseudo-translation is used. In pseudo-translation, instead of translating a string completely, it is translated in a pseudo manner. For example, an externalized string "Bad Command" can be translated into Japanese as [JA XXXXX Bad Command XXXXXX JA]. Now if the product is launched with the locale set to Japanese, it should show the externalized string as given above instead of "Bad Command". There are utilities to do this job for you, i.e. to pseudo-translate all the externalized strings of your application. During pseudo-translation you need to make sure that you are doing it roughly according to the rules; for example, the width is normally expanded by up to forty percent for pseudo-translated strings as compared to English.

As stated above, in I18N testing the focus is not on functionality but on translation and locale-related issues. Once all the externalized strings are pseudo-translated, you need to make sure that you have test cases for every message or text element present in the system. Once that is done, the same set of test cases can be executed on the properly translated build to make sure that the translation is proper.
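
To make the pseudo-translation idea concrete, here is a minimal C sketch (my own illustration, not from the post; the marker format and the 40% padding rule follow the description above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns a newly allocated pseudo-translated copy of msg, e.g.
     * "Bad Command" -> "[JA XXX Bad Command XXX JA]".  Caller frees the result. */
    char *pseudo_translate(const char *msg, const char *locale)
    {
        size_t pad = strlen(msg) * 40 / 100 / 2 + 1;   /* ~40% expansion, split over both sides */
        char padding[64];
        size_t len;
        char *out;

        if (pad > sizeof(padding) - 1)
            pad = sizeof(padding) - 1;
        memset(padding, 'X', pad);
        padding[pad] = '\0';

        len = strlen(msg) + 2 * strlen(padding) + 2 * strlen(locale) + 8;
        out = malloc(len);
        snprintf(out, len, "[%s %s %s %s %s]", locale, padding, msg, padding, locale);
        return out;
    }

    int main(void)
    {
        char *s = pseudo_translate("Bad Command", "JA");
        puts(s);            /* e.g. [JA XXX Bad Command XXX JA] */
        free(s);
        return 0;
    }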

Localization Testing (L10N or l10n)

Localization testing typically follows internationalization testing, which verifies the product's readiness for localization and its international use in the required culture or locale. As part of localization testing, User Interface (UI) testing concentrates on the appearance of the product, ensuring that the product looks good to the end user. We perform a series of UI checks on every dialog in the software. This series includes steps such as:
All User Interface is localized
List box and Combo box items are localized and are not truncated.
No duplicate items on list box/combo box.
Any available drop-down menus are localized.
The User Interface format and design is comparable to the source version.
No duplicate hotkeys exist.
No items are truncated.
No items overlap.
When a control gets focus, it does not truncate the text.
Alternative texts (tool-tips) are localized.
Additional UI testing steps check the localized product's compliance with the relevant national language standards, such as format consistency, capitalization, alphabet, currency format, etc. Localization testing also involves a number of functional testing passes to verify that no functionality issues have been introduced into the product during the localization process. The basic functionality tests include setup and upgrade tests run in the localized environment. Complete functional testing as a standalone service verifies the product's behavior in both the source and localized versions and environments.
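
As a small illustration of why those locale checks matter, here is a minimal C sketch (my own example, not from the post; the locale names depend on what is installed on the test machine) showing how the same date and decimal-point convention change per locale:

    #include <stdio.h>
    #include <time.h>
    #include <locale.h>

    static void show(const char *locale_name)
    {
        char date[32];
        time_t now = time(NULL);
        struct lconv *lc;

        if (setlocale(LC_ALL, locale_name) == NULL) {      /* the locale must be installed on the OS */
            printf("%-12s (not available on this system)\n", locale_name);
            return;
        }
        strftime(date, sizeof(date), "%x", localtime(&now));  /* the locale's own date format */
        lc = localeconv();
        printf("%-12s date=%-10s decimal point='%s'\n", locale_name, date, lc->decimal_point);
    }

    int main(void)
    {
        show("en_US.UTF-8");   /* e.g. 12/03/2007, decimal point '.' */
        show("de_DE.UTF-8");   /* e.g. 03.12.2007, decimal point ',' */
        show("ja_JP.UTF-8");
        return 0;
    }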

Internationalization testing (i18n or I18N)
Common i18n issues:
Adding a New Character Set or Language
8 bit clean (many encodings need to use the 8th bit for non-ASCII characters)
1 character = 1 byte (some Asian characters are multibyte)
Layout
Locale-sensitive integers (e.g. date/time)
English protocol elements (English retained on-the-wire, but translation on UI)
Issues when reading text into fixed-size buffers (partial characters; see the sketch after this list)
Special encodings of non-ASCII text (e.g. encoding standards for sending email)
Front Ends (FEs – e.g. winfe (Windows), macfe (Mac), xfe (Unix))
Complex Language Support (BiDi, Thai, Indic, etc.)
Non-Latin Layout Styles (Vertical, Ruby, etc)
Platform Independent IME Support
Natural Language Dictionary Lookup
Proofing API (in addition to spell checking)
Import/Export of database formats
Hardcopy outputs
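
As a small illustration of the "1 character = 1 byte" and fixed-size-buffer issues above, here is a minimal C sketch (my own example, not from the post; it assumes the source file is saved as UTF-8):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *text = "価格";    /* 2 Japanese characters, 6 bytes in UTF-8 */
        char buf[5];                  /* room for only 4 bytes of text */

        printf("bytes=%zu (not 2!)\n", strlen(text));

        /* Naive truncation copies 4 bytes and cuts the second character in half,
         * leaving an invalid UTF-8 sequence in buf.  I18N-safe code must truncate
         * on character boundaries, not byte counts. */
        strncpy(buf, text, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        printf("truncated to %zu bytes: last byte 0x%02X is a partial character\n",
               strlen(buf), (unsigned char)buf[3]);
        return 0;
    }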


For more information refer:
http://www.testinggeek.com/

Build Verification Testing (BVT)

Have you ever received a build into testing which was missing files, had the wrong file, or had a file with the incorrect language version? These and other problems associated with the file properties of each new build usually result from an improperly designed build verification test (BVT) suite. The primary purpose of the BVT is to validate the integrity of each new build of a project. Some teams combine the BVT with the Build Acceptance Test (BAT), which is not necessarily a bad approach if the test team owns the BVT. However, I have found that in these situations the test team usually places more emphasis on the BAT, which verifies basic functionality of the build, than on the BVT, which actually validates the build, and thus misses critical problems with the build itself.
Ideally, the person or team that builds the project should be responsible for validating each new build before releasing it to build acceptance testing, but the test team should have oversight into what is being checked. It does no good to run a new build through a BAT process only to discover a missing file, or a German help file in a Japanese language version of the project. At a minimum the BVT should check every file in each new build for:
 Correct version information
 Correct time and date stamps
 Appropriate file flags
 Correct cyclic redundancy check (CRC) keys
 Correct language information
 ISO 9660 file naming conventions based on standards
 Viruses
Changes in the build can help the test team focus on critical areas, or identify areas of the project that should be revisited. Other information a BVT should provide to the test team includes:
 New files added to the project build
 Files removed from the project build
 Files with binary changes
It might also be a good idea to scan file names for public acceptability. For example, I remember one project in which a clever developer decided to call his new library 'sexme.dll.' This file name was released in the shipped product and several customers objected.
Even on large projects with several hundred files, the tests to validate the integrity of a build should not take more than a few minutes using automated processes. The few minutes spent validating each new build are well justified considering the cost of releasing a build into testing and discovering the build is invalid (even though it may have passed the BAT), because then you have to question the validity of the test effort on that build.
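
As a rough illustration of that kind of build-integrity check, here is a minimal C sketch (my own example, not from the article; the manifest contents and file names are made up) that compares a new build's file list against the expected manifest and reports missing and unexpected files:

    #include <stdio.h>
    #include <string.h>

    #define MAX_FILES 16
    #define NAME_LEN  64

    static int contains(char list[][NAME_LEN], int n, const char *name)
    {
        for (int i = 0; i < n; i++)
            if (strcmp(list[i], name) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        /* In a real BVT these lists would come from the build manifest and a
         * directory scan; they are hard-coded here for illustration only. */
        char expected[MAX_FILES][NAME_LEN] = { "app.exe", "help_ja.chm", "core.dll" };
        char actual[MAX_FILES][NAME_LEN]   = { "app.exe", "help_de.chm", "core.dll", "sexme.dll" };
        int n_expected = 3, n_actual = 4;

        for (int i = 0; i < n_expected; i++)
            if (!contains(actual, n_actual, expected[i]))
                printf("MISSING from build: %s\n", expected[i]);   /* e.g. wrong-language help file */

        for (int i = 0; i < n_actual; i++)
            if (!contains(expected, n_expected, actual[i]))
                printf("UNEXPECTED in build: %s\n", actual[i]);    /* new or misnamed file */

        return 0;
    }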

Definition: Here’s the definition we use: “A build acceptance test (sometimes also called a build verification test, a.k.a. BVT, smoke test, quick check, or the like) is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. The build acceptance test is generally a short set of tests which exercises the mainstream functionality of the application. Any build that fails the build verification test is rejected, and testing continues on the previous build (provided there has been at least one build that has passed the acceptance test). So build acceptance tests are a type of regression testing that is done every time a new build is taken. Build acceptance tests are important because they let developers know right away if there is a serious problem with the build, and they save the test team wasted time and frustration.”

In testing, a Build Verification Test (BVT), also known as a Build Acceptance Test, is a set of tests run on each new build of a product to verify that the build is testable before it is released into the hands of the test team. The build acceptance test is generally a short set of tests which exercises the mainstream functionality of the application software. Any build that fails the build verification test is rejected, and testing continues on the previous build (provided there has been at least one build that has passed the acceptance test).
BVT is important because it lets developers know right away if there is a serious problem with the build, and it saves the test team wasted time and frustration by avoiding testing of an unstable build.

Thursday, November 15, 2007

Testing: Automation QTP FAQs

1. What automated testing tools are you familiar with? - WinRunner, LoadRunner, QTP, Silk Performer, TestDirector, Rational Robot, QARun.
2. How did you use automated testing tools in your job? - 1. For regression testing 2. As criteria to decide the condition of a particular build.
3. Describe some problem that you had with an automated testing tool. - WinRunner had problems identifying third-party controls such as Infragistics controls.
4. How do you plan test automation? - 1. Prepare the automation test plan 2. Identify the scenario 3. Record the scenario 4. Enhance the scripts by inserting checkpoints and conditional loops 5. Incorporate an error handler 6. Debug the script 7. Fix the issue 8. Rerun the script and report the result.
5. Can test automation improve test effectiveness? - Yes, automating a test makes the test process: 1. Fast 2. Reliable 3. Repeatable 4. Programmable 5. Reusable 6. Comprehensive.
6. What is data-driven automation? - Testing the functionality with more test cases becomes laborious as the functionality grows. For multiple sets of data (test cases), you can execute the test once and find out for which data it failed and for which data it passed. This feature is available in WinRunner as a data-driven test, where the data can be taken from an Excel sheet or Notepad.
7. What are the main attributes of test automation? - Maintainability: the effort needed to update the test automation suites for each new release. Reliability: the accuracy and repeatability of the test automation. Flexibility: the ease of working with all the different kinds of automation testware. Efficiency: the total cost related to the effort needed for the automation. Portability: the ability of the automated test to run on different environments. Robustness: the effectiveness of automation on an unstable or rapidly changing system. Usability: the extent to which automation can be used by different types of users.

8. Does automation replace manual testing? - There can be some functionality which cannot be tested with an automated tool, so we may have to test it manually. Therefore manual testing can never be replaced. (We can write scripts for negative testing as well, but it is a hectic task.) When we talk about the real environment, we do negative testing manually.
9. How will you choose a tool for test automation? - Choosing a tool depends on many things: 1. The application to be tested 2. The test environment 3. Scope and limitations of the tool 4. Features of the tool 5. Cost of the tool 6. Whether the tool is compatible with your application, which means the tool should be able to interact with your application 7. Ease of use.
10. How will you evaluate a tool for test automation? - We need to concentrate on the features of the tool and how they could benefit our project. The additional new features and enhancements of existing features also help.
11. What are the main benefits of test automation? - Fast, reliable, comprehensive, reusable.
12. What could go wrong with test automation? - 1. The choice of automation tool for certain technologies 2. The wrong set of tests automated.
13. How would you describe testing activities? - Testing activities start from the elaboration phase. The various testing activities are preparing the test plan, preparing test cases, executing the test cases, logging the bugs, validating the bugs and taking appropriate action for them, and automating the test cases.
14. What testing activities might you want to automate? - Automate all the high-priority test cases which need to be executed as part of regression testing for each build cycle.
15. Describe common problems of test automation. - The common problems are: 1. Maintenance of old scripts when there is a feature change or enhancement 2. A change in the technology of the application will affect the old scripts.
16. What types of scripting techniques for test automation do you know? - Five types of scripting techniques: Linear, Structured, Shared, Data-Driven, Keyword-Driven.
17. What are the principles of good testing scripts for automation? - 1. Proper coding standards 2. A standard format for defining functions, exception handlers, etc. 3. Comments for functions 4. Proper error-handling mechanisms 5. Appropriate synchronization techniques.
18. What tools are available to support testing during the software development life cycle? - Tools for regression and load/stress testing such as QTP, LoadRunner, Rational Robot, WinRunner, Silk, TestComplete and Astra are available in the market. For defect tracking, Bugzilla and Test Runner are available.
19. Can the activities of test case design be automated? - As I understand it, test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the process of putting the test results into an Excel sheet can be.
20. What are the limitations of automating software testing? - Hard-to-create environments like "out of memory", "invalid input/reply" and "corrupt registry entries" make applications behave poorly, and existing automated tools can't force these conditions - they simply test your application in a "normal" environment.
21. What skills are needed to be a good test automation engineer? - 1. Good logic for programming 2. Analytical skills 3. Pessimistic in nature.
22. How do you find out whether a tool works well with your existing system? - 1. Discuss with the support officials 2. Download the trial version of the tool and evaluate it 3. Get suggestions from people who are working with the tool.
23. Describe some problem that you had with an automated testing tool. - 1. The inability of WinRunner to identify third-party controls like Infragistics controls 2. A change in the location of a table object causing an "object not found" error 3. The inability of WinRunner to execute the script against multiple languages.
24. What are the main attributes of test automation? - Maintainability, reliability, flexibility, efficiency, portability, robustness and usability are the main attributes of test automation.
25. What testing activities might you want to automate in a project? - Testing tools can be used for: sanity tests (repeated on every build), stress/load tests (simulating a large number of users, which is manually impossible) and regression tests (done after every code change).
26. How do you find out whether a tool works well with your existing system? - To find this out, select the suite of tests which are most important for your application. First run them with the automated tool. Next subject the same tests to careful manual testing. If the results coincide, you can say your testing tool has been performing well.
27. How will you test a field that generates auto-numbers in the AUT when we click the "New" button in the application? - One solution is to create a text file in a known location, update the auto-generated value each time we run the test, and compare the currently generated value with the previous one (see the sketch at the end of this FAQ).
28. How will you evaluate the fields in the application under test using an automation tool? - We can use verification points (Rational Robot) to validate the fields, e.g. using the Object Data and Object Data Properties verification points.
29. Can we test a single application at the same time using different tools on the same machine? - No. The testing tools would be unable to determine which browser was opened by which tool.
30. What is the difference between web application testing and client/server testing? State the different types of web application testing and client/server testing. - WinRunner 7.2 is compatible with Internet Explorer, Firefox and Netscape Navigator.
31. What is 'configuration management'? - Configuration management is a process to control and document any changes made during the life of a project. Revision control, change control and release control are important aspects of configuration management.
32. How do you test web applications? - The basic difference in web testing is that we have to test URL coverage and link coverage. Using WinRunner we can conduct web testing, but we have to make sure that the Web Test option is selected in the Add-in Manager. Using WinRunner we cannot test XML objects.
33. What problems are encountered while testing application compatibility on different browsers and different operating systems? - Font issues and alignment issues.
35. How do you proceed with testing when the SRS or any other document is not given? - If an SRS is not available we can perform exploratory testing. In exploratory testing the basic module is executed and, depending on its results, the next plan is executed.

36. How do we test for severe memory leaks? - By using endurance testing. Endurance testing means checking for memory leaks or other problems that may occur with prolonged execution.
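
For question 27 above, here is a minimal C sketch (my own illustration; the state-file name and values are made up) of comparing the auto-generated number against the value stored from the previous run:

    #include <stdio.h>

    int main(void)
    {
        const char *path = "last_auto_number.txt";   /* hypothetical state file kept between runs */
        long previous = 0, current = 1042;           /* `current` would be read from the AUT */
        FILE *f = fopen(path, "r");

        if (f) {
            fscanf(f, "%ld", &previous);
            fclose(f);
        }
        printf(current > previous ? "PASS: %ld > %ld\n" : "FAIL: %ld <= %ld\n",
               current, previous);

        f = fopen(path, "w");                        /* persist the value for the next run */
        if (f) {
            fprintf(f, "%ld\n", current);
            fclose(f);
        }
        return 0;
    }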

Sample Testing CV

Mob :
Narendra Email -


Objective

Seeking a challenging & growth oriented career in Software Testing and Quality Management

Work Summary

X+ years of experience in Testing (change or write accordingly as applicable for you)

· Experience in designing and executing test cases
· Manual testing and automation
· Experience in client/server and web testing
· Good knowledge of OOPs concepts and SDLC
· Quick learner and excellent team player; ability to meet tight deadlines and work under pressure
· Knowledge of QTP and SilkTest
· Around 3 years of professional experience in software development and testing
· Expertise in using the automation testing tools WinRunner, LoadRunner and TestDirector
· Strong in the software development life cycle and test methodology
· Good experience in design and execution of manual and automated test cases and scenarios
· Efficient in writing test scripts using automation tools
· Experience in testing client/server and web-based applications
· Involved in functional, regression, compatibility and system testing
· Experience in developing and executing test scripts using WinRunner
· Experience in performing functional, user interface, regression and system testing
· Experience with web-based applications and client/server applications
· Experience in preparing and executing test cases and test procedures
· Participated in code reviews and requirement analysis
· Exposure to bug tracking tools
· Clear understanding of the software development life cycle
· Reporting bugs in conjunction with the development team
· Self-starter, self-motivated and quick learner
· Versatile team player with good communication and problem-solving skills


Educational Profile (Change Accordingly)

· Pursuing M.Sc (IT) from Manipal University.
· B.C.A from Osmania University with aggregate of 70%.

Technical Exposure

Testing Tools : WinRunner
Test Management Tool : TestDirector
Languages : C, Java, VB, SQL and TSL
Internet Technologies : HTML
Databases : Oracle, MS Access
Operating Systems : Windows 98/NT/2000, Linux

Professional Experience (Change Accordingly)

· Working as a Test Engineer at xxxxxxxx from June 2003 to July 2004.

Nature of Work

Extensively used automated test tool Win Runner for GUI testing and Regression Testing

Projects

#1. Title (Change Accordingly)

Client : Critical Health Care Company, Mumbai
Duration : Feb 2004 to July 2004
Environment : Visual Basic 6.0, Oracle and Windows 98
Tools : Win Runner 7.0
Role : Test Engineer

Description:

This comprehensive system includes modules for:
Admin, Reception, Appointment Scheduling, Reports, Clinical Data Repository and Help.

Responsibilities: (Change Accordingly)
· Prepared and executed test cases
· Extensively involved in functional, system and regression testing
· Developed test scripts using WinRunner and executed them
· Defect tracking and reporting
· Developed and executed automated scripts for the Salary and Personal Profile modules to perform functional and regression testing
· Developed automated scripts for the Work Permit module to perform user interface testing
· Reviewed test cases and test scripts written by team members
· Participated in team reviews



Project Details below

Personal Information Optional (Change Accordingly)

Date of Birth :
Marital Status :
Language Known :
Present Address :




Sunday, November 11, 2007

Stress Control!

Simple modifications in posture, habits, thought, and behavior often go a long way toward reducing feelings of stress and tension. Here are 8 quick and simple things you can do immediately to help keep your stress level under control.
1. Control Your Anger: Watch for the next instance in which you find yourself becoming annoyed or angry at something trivial or unimportant, then practice letting go - make a conscious choice not to become angry or upset. Do not allow yourself to waste thought and energy where it isn't deserved. Effective anger management is a tried-and-true stress reducer.
2. Breathe: Breathe slowly and deeply. Before reacting to the next stressful occurrence, take three deep breaths and release them slowly. If you have a few minutes, try out breathing exercises such as meditation or guided imagery.
3. Slow Down: Whenever you feel overwhelmed by stress, practice speaking more slowly than usual. You'll find that you think more clearly and react more reasonably to stressful situations. Stressed people tend to speak fast and breathlessly; by slowing down your speech you'll also appear less anxious and more in control of any situation.
4. Complete One Simple To-Do: Jump-start an effective time management strategy. Choose one simple thing you have been putting off (e.g. returning a phone call, making a doctor's appointment) and do it immediately. Just taking care of one nagging responsibility can be energizing and can improve your attitude.
5. Get Some Fresh Air: Get outdoors for a brief break. Our grandparents were right about the healing power of fresh air. Don't be deterred by foul weather or a full schedule. Even five minutes on a balcony or terrace can be rejuvenating.
6. Avoid Hunger and Dehydration: Drink plenty of water and eat small, nutritious snacks. Hunger and dehydration, even before you're aware of them, can provoke aggressiveness and exacerbate feelings of anxiety and stress.
7. Do a Quick Posture Check: Hold your head and shoulders upright and avoid stooping or slumping. Bad posture can lead to muscle tension, pain, and increased stress.
8. Recharge at the Day's End: Plan something rewarding for the end of your stressful day, even if only a relaxing bath or half an hour with a good book. Put aside work, housekeeping or family concerns for a brief period before bedtime and allow yourself to fully relax. Don't spend this time planning tomorrow's schedule or doing chores you didn't get around to during the day. Remember that you need time to recharge and energize yourself - you'll be much better prepared to face another stressful day.

11 Ways to Keep Your Cool:
Dale Collie is an author, speaker, former US Army Ranger, CEO, and professor at West Point. His McGraw-Hill book, "Winning Under Fire: Turn Stress into Success the US Army Way," takes strategies from the battlefield into the boardroom and beyond. A Purple Heart recipient, Dale has succeeded in both the Army and the corporate world through his management and leadership strategies.
11 Ways to Keep Your Cool
· Do your own job.
· Get organized.
· Communicate with the boss and others.
· Control interruptions.
· Schedule family time.
· Exercise.
· Eat right.
· Get eight hours sleep a night.
· Let others know what bugs you.
· Learn new things about your job.
· Volunteer to help others.

Do your own job: When the poor work habits of others create stress, remember why you're there. Pay attention to your own job. You will not be rated on the performance of others, but the boss will note the quality of your work. Stay focused on the job you were hired for and let management deal with improving the department or the company. Don't get stressed about things that are not your responsibility.
Organization: Regardless of company expectations, you can alleviate a lot of your stress by organizing your workspace and getting a firm grasp on the work that must be done. Even if you have to pay for it yourself, get the tools needed to organize your effort, such as files, furniture, PDAs, software, and training. Work with your boss to prioritize projects and routine tasks. Only get concerned about unfinished work if the boss gives it a priority. You'll never get everything done, so pick the most important and file everything else in an easy to reach file drawer.
Communication: It's important to maintain your supervisor's comfort level, so meet with them as often as necessary to keep them informed of projects and progress. Give them updates the way they want them (email, memos, briefings, etc.), and persist in getting the feedback that is so important in reducing stress. Use this same strategy with those who give you information or products to do your job and those who depend on what you give them. Good communication is essential for good stress control.
Interruptions: Avoid stressful interruptions by controlling your schedule and your communications. Establish times for meeting with those who want information from you and hold them to it. The more persistent you are, the more organized they will be. Handle phone calls and respond to email during specific times. Develop a list of people and events that disrupt your job and work with each until it is under control.
Family Time: Family situations are among the greatest stressors at work. There's an old axiom that says, "If momma ain't happy, ain't nobody happy." It's true. Avoid future problems by prioritizing family time on your schedule and stick to it. Get professional help if you're unable to resolve sticky situations.
Exercise: More than 80% of all doctor's visits are stress-related. Those who find time to exercise, reduce stress, strengthen their immune system, and improve their well-being are much more effective than those who do not. Do a little research and talk with the experts to find out what fits your needs. Make exercise part of your work schedule if possible; don't let it cut into family time. Regular exercise can add years to your own life and make you more productive for your employer.
Nutrition: Proper nutrition is a key to stress control. The US Army recognizes proper nutrition as a critical element in controlling stress among combat soldiers and you must admit, your job is sometimes as stressful as combat. Get information to improve nutrition. You'll have to make some deliberate changes because our eating habits are affected by our culture, the expectations of others, and inadequate knowledge about what makes a proper diet. Learn what is needed and make a plan.
Rest: Take charge of your sleep habits in the same way you work on your eating habits. Sleep deprivation is a major stressor by itself and it adds to the problem with other stressful events. Cut out the late night television. Quit taking work home from the office. Change the pattern of your weekend parties. Get some new friends. Do whatever is necessary to get back on track with seven or eight hours sleep every night. Studies show that twenty-minute power naps make us more productive, so use part of your lunch break for nutrition and part for a short nap to control stress. You'll get more done.
Discussion: Tell people what's on your mind. If you can't ignore someone's special talent for bugging you, talk it over with him or her. There's a good chance they are unaware of the offense, so you don't need to get up tight about it. In a friendly tone of voice, let them know what gets under your skin and be ready to make some concessions yourself. As you now know, their irritating habit is probably magnified by other stressors, so make sure you've done what you can to control stress before challenging anyone.
Education: The more educated you are about your job, the less stressful it becomes. Even if you've been on the job for years, there's always more to learn about the upstream and downstream impact of what you do. Stay up to date with trade journals, books, and other research. Become the expert at what you do and coach others. While some companies do not pay for this type education, your own investment will make you more valuable to your company. What you know is portable - and it looks good on a resume.
Volunteer: Helping others has an immediate impact on stress levels. Build in some family time by volunteering as a family once a month. Build rapport with supervisors and co-workers by organizing a once-a-week lunchtime volunteer program. Lead a food or clothing collection for needy employees or families outside your company.
Each of these stress relievers works independently of the others. Find one that's practical for you and put it to work. Friends, family, and co-workers will all notice the changes in you and thank you for making the effort.
For more information, go to www.couragebuilders.com.

Performance Testing: LoadRunner FAQs

Interview Questions - Loadrunner
1) What is load testing? - Load testing is testing whether the application works correctly under the load that results from a large number of simultaneous users and transactions, and determining whether it can handle peak usage periods.
2)What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
3) Did you use LoadRunner? What version? - Yes. Version 7.2.
4)Explain the Load testing process? -

Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use Loadrunner’s graphs and reports to analyze the application’s performance.
5) When do you do load and performance testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
6)What are the components of LoadRunner? - The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.
7)What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (Vugen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
8)What Component of LoadRunner would you use to playback the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a Vuser script is executed by a number of Vusers in a group.
9)What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.
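
As a small illustration (my own sketch, not from the original answer; the request and transaction names are made up), a rendezvous point in the Action section of a C Vuser script could look like this:

    Action()
    {
        lr_rendezvous("deposit_cash");              /* block here until the Controller releases the Vusers together */

        lr_start_transaction("deposit");
        web_submit_data("deposit",                  /* hypothetical form submission */
                        "Action=http://bank.example/deposit",
                        "Method=POST",
                        ITEMDATA,
                        "Name=amount", "Value=100", ENDITEM,
                        LAST);
        lr_end_transaction("deposit", LR_AUTO);

        return 0;
    }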
10)What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
11) Explain the recording mode for web Vuser script? - We use Vugen to develop a Vuser script by recording a user performing typical business processes on a client application. Vugen creates the script by recording the activity between the client and the server. For example, in web based applications, Vugen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use Vugen to: Monitor the communication between the application and the server; generate the required function calls; and Insert the generated function calls into a Vuser script.
12) Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.
13) What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which is unique for each run of the script and which is generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application-server specific. Here values are replaced by data created according to these rules. In manual correlation, we scan for the value we want to correlate and use create correlation to correlate it.
14) How do you find out where correlation is required? Give few examples from your projects? - Two ways: First we can scan for correlations, and see the list of values which can be correlated. From this we can pick a value to be correlated. Secondly, we can record two scripts and compare them. We can look up the difference file to see for the values which needed to be correlated. In my project, there was a unique id developed for each customer, it was nothing but Insurance Number, it was generated automatically and it was sequential and this value was unique. I had to correlate this value, in order to avoid errors while running my script. I did using scan for correlation.
15) Where do you set automatic correlation options? - Automatic correlation from web point of view can be set in recording options and correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for database can be done using show output window and scan for correlation and picking the correlate query tab and choose which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value to be created.
16) What is a function to capture dynamic values in the web Vuser script? - Web_reg_save_param function saves dynamic data information to a parameter.
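
A minimal sketch of how this is typically used in a C Vuser script (the boundaries, URLs and parameter name here are hypothetical): web_reg_save_param is registered before the request whose response contains the dynamic value, and the captured value is then replayed in later requests:

    web_reg_save_param("SessionId",
                       "LB=sessionid=",             /* left boundary in the server response */
                       "RB=\"",                     /* right boundary */
                       LAST);
    web_url("login", "URL=http://app.example/login", LAST);

    /* later steps use the captured value instead of the hard-coded recorded one */
    web_url("account",
            "URL=http://app.example/account?sessionid={SessionId}",
            LAST);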
17) When do you disable logging in Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log option: when you select Standard log, it creates a standard log of functions and messages sent during script execution to use for debugging. Disable this option for large load-testing scenarios; when you copy a script to a scenario, logging is automatically disabled. Extended Log option: select Extended log to create an extended log, including warnings and other messages. Disable this option for large load-testing scenarios; when you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the extended log options.
18) How do you debug a LoadRunner script? - Vugen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
19) How do you write user defined functions in LR? Give me few functions you wrote in your previous project? - Before we create the User Defined functions we need to create the external library (DLL) with the function. We add this library to Vugen bin directory. Once the library is added then we assign user defined function as a parameter. The function should have the following format: __declspec (dllexport) char* (char*, char*) Examples of user defined functions are as follows: GetVersion, GetCurrentTime, GetPltform are some of the user defined functions used in my earlier project.
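
As a rough sketch of the format quoted above (my own illustration; the function name is hypothetical), a user-defined function exported from such a DLL could look like this:

    #include <time.h>

    __declspec(dllexport) char *GetTimestamp(char *param1, char *param2)
    {
        static char buf[32];                     /* returned to the Vuser script as the parameter value */
        time_t now = time(NULL);

        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", localtime(&now));
        return buf;
    }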
20) What are the changes you can make in run-time settings? - The run-time settings that we make are: a) Pacing - this has the iteration count. b) Log - under this we have Disable Logging, Standard Log and Extended Log. c) Think Time - in think time we have two options: Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether to define each step as a transaction.
21) Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the Vugen. The navigation for this is Run time settings, Pacing tab, and set number of iterations.

22) How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.
23) What is Ramp Up? How do you set this? - This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp Up, go to ‘Scenario Scheduling Options’.
24) What is the advantage of running the Vuser as thread? - Vugen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
25) If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the “Continue on error” option in Run-Time Settings.
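
A minimal sketch of that idea inside a C Vuser Action section (the {OrderStatus} parameter is hypothetical and would come from an earlier correlation):

    /* stop this Vuser if the server did not confirm the order */
    if (strcmp(lr_eval_string("{OrderStatus}"), "OK") != 0) {
        lr_error_message("Order failed - aborting this Vuser");
        lr_abort();            /* runs the vuser_end section and marks the Vuser "Stopped" */
    }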
26) What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
27) Explain the Configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.
28) How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
29) If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
30) How did you find web server related issues? - Using Web resource monitors we can find the performance of web servers. Using these monitors we can analyze throughput on the web server, number of hits per second that occurred during scenario, the number of http responses per second, the number of downloaded pages per second.
31) How did you find database related issues? - By running the Database monitor with the help of the Database Resource Graph we can find database-related issues. For example, you can specify the resource you want to measure before running the Controller and then you can see database-related issues.
32) What is the difference between Overlay graph and Correlate graph? - Overlay Graph: it overlays the content of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph’s values and the right Y-axis shows the values of the graph that was merged. Correlate Graph: it plots the Y-axis of two graphs against each other. The active graph’s Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph’s Y-axis.
33) How did you plan the Load? What are the Criteria? - Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.
34) What does vuser_init action contain? - Vuser_init action contains procedures to login to a server.
35) What does vuser_end action contain? - Vuser_end section contains log off procedures.

36) What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of the Vugen.
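
A small sketch of how a recorded think time appears in a C Vuser script (the URLs are made up); whether it is replayed, ignored or limited is controlled in the Run-Time Settings:

    web_url("search_results", "URL=http://app.example/results", LAST);
    lr_think_time(8);          /* the recorded user paused 8 seconds to review the data */
    web_url("next_page", "URL=http://app.example/results?page=2", LAST);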
37) What is the difference between standard log and extended log? - The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about: parameter substitution, data returned by the server, and advanced trace.
38) Explain the following functions: - lr_debug_message - The lr_debug_message function sends a debug message to the output log when the specified message class is set. lr_output_message - The lr_output_message function sends notifications to the Controller Output window and the Vuser log file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches the next row from the result set.
39) Throughput - If the throughput scales upward as time progresses and the number of Vusers increase, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
Types of Goals in Goal-Oriented Scenario - Load Runner provides you with five different types of goals in a goal oriented scenario:
The number of concurrent Vusers
The number of hits per second
The number of transactions per second
The number of pages per minute
The transaction response time that you want your scenario to reach
42. Analysis Scenario (Bottlenecks): In Running Vuser graph correlated with the response time graph you can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server. That is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

Automation: Difference between WinRunner and QTP

WinRunner
Summary:
This product is a mature tool that has been around since approximately 1995. It interfaces with most of the leading development toolkits, using the Windows API and toolkit DLLs to interface with the “Application Under Test”.
WinRunner offers a recording feature that will watch the individual tester and generate a test script to simulate the same actions just performed. The script is displayed as a program which can be enhanced with checkpoints, logic and special coding/programming.
WinRunner also has integration with Excel spreadsheets for data driven testing and the ability to write data out in Excel format or in simple text files.
Here is the description from the Mercury (owned by HP) “Features and Benefits” section of the WinRunner web page:
Significantly increase power and flexibility of tests without any programming: The Function Generator presents a quick and error-free way to design tests and enhance scripts without any programming knowledge. Testers can simply point at a GUI object, and WinRunner will examine it, determine its class and suggest an appropriate function to be used.
Use multiple verification types to ensure sound functionality: WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and actual outcomes and identify potential problems with numerous GUI objects and their functionality.
Verify data integrity in your back-end database: Built-in Database Verification confirms values stored in the database and ensures transaction accuracy and the data integrity of records that have been updated, deleted and added.
View, store and verify at a glance every attribute of tested objects: WinRunner's GUI Spy automatically identifies, records and displays the properties of standard GUI objects, ActiveX controls, as well as Java objects and methods. This ensures that every object in the user interface is recognized by the script and can be tested.
Maintain tests and build reusable scripts: The GUI map provides a centralized object repository, allowing testers to verify and modify any tested object. These changes are then automatically propagated to all appropriate scripts, eliminating the need to build new scripts each time the application is modified.
Test multiple environments with a single application: WinRunner supports more than 30 environments, including Web, Java, Visual Basic, etc. In addition, it provides targeted solutions for such leading ERP/CRM applications as SAP, Siebel, PeopleSoft and a number of others.
Simplify creation of test scripts: WinRunner's DataDriver Wizard greatly simplifies the process of preparing test data and scripts. This allows for optimal use of QA resources and results in more thorough testing.
Automatically identify discrepancies in data: WinRunner examines and compares expected and actual results using multiple verifications for text, GUI, bitmaps, URLs, and databases. This ensures stable functionality and execution of business transactions when the application is released into production.
Validate applications across browsers: WinRunner enables the same test to be used to validate applications in Internet Explorer, Netscape, and AOL. This saves testing time and reduces the number of scripts that must be developed and maintained.
Automatically recover tested applications from a crash: Unexpected events, errors, and application crashes during a test run can disrupt the testing process and distort results. WinRunner's Recovery Manager enables unattended recovery and provides a wizard that guides the process of defining a recovery scenario.
Leverage investments in other testing products: WinRunner fully integrates with our other testing solutions, including LoadRunner for load testing and TestDirector for global test management. Moreover, organizations can reuse WinRunner test scripts with QuickTest Professional.

- WinRunner “Features and Benefits” webpage from Mercury:
http://www.mercury.com/us/products/quality-center/functional-testing/winrunner/features.html
Pros:
Mature product that has been around since about 1995.
Simple interface.
Many features.
Many consultants and user group/forums for support.
Decent built in help.
Fewer features to have to learn and understand compared to QuickTest Pro.
Interfaces with the Windows API.
Integrates with TestDirector.

Cons:
Has essentially been superseded by QuickTest Pro.
Test cases are presented as “program code”, which is less approachable for non-programmers.
Coding is done in a proprietary language (TSL).
Very few resources available on TSL programming (it is based on the C programming language, but is not C).
Need to be able to program to a certain extent in order to gain flexibility and parameterization.
Need training to implement properly.
The GUI Map can be difficult to understand and implement.

QuickTest Pro
Summary:
QuickTest Professional provides an interactive, visual environment for test development.

Here is the description from the Mercury Interactive “How it Works” section of the QuickTest Pro web page:
Mercury QuickTest Professional™ allows even novice testers to be productive in minutes. You can create a test script by simply pressing a Record button and using an application to perform a typical business process. Each step in the business process is automatically documented with a plain-English sentence and screen shot. Users can easily modify, remove, or rearrange test steps in the Keyword View. QuickTest Professional can automatically introduce checkpoints to verify application properties and functionality, for example to validate output or check link validity.
For each step in the Keyword View, there is an ActiveScreen showing exactly how the application under test looked at that step. You can also add several types of checkpoints for any object to verify that components behave as expected, simply by clicking on that object in the ActiveScreen. You can then enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
Advanced testers can view and edit their test scripts in the Expert View, which reveals the underlying industry-standard VBScript that QuickTest Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View. Once a tester has run a script, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test script specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with Mercury Quality Management, you can share reports across an entire QA and development team.
QuickTest Professional also facilitates the update process. As an application under test changes, such as when a “Login” button is renamed “Sign In,” you can make one update to the Shared Object Repository, and the update will propagate to all scripts that reference this object. You can publish test scripts to Mercury Quality Management, enabling other QA team members to reuse your test scripts, eliminating duplicative work. QuickTest Professional supports functional testing of all popular environments, including Windows, Web, .Net, Visual Basic, ActiveX, Java, SAP, Siebel, Oracle, PeopleSoft, terminal emulators, and Web services.

- QuickTest Pro “How it Works” webpage from Mercury: http://www.mercury.com/us/products/quality-center/functional-testing/quicktest-professional/works.html

We like QuickTest Pro and now prefer implementing it over WinRunner. When you get into advanced testing scenarios, QuickTest Pro has more options, and in our opinion they are easier to implement than WinRunner's.

Due to the similarities in concept and features, an experienced WinRunner user can easily convert to QuickTest Pro and quickly become an efficient Test Automation Engineer!

We recommend that existing customers begin all new development with QuickTest Pro and use the built-in feature of calling WinRunner scripts from QuickTest Pro for all existing WinRunner scripts that they already have. As older scripts require updates and time permits, we recommend replacing them with QuickTest Pro scripts. Eventually your entire test script library can be converted to QuickTest Pro scripts.
Pros:
Will be getting the initial focus on development of all new features and supported technologies.
Ease of use.
Simple interface.
Presents the test case as a business workflow to the tester (simpler to understand).
Numerous features.
Uses a real programming language (Microsoft’s VBScript) with numerous resources available.
QuickTest Pro is significantly easier for a non-technical person to adapt to and create working test cases, compared to WinRunner.
Data table integration is better and easier to use than in WinRunner.
Test run iterations/data-driving a test is easier and better implemented with QuickTest.
Parameterization is easier than in WinRunner.
Can enhance existing QuickTest scripts without the “Application Under Test” being available; by using the ActiveScreen.
Can create and implement the Microsoft Object Model (Outlook objects, ADO objects, FileSystem objects, supports DOM, WSH, etc.).
Better object identification mechanism.
Numerous existing functions available for implementation – both from within QuickTest Pro and VBScript.
QTP supports .NET development environment (currently WinRunner 7.5 does not).
XML support (currently WinRunner 7.5 does not).
The Test Report is more robust in QuickTest compared to WinRunner.
Integrates with TestDirector and WinRunner (can kick off WinRunner scripts from QuickTest).
Cons:
Currently there are fewer resources (consultants and expertise) available, because QTP is a newer product on the market and demand currently outstrips supply.
Must know VBScript in order to program at all.
Must be able to program in VBScript in order to implement the really advanced testing tasks and to handle very dynamic situations.
Need training to implement properly.
The Object Repository (OR) and “testing environment” (paths, folders, function libraries, OR) can be difficult to understand and implement initially.

Importance of Soft Skills

Broadly speaking, we can view software testers as having two kinds of skills: one set used to perform basic duties at work, and another set used to approach work. The former can be categorized as technical skills and the latter as soft skills. To elaborate on soft skills, these are the ones that define one's approach towards work, life, problems, etc. Soft skills are people skills. The best part about mastering them is that their application is not limited to one's profession; their scope reaches all aspects of life. Technical skills may teach one how to meet the expectations of the job, but soft skills teach one to succeed, and to exceed expectations. It is surprising that we spend our time educating almost exclusively in technical skills.
Having said so much in favor of soft skills, my intention is never to undermine the importance of technical skills. It's nearly impossible for a tester to survive in the profession without sound technical skills. What I intend to challenge here is a popular myth: that technical skills, and only technical skills, make a tester a complete professional. I firmly believe that technical and soft skills complement each other, and the balance between the two is what makes a tester a complete professional. Now, let's have a look at the various soft skills that make up a successful software tester.
Discipline and Perseverance
One obvious aspect of testing is that it can be extremely repetitive and may require a lot of manual effort. Consider the following situations:
A tester is stuck with a bug that is not reproducible in all instances. In order to reproduce the bug, he goes through the whole series of steps again and again.
As part of a daily routine, a tester has been asked to collect data about test cases executed, bugs logged, etc.
After discovering a defect, a tester is supposed to write steps to recreate the defect.
There can be numerous examples that prove the repetitive nature of the job. A very predictable reaction to this repetition is to simply get tired of the job. But soft skills include the psychological tools to persevere, and to find ways to make the effort more productive and interesting. This attitude difference helps a tester maintain focus and a higher level of quality work. It brings the ability to carry out the task at hand in spite of difficulty.
Reading Skills
It may seem odd to classify reading as a skill. But its importance becomes more obvious when we have to deal with large chunks of information every day. As testers, we routinely encounter large quantities of data to read and comprehend. At the requirements review stage, when testers have to review hundreds of pages of requirements, the application of reading as a skill makes a big difference. Consider this fact about reading: an average person reads at a speed of about 200–250 words per minute. With a structured and scientific approach to reading, the reading speed can be more than 500 words per minute, with improved retention and concentration. Correlating this with software testing, a requirements specification that would otherwise take a tester eight hours to read and comprehend would take around four hours with improved reading.
Negative Thinking
Negative thinking can be a useful ally of a tester if it is applied at the right time. For a new product, a tester is working to create a QA plan or a master test plan. While listing the risks involved in the project, a tester has to consider all the things that can go wrong during the lifecycle of the project. Training the mind to think negatively in such situations helps testers develop an efficient contingency plan. Let's also consider the test-design phase. An important part of test coverage and design is the set of tests that represent the ways the application under test could fail. Every tester would agree that testing is incomplete without such tests. Again, negative thinking helps testers derive the negative user scenarios. Thus, negative testing is a skill. A word of caution here: this type of thinking is only for specific situations. A tester has to be smart enough to identify such situations and wear an appropriate thinking hat to deal with them.
Communication and Interpersonal Skills
These form the necessary ingredients for success in any profession. Communication is something that we do all the time, in our personal as well as professional lives. It is a very basic human skill and one cannot go very far without it. Though most of us agree that these skills are important, very few of us give them a high enough priority. For a tester, both verbal and written communication is crucial. Consider the situations below:
A tester communicates a defect in a program to the developer. This communication includes written as well as verbal communication. This moment of communication instantly shapes the rapport a tester enjoys with developers.
The Testing department is often considered the information source for management. This information pertains to product health at any given time in the product's lifecycle. Very often during the lifecycle of a product, a tester is asked to present the product and testing status, either via verbal presentation or written data, e.g. by emails to management.
Many instances can be thought of in the day-to-day work of testers where a tester can make a difference to the situation with effective communication and interpersonal skills.
Time Management and Effort Prioritization
When we talk about time management, it's not the time that we actually manage. We manage ourselves and our tasks so that we make the most of our time. Testers have to juggle a lot of tasks. Consider the instances below:
A tester is involved in Exploratory testing. In such a case, a tester may be testing, creating test cases, documenting results, and creating test metrics all in a day. Such situations call for managing time efficiently.
A tester may be involved in more than one project or module at the same time. The priority of work may vary. Such a situation is common, and one needs to give special attention to effort prioritization even before venturing into multiple projects. Collect all the information that helps one prioritize the efforts.
Time management and effort prioritization define the importance given to each task and hence the sequence in which they should be performed. These skills help a tester manage work better and eliminate time involved in the tasks that are low priority, thus enhancing productivity.
Attitude
A positive attitude is not accidental. It is something that is developed by training oneself. Attitudes are a matter of choice. Every situation we face offers us the chance to choose to react either positively or negatively. Perform a regular attitude checkup; it affects your job every day. Attitude is a soft skill, and it is central to a tester's ability to develop the other soft skills.

Annoying colleagues at your workplace
With most of us spending 10-14 hours at work every day, our workplaces have become our second homes. As a result, even the slightest hindrance here tends to blow up into a vexing issue, particularly if it is not tackled in time. Very often, these issues relate to our colleagues' behaviour. It can get a little awkward when it comes to addressing some of these directly, as a lot of them concern subtle aspects of behaviour that are difficult to articulate.
Let's take a look at some of the most annoying aspects of workplace behaviour and what we can do about them.
Groupism
This is by far the most annoying aspect of a workplace.
"It can absolutely turn you off," says Revathi M, assistant manager -- sales, with an IT security company. "It takes a heavy toll on productivity because, if you don't belong in a certain group, you tend to feel left out. Then, you don't enjoy going to work anymore."
The snide remarks and covert glances that result from groupism are not only thoroughly unprofessional, they can also result in emotional hurt which is often difficult to express. It eventually leads to frustration and may result in people leaving their jobs.
Casual chatter
"The most irritating thing at the workplace is groups of women chatting endlessly about clothes, cosmetics and jewellery. Some of them even trade in these items at work. I think it's really unprofessional," says Purnima Gupta, a teacher at a reputed Mumbai school.
While casual conversations are fine when one wants to make small talk, one needs to realise extended chatter at the workplace disturbs other people. It also looks unprofessional.
Hypocrisy
This is widely touted as being omnipresent and is universally detested.
Sugary sweet behaviour in front of a person and backstabbing comments behind their back are known to prevail in virtually every kind of human interaction. The natural fallout of hypocrisy at the workplace is lack of trust, which greatly affects work relationships and productivity.
Discrimination
"When we are angry with something our boss does, we try hard to control our emotions and behave in a subdued manner. However, if a peon goofs up even slightly, a lot of us don't think twice before yelling at him. Is this justified?" wonders Revathi.
Dignity of labour and respect for all kinds of work is a prerequisite for a healthy work environment. We must appreciate that people at all levels provide value with whatever work they do. It can be discouraging if they are not treated with dignity, considering they work to the best of their ability, given individual constraints.

Messy cubicle partners
Another trait that can really upset people is messy surroundings. Eating at the workstation and dropping tidbits of food, or having heaps of papers and files that spill over to your neighbours' desks can be very bothersome.
A lot of people are fussy about cleanliness and are used to a certain standard of hygiene around them. If those standards are not met at the workplace, it can be very demotivating.
Undue inquisitiveness
While it is common for colleagues to turn into good friends over time, a certain level of formality is expected while one is at work. When this formality is breached, not everyone may take it well.
"When colleagues are unduly concerned about where I went the previous evening, with whom, why, etc, I really feel like telling them it is none of their business. If I wish to share personal thoughts with someone at the workplace, I need to be comfortable with that person. It has to be voluntary. The concept of personal space and privacy is rather alien to our culture," observes Purnima.
Taking credit
It is but natural that we want to be appreciated for the work we do. However, since most of the work we do in an organisation is team effort, it is important that credit is shared accordingly.
"When it comes to getting work done, the higher-ups often give pep talks on how team work is important. However, when the results come in, each individual and department wants the credit. Typically, in any organisation, the frontline sales people take away the appreciation. The back-end operations group is conveniently forgotten, even though they contribute significantly to the success. This can be extremely frustrating for the people who have worked behind the scenes," says Revathi.
Talking loudly
"I wish some people had silencers fitted into their throats!" says Purnima exasperatedly. "At work, one must realise formal, subdued behaviour is called for. Etiquette demands we keep our voice low so others are not disturbed. The most annoying bit is when people excitedly almost yell over their phones for no reason. I'm sure it's equally annoying for the person at the other end of the line."
Talking loudly is often associated with rustic behaviour that lacks sophistication. It is advisable we keep our tone and pitch low when we are around colleagues.
Tackling annoying behaviour
It is indeed difficult to keep your cool and focus on productivity when behavioural factors affect performance at work. But it is necessary to be assertive if one has to solve the problem.
Of course, assertiveness is different from being accusatory. Assertiveness is all about talking in a factual manner without being judgmental. It involves conveying facts and their possible repercussions without getting emotional, or rude, in the process. Though it is easier said than done, professionalism demands one remain objective while dealing with such situations.
At the organisational level, the HR department -- and managers and supervisors as well -- need to have a keen eye for observing team dynamics. Active intervention and counselling go a long way in smoothing ruffled feathers.
Avoiding annoying behaviour
As individuals, there are a few things that may help us avoid being in the bad books of our colleagues:
Avoid backbiting
At the workplace, never discuss a person in his/ her absence. This simple rule goes a long way in maintaining a healthy environment.
Seek feedback
If you think a colleague has been shying away from you for a while, casually enquire to find out if your behaviour has upset him/ her. If that is the case, patiently listen to your colleague's feelings without getting defensive. Once the person has opened up, it can be easier to resolve the issue.
Respect everyone
Imagine the situation if the entire housekeeping staff goes on strike. We often take a lot of people for granted simply because they may not demand attention. But that does not mean their work is any less important.
Observe formality
A lot of your colleagues may become good friends over time. However, work ethics dictate you remain sensitive to the feelings of everyone at the workplace. Hence, over-friendly behaviour ought to be avoided.

Friday, November 9, 2007

Manual Testing: FAQs

1. What is Software Testing?
2. What is the Purpose of Testing?
3. What types of testing do testers perform?
4. What is the Outcome of Testing?
5. What kind of testing have you done?
6. What is the need for testing?
7. What are the entry criteria for Functionality and Performance testing?
8. What is test metrics?
9. Why do you go for White box testing, when Black box testing is available?
10. What are the entry criteria for Automation testing?
11. When to start and Stop Testing?
12. What is Quality?
13. What is a Baseline document? Can you name any two?
14. What is verification?
15. What is validation?
16. What is quality assurance?
17. What is quality control?
18. What is SDLC and TDLC?
19. What are the Qualities of a Tester?
20. When to start and Stop Testing?
21. What are the various levels of testing?
22. What are the types of testing you know and you experienced?
23. What exactly is Heuristic checklist approach for unit testing?
24. After completing testing, what would you deliver to the client?
25. What is a Test Bed?
26. What are Data Guidelines?
27. Why do you go for Test Bed?
28. What is Severity and Priority and who will decide what?
29. Can Automation testing replace manual testing? If so, how?
30. What is a test case?
31. What is a test condition?
32. What is the test script?
33. What is the test data?
34. What is an Inconsistent bug?
35. What is the difference between Re-testing and Regression testing?
36. What are the different types of testing techniques?
37. What are the different types of test case techniques?
38. What are the risks involved in testing?
39. Differentiate Test bed and Test Environment?
40. What is the difference between defect, error, bug, failure, fault?
41. What is the difference between quality and testing?
42. What is the difference between White & Black Box Testing?
43. What is the difference between Quality Assurance and Quality Control?
44. What is the difference between Testing and debugging?
45. What is the difference between bug and defect?
46. What is the difference between verification and validation?
47. What is the difference between functional spec. and Business requirement specification?
48. What is the difference between unit testing and integration testing?
49. What is the diff between Volume & Load?
50. What is diff between Volume & Stress?

More Testing Interview Question
51. What is the diff between Stress & Load Testing?
52. What is the Diff between Two Tier & Three tier Architecture?
53. What is the diff between Client Server & Web Based Testing?
54. What is the diff between Integration & System Testing?
55. What is the Diff between Code Walkthrough & Code Review?
56. What is the diff between walkthrough and inspection?
57. What is the Diff between SIT & IST?
58. What is the Diff between static and dynamic?
59. What is the diff between alpha testing and beta testing?
60. What are the Minimum requirements to start testing?
61. What is Smoke Testing & when it will be done?
62. What is Ad-hoc Testing? When can it be done?
63. What is cookie testing?
64. What is security testing?
65. What is database testing?
66. What is the relationship between Quality & Testing?
67. How do you determine what is to be tested?
68. How do you go about testing a project?
69. What is the Initial Stage of testing?
70. What is Web Based Application Testing?
71. What is Client Server Application Testing?
72. What is Two Tier & Three tier Architecture?
73. What is the use of Functional Specification?
74. Why do we prepare test condition, test cases, test script (Before Starting Testing)?
75. Is it not waste of time in preparing the test condition, test case & Test Script?
76. How do you go about testing of Web Application?
77. How do you go about testing of Client Server Application?
78. What is meant by Static Testing?
79. Can the static testing be done for both Web & Client Server Application?
80. In the Static Testing, what all can be tested?
81. Can test condition, test case & test script help you in performing the static testing?
82. What is meant by dynamic testing?
83. Is the dynamic testing a functional testing?
84. Is the Static testing a functional testing?
85. What are the functional testing you perform?
86. What is meant by Alpha Testing?
87. What kind of document do you need to go for Functional testing?
88. What is meant by Beta Testing?
89. At what stage the unit testing has to be done?
90. Who can perform the Unit Testing?
91. When will the Verification & Validation be done?
92. What is meant by Code Walkthrough?
93. What is meant by Code Review?
94. What is the testing that a tester performs at the end of Unit Testing?
95. What are the things, you prefer & Prepare before starting Testing?
96. What is Integration Testing?
97. What is Incremental Integration Testing?
98. What is meant by System Testing?
99. What is meant by SIT?
100. When do you go for Integration Testing?
Questions & Answers
1.What is Software Testing?
Ans- To validate the software against the requirements.
2. What is the Purpose of Testing?
Ans- To check whether the system meets the requirements.
3. What types of testing do testers perform?
Ans- Black Box
4. What is the Outcome of Testing?
Ans- A system which is bug-free and meets the system requirements.
5. What kind of testing have you done?
Ans- Black Box
6. What is the need for testing?
Ans- To make an error-free product and reduce development cost.
7. What are the entry criteria for Functionality and Performance testing?
Ans- Functional: a stable build with the functionality code complete. Performance: after system testing is complete.
8. What is test metrics?
Ans- Metrics showing how many test cases were executed, and of those how many passed, failed or could not be executed.
9. Why do you go for White box testing, when Black box testing is available?
Ans- To check the code itself: its branches, loops and internal logic.
10. What are the entry criteria for Automation testing?
Ans- Should have stable code.
11. When to start and Stop Testing?
Ans- Start as early as requirement gathering; stop when the system meets the requirements and there is no further change in functionality.
12. What is Quality?
Ans- Quality has two aspects, QA and QC. From the customer's point of view, quality means the product is fit for use and meets the user requirements.
13. What is a Baseline document? Can you name any two?
Ans- A document that has been reviewed, agreed upon and placed under change control, and is used as the basis for further work; e.g. the requirement specification and the functional specification.
14. What is verification?
Ans- Reviewing documents and other work products, without executing the code.
15. What is validation?
Ans- To validate the system against the requirements.
16. What is quality assurance?
Ans- Bug prevention activity is called QA.
17. What is quality control?
Ans- Bug detection activity is called QC.
18. What is SDLC and TDLC?
Ans- SDLC- Software Development Life Cycle and Testing is a part of it. TDLC-Test Development Life Cycle.
19. What are the Qualities of a Tester?
Ans- Should have the ability to find hidden bugs as early as possible in the SDLC.
20. When to start and Stop Testing?
Ans- Start: at the time of requirement gathering. Stop: when the system meets the requirements.
21. What are the various levels of testing?
Ans- Unit, Integration, System and Acceptance testing.
22. What are the types of testing you know and you experienced?
Ans- Black box, functional, system, GUI testing, etc.
23. After completing testing, what would you deliver to the client?
Ans- Testware
24. What is a Test Bed?
Ans- The test environment configured for testing, including the test data.
25. What are Data Guidelines?
Ans- Guidelines which are to be followed for the preparation of test data.
26. Why do you go for Test Bed?
Ans- To validate the system against the required input.
27. What is Severity and Priority and who will decide what?
Ans- Severity: how severe the bug is for the application (e.g. critical); usually assigned by the tester. Priority: how urgently the functionality in which the bug occurs needs to be fixed; usually decided by the test lead or triage team.
28. Can Automation testing replace manual testing? If it so, how?
Ans- No. Automation complements manual testing; when there are many modifications in functionality it becomes nearly impossible to keep the automated scripts up to date, so manual testing is still required.
29. What is a test condition?
Ans- Logical input data or conditions against which we validate the system.
30. What is the test data?
Ans- Input data against which we validate the system.
31. What is an Inconsistent bug?
Ans- A bug that is not consistently reproducible.
32. What is the difference between Re-testing and Regression testing?
Ans- Regression: checking that changes in the code have not affected the existing working functionality.
Retesting: testing the same functionality again after a fix, to confirm that the original defect is resolved.
What are the contents in an effective Bug report?
Project, Subject, Description, Summary, Detected By (Name of the Tester), Assigned To (Name of the Developer who is supposed to fix the Bug), Test Lead (Name), Detected in Version, Closed in Version, Date Detected, Expected Date of Closure, Actual Date of Closure, Priority (Medium, Low, High, Urgent), Severity (Ranges from 1 to 5), Status, Bug ID, Attachment, Test Case Failed (Test case that failed for the Bug)
What is Bug Life Cycle?
Bug Life Cycle is nothing but the various phases a Bug undergoes after it is raised or reported.
New or Opened
Assigned
Fixed
Tested
Closed


What are Error Guessing and Error Seeding?
Error Guessing is a test case design technique where the tester has to guess what faults might occur and to design the tests to represent them.
Error Seeding is the process of adding known faults intentionally in a program for the reason of monitoring the rate of detection & removal and also to estimate the number of faults remaining in the program.
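As a rough illustration of how the seeded/found counts are commonly used to estimate the number of faults remaining (a widely cited fault-seeding estimate, not something prescribed by this article; all numbers below are made up):

#include <stdio.h>

/* If testing found s of S seeded faults and n real faults, the total
   number of real faults is often estimated as n * S / s. */
int main(void)
{
	double seeded_total = 20.0;  /* faults deliberately planted   */
	double seeded_found = 15.0;  /* planted faults found by tests */
	double real_found   = 30.0;  /* genuine faults found by tests */

	double estimated_total_real = real_found * seeded_total / seeded_found;
	double estimated_remaining  = estimated_total_real - real_found;

	printf("Estimated real faults in total : %.0f\n", estimated_total_real);
	printf("Estimated real faults remaining: %.0f\n", estimated_remaining);
	return 0;
}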
What is the difference between Bug, Error and Defect?
Error: the deviation between the actual and the expected value.
Bug: a fault found in the development environment, before the product is shipped to the customer.
Defect: a fault found in the product itself, after it has been shipped to the customer.
What are SDLC and STLC? Explain their different phases.
SDLC
Requirement phase
Designing phase (HLD, DLD (Program spec))
Coding
Testing
Release
Maintenance
STLC
System Study
Test planning
Writing Test case or scripts
Review the test case
Executing test case
Bug tracking
Report the defect
What is Ad-hoc testing?
Ad hoc testing is testing the application without following any formal rules or test cases.
For Ad hoc testing one should have strong knowledge about the Application.

Glossary - Terms and Definitions

A
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (visual, hearing, cognitive, etc.).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing: • Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. • The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
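A small sketch of the test cases boundary value analysis typically produces; the 1..100 input range and the accepts() routine are invented purely for the example.

#include <assert.h>

/* Hypothetical validation routine for a field that accepts 1..100. */
static int accepts(int value)
{
	return value >= 1 && value <= 100;
}

int main(void)
{
	assert(!accepts(0));    /* just below the lower boundary  */
	assert( accepts(1));    /* on the lower boundary          */
	assert( accepts(2));    /* just inside the lower boundary */
	assert( accepts(50));   /* typical value                  */
	assert( accepts(99));   /* just inside the upper boundary */
	assert( accepts(100));  /* on the upper boundary          */
	assert(!accepts(101));  /* just above the upper boundary  */
	return 0;
}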
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Component Testing: See Unit Testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
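For example, under the common counting rule V(G) = number of decision points + 1 (equivalently E − N + 2 for a connected flow graph), the made-up routine below has complexity 3, i.e. three basis paths to cover in white-box tests.

#include <assert.h>

/* Two decisions (the loop condition and the if) give V(G) = 3. */
static int count_positive(const int *values, int count)
{
	int positives = 0;
	for (int i = 0; i < count; i++) {   /* decision 1 */
		if (values[i] > 0) {            /* decision 2 */
			positives++;
		}
	}
	return positives;
}

int main(void)
{
	int mixed[] = { 3, -1 };
	assert(count_positive(mixed, 0) == 0);  /* loop body never entered     */
	assert(count_positive(mixed, 2) == 1);  /* if taken and if not taken   */
	return 0;
}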
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing (see the sketch below).
Debugging: The process of finding and removing the causes of software failures.
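A minimal sketch of data-driven testing: the test logic is written once and each row of an external data file becomes one test iteration. The cases.csv file (lines of the form a,b,expected) and the add() routine are hypothetical, invented for the example.

#include <assert.h>
#include <stdio.h>

static int add(int a, int b) { return a + b; }

int main(void)
{
	FILE *f = fopen("cases.csv", "r");
	int a, b, expected, row = 0;

	if (f == NULL) {
		printf("cases.csv not found\n");
		return 1;
	}
	while (fscanf(f, "%d,%d,%d", &a, &b, &expected) == 3) {
		row++;
		assert(add(a, b) == expected);  /* one check per data row */
	}
	fclose(f);
	printf("%d data-driven cases passed\n", row);
	return 0;
}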
Defect: Nonconformance to requirements or to the functional / program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it. See also Static Testing.
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing. • Testing the features and operational behavior of a product to ensure they correspond to its specifications. • Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
High Order Tests: Black-box tests conducted once the software has been integrated.
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs and uninstalls correctly under the supported configurations and operates correctly once installed.
Load Testing: See Performance Testing.
Localization Testing: This term refers to adapting software for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or an application does not crash out.
Mutation Testing: Testing done on the application where bugs are purposely added to it.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.
Path Testing: Testing in which all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing: • The process of exercising software to verify that it satisfies specified requirements and to detect errors. • The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). • The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Automation: See Automated Testing.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case: • Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc. • A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. an amount of test code comparable in size to the production code.
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing: Testing of individual software components.
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.
Black box testing - testing that is not based on any knowledge of internal design or code; tests are based on requirements and functionality. It is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. The aim is to derive sets of input conditions that will fully exercise the functional requirements for a program.
It attempts to find:
incorrect or missing functions
interface errors
errors in data structures or external database access
performance errors
Initialization and termination errors.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, and conditions. Use the control structure of procedural design to derive the test cases.
Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
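A minimal example of what such a unit test (or test driver) can look like; the absolute() function and its expected behaviour are invented purely to show the shape of the test.

#include <assert.h>

/* Hypothetical unit under test. */
static int absolute(int n)
{
	return (n < 0) ? -n : n;
}

/* Minimal test driver: exercises the unit in isolation and fails
   loudly (via assert) if any expectation is not met. */
int main(void)
{
	assert(absolute(5)  == 5);
	assert(absolute(-5) == 5);
	assert(absolute(0)  == 0);
	return 0;  /* all unit checks passed */
}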
Incremental integration testing- continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - black-box type testing geared to functional requirements of an application; validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-End testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing - typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. In other words, it is the preliminary testing done on the application to find out whether it is ready for full QA testing.
Regression testing - the selective re-testing of a software system after fixes or modifications of the software or its environment, to ensure that the reported bugs have been fixed, that no previously-working functions have failed as a result of the changes, and that newly added features have not created problems with previous versions of the software. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
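A minimal regression sketch (the format_name() function and its recorded cases are hypothetical); previously-passing cases are kept as a suite and replayed after every change so a fix in one place cannot silently break behaviour that used to work:

def format_name(first, last):
    # Function under test; imagine it was just modified to trim whitespace.
    return f"{last.strip()}, {first.strip()}"

# Cases recorded when they first passed; replayed on every build.
regression_suite = [
    (("Ada", "Lovelace"), "Lovelace, Ada"),
    (("  Alan", "Turing  "), "Turing, Alan"),
]

failures = [(args, expected, format_name(*args))
            for args, expected in regression_suite
            if format_name(*args) != expected]
assert not failures, f"Regression detected: {failures}"
print("No regressions: all previously-passing cases still pass")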
Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
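A minimal load sketch in Python; handle_request() is a hypothetical stand-in for the system under test, and a real load test would drive the deployed system over the network with a dedicated load tool. The idea shown is simply to increase the number of concurrent users and watch how the total response time changes:

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for the system under test (simulates 10 ms of work).
    time.sleep(0.01)

for users in (1, 10, 50, 100):
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(handle_request)
    elapsed = time.time() - start
    print(f"{users:>3} concurrent users -> {elapsed:.3f}s to serve all requests")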
Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe tests such as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc. Stress testing is designed to exercise the software in abnormal situations; it attempts to find the limits at which the system will fail through an abnormal quantity or frequency of inputs (a minimal sketch follows the list below). For example:
Higher rates of interrupts
Data rates an order of magnitude above 'normal'
Test cases that require maximum memory or other resources.
Test cases that cause 'thrashing' in a virtual operating system.
Test cases that cause excessive 'hunting' for data on disk systems.
Stress testing can also include sensitivity testing, to determine whether particular combinations of otherwise normal inputs can cause improper processing.
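A minimal stress sketch (the parse_log() routine is hypothetical); the input volume is pushed far beyond the normal data rate to see whether the routine completes or fails in a controlled way rather than crashing:

def parse_log(lines):
    # Stand-in for the routine under stress: counts error lines.
    return sum(1 for line in lines if "ERROR" in line)

# Normal volume might be thousands of lines; here we push millions.
huge_input = ("INFO ok" if i % 100 else "ERROR boom" for i in range(5_000_000))
try:
    errors = parse_log(huge_input)
    print(f"Handled 5,000,000 lines, found {errors} errors")
except MemoryError:
    print("Failed under stress: ran out of memory")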
Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
User acceptance testing - determining if software is satisfactory to an end-user or customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers, and typically conducted with the customer's involvement.
Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
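A minimal mutation sketch (is_adult() and its mutant are hypothetical); the same test data is run against the original function and against a deliberately mutated copy, and useful test data should 'kill' the mutant by failing on it. Real mutation tools generate thousands of such mutants automatically, which is why the technique is computationally expensive:

def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # mutant: '>=' replaced with '>'

tests = [(17, False), (18, True), (30, True)]

def run(func):
    # Returns True only if every test case passes against func.
    return all(func(age) == expected for age, expected in tests)

assert run(is_adult) is True            # original passes
assert run(is_adult_mutant) is False    # mutant is detected (killed)
print("Test data killed the mutant, so the test set is doing its job")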
Condition testing - aims to exercise all logical conditions in a program module.
Conformance testing - verifying implementation conformance to industry standards; producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.