Software Testing Experience

Wednesday, April 9, 2008

Testing Definitions




Acceptance Testing: The process of comparing a program against its requirements.


Ad-Hoc Testing: Appropriate, and very often conducted, when a tester wants to become familiar with the product, or when technical/testing materials are not 100% complete. It relies largely on a general understanding of software product functionality and testing, together with ordinary human common sense.


Build Acceptance Test: The build acceptance test is a simple check of a product's basic functionality, used to determine whether the product is capable of being tested to a greater extent. Every new build should undergo a build acceptance test before further testing is executed. Examples of build acceptance criteria: the product can be installed with no crashes or errors that terminate the installation (Development needs to install the software from the same source accessed by QA, e.g. drop zone, CD-ROM, electronic software distribution archives); clients can connect to their associated servers; simple client/server communication can be achieved.
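
As a rough illustration, a build acceptance check might be automated along these lines. This is a minimal sketch; the `install_build` and `connect_to_server` helpers are hypothetical stand-ins for the product's real install and connection steps:

```python
import unittest

# Hypothetical stand-ins for the real install and connection steps.
def install_build():
    """Pretend to install the build; return True if no fatal error occurred."""
    return True

def connect_to_server(host):
    """Pretend to open a client connection; return True on success."""
    return host == "qa-server"

class BuildAcceptanceTest(unittest.TestCase):
    def test_install_completes(self):
        # The build must install with no crash or fatal installer error.
        self.assertTrue(install_build())

    def test_client_connects(self):
        # A client must be able to reach its associated server.
        self.assertTrue(connect_to_server("qa-server"))

if __name__ == "__main__":
    unittest.main()
```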


Bottom-Up: Testing starts with the terminal (bottom-level) modules of the program. Under the bottom-up strategy, the program as a whole does not exist until the last module is added.
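
For instance, a sketch with a hypothetical low-level `parse_price` module: the terminal routine is exercised directly by a small test driver before any of the modules that will eventually call it exist.

```python
import unittest

# Hypothetical lowest-level module: converts a price string to cents.
def parse_price(text):
    dollars, _, cents = text.lstrip("$").partition(".")
    return int(dollars) * 100 + int(cents or 0)

class BottomUpDriver(unittest.TestCase):
    """Test driver for the terminal module; the calling modules do not exist yet."""
    def test_whole_dollars(self):
        self.assertEqual(parse_price("$3.00"), 300)

    def test_missing_cents(self):
        self.assertEqual(parse_price("$3"), 300)

if __name__ == "__main__":
    unittest.main()
```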


CET (Customer Experience Test): An in-house test performed before the Alpha, Beta, and FCS milestones, used to determine whether the product can be installed and used without any problems, assistance, or support from others.


Client-Server Test: Testing systems that operate in client/server environments.


Compatibility Test: This test verifies compatibility between different client/server version combinations, as well as with other supported products.


Confidence Test: The confidence test ensures a product functions as expected, verifying that platform-specific bugs are not introduced and that functionality has not regressed from drop to drop. A typical confidence test is designed to touch all major areas of a product's functionality. These tests are run regularly, once the Functional Freeze milestone is reached, throughout the remaining development cycle.






Configuration Tests: These tests exercise the product across various system configuration combinations. Examples of configurations: cross-platform setups (e.g. Windows clients against a UNIX server); client/server network configurations; operating system and database combinations (including version combinations); web servers and web browsers (for web products). The system configurations to test are determined from the product's compatibility matrix. This test is sometimes called a 'Platform Test'.


Depth Test: The depth tests are designed to test all the product's functionality in-depth.


Error Test: The error test is designed to test the dialogs, alerts, and other feedback provided to the user when an error situation occurs. The difference between this test and a negative test is that an error test simply verifies that the correct dialogs are seen, while the negative test is primarily concerned with robustness and recovery.


Event-Driven Testing: Testing event-driven processes, such as unpredictable sequences of interrupts from input devices or sensors, or sequences of mouse clicks in a GUI environment.


Final Installation Test: Verification that the final media, prior to hand-off to Operations for duplication, contains the correct, previously tested code and is installable on all supported platforms and databases. The product demo is executed and the product Release Notes are verified.


Functionality Test: This is designed to test the full functionality, features, and user interfaces of software based upon the functional specifications.


Full Test: A full test is Build Acceptance + Sanity + Confidence + Depth. This is designed to test the full functionality, features, and user interfaces of software based upon the functional specifications.


Graphical User Interface (GUI): Testing the front-end user interfaces of applications that use GUI support systems and standards such as MS Windows or Motif.


GUI Roadmaps: A step-by-step walkthrough of a tool or application, exercising each screen or window's menus, toolbars, and dialog boxes to verify the execution and correctness of the graphical user interface. Typically this is handled by automated scripts and is rarely used as a manual test, owing to the low number of bugs found this way.



Module Testing: To test a large program, it is necessary to use module testing. Module testing (or unit testing) is the process of testing individual subprograms (small blocks) rather than the program as a whole. Module testing eases the task of debugging: when an error is found, it is known in which particular module it lies.
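
A minimal sketch of a module test, assuming a hypothetical `word_count` subprogram: the block is tested in isolation, so a failure points directly at this one module.

```python
import unittest

# The individual subprogram (module) under test.
def word_count(text):
    return len(text.split())

class WordCountModuleTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("testing one small block"), 4)

    def test_empty_string(self):
        # A failure here implicates word_count alone, which eases debugging.
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```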


Multi-User Test: Tests with the maximum number of concurrent users specified in the design, to simulate the real environment in which users operate the product.
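
A sketch of the idea using threads to stand in for concurrent users; the `MAX_USERS` figure and the shared `Counter` resource are hypothetical:

```python
import threading

MAX_USERS = 50  # hypothetical design limit for concurrent users

class Counter:
    """Hypothetical shared resource that every simulated user touches."""
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self):
        with self.lock:
            self.value += 1

def simulated_user(counter):
    for _ in range(100):
        counter.increment()

counter = Counter()
threads = [threading.Thread(target=simulated_user, args=(counter,))
           for _ in range(MAX_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With correct locking, no updates are lost under maximum concurrency.
assert counter.value == MAX_USERS * 100
print("multi-user test passed:", counter.value, "updates")
```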


Negative Test: Tests that deliberately introduce an error to check an application's behavior and robustness. For example, erroneous data may be entered, or attempts made to force the application to perform an operation it should not be able to complete. Generally a message box is generated to inform the user of the problem; if the application must terminate, it should exit gracefully.
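
For example (a sketch around a hypothetical `set_age` routine), a negative test deliberately feeds erroneous data and checks that the application refuses it gracefully rather than crashing:

```python
import unittest

# Hypothetical routine under test: rejects invalid ages instead of crashing.
def set_age(value):
    if not isinstance(value, int) or not 0 <= value <= 150:
        raise ValueError("age must be an integer between 0 and 150")
    return value

class NegativeTest(unittest.TestCase):
    def test_rejects_garbage_input(self):
        # Deliberately erroneous data: the program should fail gracefully,
        # with a clear error, not terminate abruptly.
        with self.assertRaises(ValueError):
            set_age("not a number")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            set_age(-5)

if __name__ == "__main__":
    unittest.main()
```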


Object-Oriented: Testing systems designed or coded using an object-oriented approach or development environment, such as C++ or Smalltalk.


Parallel Testing: Testing by processing the same (or at least closely comparable) test workload against both the existing and new versions of a product, then comparing results.
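
A sketch of the technique, assuming hypothetical `tax_v1` (existing) and `tax_v2` (new) implementations: the same workload is run through both versions and the results compared.

```python
# Hypothetical existing and new versions of the same calculation.
def tax_v1(amount):
    return round(amount * 0.07, 2)

def tax_v2(amount):
    # Reimplemented version; parallel testing checks it matches v1.
    return round(amount * 7 / 100, 2)

workload = [0.0, 9.99, 100.0, 12345.67]  # the shared test workload

mismatches = [(x, tax_v1(x), tax_v2(x))
              for x in workload if tax_v1(x) != tax_v2(x)]
# Any mismatch flags a behavioral difference between old and new versions.
assert not mismatches, f"versions disagree: {mismatches}"
print("parallel test passed on", len(workload), "cases")
```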


Performance Testing: Measurement and prediction of performance (e.g. response time and/or throughput) for a given benchmark workload.
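
A minimal sketch of a performance measurement; the `handle_request` operation and the `TARGET_SECONDS` response-time target are hypothetical:

```python
import time

TARGET_SECONDS = 0.5  # hypothetical response-time target

def handle_request():
    sum(range(100_000))  # stand-in for the real operation under test

# Benchmark workload: measure response time over repeated requests.
REQUESTS = 100
start = time.perf_counter()
for _ in range(REQUESTS):
    handle_request()
elapsed = time.perf_counter() - start

avg = elapsed / REQUESTS
throughput = REQUESTS / elapsed
print(f"avg response time: {avg:.4f}s, throughput: {throughput:.1f} req/s")
assert avg < TARGET_SECONDS, "response time target missed"
```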




Phased Approach: A testing strategy where test cases are developed in stages, so that a minimally acceptable level of testing can be completed at any time. As new features are coded and frozen, they receive priority for a given amount of time, so that a concentrated effort is directed toward testing those new features before the effort returns to validating the preexisting functionality. When no new features are available, preexisting features are targeted, with priorities set by Project Leads.


1st level - Build Acceptance Test

2nd level - Sanity Test

3rd level - Confidence Test

4th level - Depth Test

5th level - Error, Negative, and other Tests

6th level - System level tests


Regression Tests: These tests are used for comprehensive re-testing of software, to validate that all functionality and features of previous builds (or releases) have maintained their integrity. This suite of tests includes the full functionality tests and bug regression tests (automated and manual).


Sanity Test: Sanity tests are subsets of the confidence test and are used only to validate high-level functionality.


Security Testing: Tests how easily the program's security system can be broken or circumvented.
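
One simple sketch of the idea, probing a hypothetical `login` check with hostile input (a classic injection-style attempt):

```python
import unittest

# Hypothetical credential check under test.
VALID = {"alice": "s3cret"}

def login(user, password):
    return VALID.get(user) == password

class SecurityTest(unittest.TestCase):
    def test_injection_style_input_rejected(self):
        # Hostile input must not slip past the check.
        self.assertFalse(login("alice", "' OR '1'='1"))

    def test_empty_password_rejected(self):
        self.assertFalse(login("alice", ""))

if __name__ == "__main__":
    unittest.main()
```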


Stress Test: These tests are used to validate software functionality at its specified limits (e.g. maximum throughput), and then to test at and beyond those limits.
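
A sketch, assuming a hypothetical queue with a documented 1,000-item limit: the test drives the product to the limit and then past it, checking behavior in both cases.

```python
LIMIT = 1000  # hypothetical documented limit

class BoundedQueue:
    """Hypothetical component under test."""
    def __init__(self):
        self.items = []

    def push(self, item):
        if len(self.items) >= LIMIT:
            raise OverflowError("queue limit reached")
        self.items.append(item)

q = BoundedQueue()

# At the limit: the product must still function correctly.
for i in range(LIMIT):
    q.push(i)
assert len(q.items) == LIMIT

# Beyond the limit: the failure must be controlled, not a crash or corruption.
try:
    q.push("one too many")
except OverflowError:
    pass
assert len(q.items) == LIMIT
print("stress test passed at and beyond the limit")
```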


System Level Test: These tests check for factors such as cross-tool interaction, memory management, and other operating system factors.


Top-Down Strategy: Testing starts with the top (highest-level) module of the program; lower-level modules are simulated by stubs until they are integrated.
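
For instance, a sketch using unittest.mock stubs: the top module is tested first, with its not-yet-written subordinate modules replaced by stubs (the `generate_report` function and its collaborators are hypothetical).

```python
import unittest
from unittest import mock

# Top-level module under test; its subordinates may not exist yet.
def generate_report(fetch_data, format_data):
    return format_data(fetch_data())

class TopDownTest(unittest.TestCase):
    def test_top_module_with_stubs(self):
        # Lower-level modules are simulated by stubs until integrated.
        fetch_stub = mock.Mock(return_value=[1, 2, 3])
        format_stub = mock.Mock(return_value="report: 3 rows")
        self.assertEqual(generate_report(fetch_stub, format_stub),
                         "report: 3 rows")
        format_stub.assert_called_once_with([1, 2, 3])

if __name__ == "__main__":
    unittest.main()
```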


Volume Testing: The process of feeding a program heavy volumes of data.
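
A sketch of the idea with a hypothetical `load_records` routine fed a heavy data volume:

```python
# Hypothetical routine under test: ingests records into a lookup table.
def load_records(records):
    table = {}
    for key, value in records:
        table[key] = value
    return table

# Heavy volume of data: one million generated records.
volume = ((i, f"record-{i}") for i in range(1_000_000))
table = load_records(volume)

# The program must survive the volume and keep the data intact.
assert len(table) == 1_000_000
assert table[999_999] == "record-999999"
print("volume test passed with", len(table), "records")
```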


Usability Test: The effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in a particular environment. Synonymous with "ease of use".


Structural Test: A testing method where the test data is derived solely from the program structure.
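
A sketch: the test data below is derived purely from the structure of a small `classify` function (hypothetical), choosing inputs so that every branch is executed.

```python
import unittest

# Function whose structure (two branches) drives the test data selection.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

class StructuralTest(unittest.TestCase):
    # One test case per branch, derived from the code, not from requirements.
    def test_true_branch(self):
        self.assertEqual(classify(-1), "negative")

    def test_false_branch(self):
        self.assertEqual(classify(0), "non-negative")

if __name__ == "__main__":
    unittest.main()
```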


Black Box Testing / Opaque Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing.


Guerilla Testing: Involves ad hoc testing done by someone who is skilled at finding errors on the fly; it is one person's best shot at finding bugs. This approach is typically time-limited. For example, saying that an area will be guerilla-level tested might mean that this area of the program will receive a total of two days of ad hoc testing, spread across the project. Normally, guerilla testing is done after (not instead of) mainstream testing.


Formally Planned Testing: Involves carefully thought-through test cases that are intended to test an area thoroughly. Depending on your company's philosophy of testing, this might mean a set of test cases that collectively trace back to (check) every item in a list of specifications; or a set of test cases that cover all of the extreme values, difficult options, and uses of the program (or other product component) in this area; or something else. It is harsh testing, intended to expose the problems a customer would find if the area were not tested and fixed.

posted by Balaji Visharaman at 10:37 PM