Software Testing Experience
Wednesday, April 9, 2008
Integration Testing
Definition:
- Integration: The process of combining software elements, hardware elements or both into an overall subsystem/system
- Integration testing: Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them
Objectives:
- To check that all data exchanged across an interface agree with the data structure specifications
- To confirm that all the control flows have been implemented
- To check data reliability
- To check all the communication delays
- To check operating system features such as memory consumption
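As a minimal sketch of the first objective, the Python fragment below checks that a record passed between two modules matches its data structure specification. The field names and the SPEC mapping are illustrative assumptions, not taken from any design document mentioned here:

    # Sketch: check that data exchanged across an interface agrees with
    # the data structure specification. All field names are hypothetical.
    SPEC = {"order_id": int, "amount": float, "currency": str}

    def check_against_spec(record, spec):
        """Fail if the record's fields or field types deviate from the spec."""
        assert set(record) == set(spec), "field set mismatch"
        for field, expected_type in spec.items():
            assert isinstance(record[field], expected_type), field

    # A record as produced by one module and consumed by another:
    check_against_spec({"order_id": 7, "amount": 12.5, "currency": "EUR"}, SPEC)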
Brief Explanation of Integration Testing:
A software system is composed of one or more subsystems, which are composed of one or more units (which are composed of one or more modules). Integration is the process of building a software system by combining subsystems into a working entity. Integration of subsystems should proceed in an orderly sequence. This allows the operational capabilities of the software to be demonstrated early and thus gives visible evidence that the project is progressing.
Integration testing focuses on testing multiple modules working together. It tests the reliability and functionality of groups of units (modules) that have been combined into larger segments. The most efficient method of integration is to slowly and progressively combine the separate modules into small segments rather than merging all of the units into one large component.
Integration tests should verify that major components interface correctly. The scope of integration testing is to verify the design and implementation of all components, from the lowest level defined in the architectural design up to the system level. The approach should outline the types and amounts of testing required.
The amount of integration testing required is dictated by the need to:
- Check that all data exchanged across an interface agree with the data structure specifications in the Detailed Design Document.
- Confirm that all the control flows in the Detailed Design Document have been implemented.
Note: The amount of control flow testing required depends on the complexity of the software.
Though the errors found in integration testing should be much fewer than those found in unit testing, they are more time-consuming to diagnose and fix. Studies of testing have shown that architectural errors can be as much as thirty times as costly to repair as detailed design errors.
Integration test designs, test cases, test procedures and test reports are documented in the Integration Test section of the Software Verification and Validation Plan.
Problem areas:
- Internal:
- between components
- Invocation:
- Call / message passing
- Parameters:
- type, number, order, value (see the sketch at the end of this list)
- Invocation return:
- identity (who?), type, sequence
- External:
- Interrupts (wrong handler?)
- I/O timing
- Interaction
- between modules/units/subsystems
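For example, parameter faults (wrong type, number, order, or value) typically pass unit testing of each module and only surface when caller and callee are combined. A minimal Python sketch, with hypothetical names:

    # Sketch of a parameter-order interface fault. The callee expects
    # (amount, account_id); a caller written as transfer("ACC-9", 100)
    # would pass its own unit tests yet fail this integration check.
    # All names are hypothetical.

    def transfer(amount, account_id):
        return f"moved {amount} to account {account_id}"

    def make_payment():
        # Keyword arguments make the parameter order explicit:
        return transfer(amount=100, account_id="ACC-9")

    # Integration test on the combined caller/callee pair:
    assert make_payment() == "moved 100 to account ACC-9"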
Integration Test Procedure
The following steps are to be followed:
- Test plan
- Test case designing
- Review and Approval of Test cases
- Test Execution
- Test Evaluation
- Reporting
1. Test Plan
The test plan is a document that instructs the tester what tests to perform in order to integrate and test the already existing individual code modules.
- Inputs for Test Plan:
- Detailed Design & Analysis document
- Software Requirement Specification
- Quality Plan
- Test Items:
- List each of the items/programs/units to be tested
- Features to be tested:
- List each of the features/functions/requirements which will be tested
- Order of the modules to be tested
- Features not to be tested:
- Explicitly list each of the features/functions/requirements which will not be tested, and why
- Approach:
- Should specify the approach to be used to test the designated groups of features. The different approaches are:
- Top Down Approach
- Bottom-Up Approach
- Sandwich Approach
- Big-Bang Approach
- Should specify the tools to be used to test the designated groups of features.
- Item pass/fail criteria:
- Should specify the criteria to be used to decide whether each test item has passed or failed testing
- Suspension/Resumption criteria:
- Specify the circumstances under which all or a portion of the testing activity on the items associated with this plan will be suspended
- Specify the testing activities to be repeated when testing resumes
- Checkpoints in long tests
- Test Deliverables:
- Test Software
- Test Cases
- Test Data
- Test Reports
- Environmental needs:
- Hardware (client/server platforms, memory, backup systems, etc.)
- Software (OS, database, language, 3rd party software (if any))
- Test tool
- Training:
- Describe the plan for providing training in the use of the software being tested
- Specify the types of training
- Personnel to be trained
- The training staff.
- Schedule:
- Description of overhead software (such as stubs and drivers), concentrating on any that may require special effort
- Time for writing the integration test procedures
- Time for performing the tests
- Time for documenting the test results
Note: Start and end dates should be given for each phase
- Resources:
- Who and how many are needed
- What skills are needed
- How much time needs to be allotted
- What kinds of tools are needed (hardware, software, platforms, etc.)
- Exit criteria:
- Bug rate falls below a certain level
- Deadlines (release deadlines, testing deadlines, etc.)
- When test goals have been reached
- Code coverage
- Defect density
- Approvals:
- By Team Leader
- By Test Manager
- By Project Manager
- By Client
2. Designing Test Cases
- Input required for designing test cases:
- Software Requirement Specification
- Integration Test Plan
- Detailed Design & Analysis document
- Test Items:
- List of test items to be tested
- Brief description of each item
- Environmental needs:
- Hardware: Specify the characteristics and configuration of the hardware required to execute the test case
- Software: Specify the system and application software required to execute the test case
- Test Case Design Method:
- Black Box Testing
- Test Case Description:
- Test Case ID
- Name of the person who prepared the test cases
- Name of the person executing the test
- Date and version of the execution
- Purpose
- Test Case Description
- Input value
- Expected result
- Actual Result
- Roles and Responsibility:
- Test cases need to be designed prior to application development and are generally prepared by a test designer
- Test case Output:
- Test Case Document
- Test Case Report
- Designing test cases:
- Objectives:
- To check the flow of messages between modules/components.
- To verify the object lifetime as per the sequence diagram.
- To verify that error handling is done correctly.
- Guidelines:
- To check the flow of messages between modules/components.
- Understand the High and Low level design
- Study the sequence and collaboration diagrams that show the integration of different components (use case integration to build the system)
- Identify the components to be integrated from the sequence diagrams
- Identify the main (start) method name, which calls methods in other modules.
- Identify the parameters passed
- Identify the data structure of the parameters
- Identify the domain values of each parameter passed
- Divide the input domain into equivalence classes, such that all valid and invalid values fall into one of the classes
- Identify boundary values of each equivalence class as input data
- Identify some random values from the class as input data
- Identify the order in which the parameters passed between the components
- Write stubs or driver programs for the test case
- To verify the object lifetime as per the sequence diagram (a minimal sketch follows this list)
- Identify the relative object lifetime from the sequence diagram
- Write a program that prints messages when:
- An object is created
- An object is destroyed
- Check the output against the sequence diagram
- Error handling testing should be done:
- Error noted must correspond to error encountered
- Error condition must not cause system intervention prior to error handling
- Exception-condition processing must be correct
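A minimal sketch of the object-lifetime check above, in Python: the program prints a message on creation and destruction so the output can be compared against the sequence diagram. The OrderProcessor object is a hypothetical example, not from any diagram referenced here:

    # Sketch: print messages when an object is created and destroyed,
    # then check the output against the sequence diagram.
    # "OrderProcessor" is a hypothetical object name.
    class LifetimeTraced:
        def __init__(self):
            print(f"CREATED   {type(self).__name__}")

        def __del__(self):
            print(f"DESTROYED {type(self).__name__}")

    class OrderProcessor(LifetimeTraced):
        def process(self, order_id):
            print(f"OrderProcessor.process({order_id})")

    def scenario():
        processor = OrderProcessor()   # expected: CREATED OrderProcessor
        processor.process(42)
        # processor goes out of scope here:
        # expected: DESTROYED OrderProcessor

    scenario()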
3. Reviews and Approval of Test Cases
- Review:
- For its Completeness
- For its Correctness
- For its Consistency
- For its Uniqueness
- Test Case Review checklist:
- Do the integration test cases accurately implement the integration test plan?
- Are all materials in the proper physical format?
- Does each integration test case include a complete description of the expected input and output or result?
- Have all testing dependencies been addressed (driver function, hardware, etc.)?
- Are the integration test cases unambiguous?
- Are all data set definitions and set-up requirements complete and accurate?
- Have all inter-case dependencies been identified and described?
- Is each condition tested once and only once?
- Are all test entrance and exit criteria observed?
- Are the integration test cases designed to show the reaction to failure and not merely the absence of failure?
- Are the integration test cases designed to show omissions and extensions?
- Are the integration test cases correct?
- Are the integration test cases realistic?
- Are internal Configuration Management set up, directories established, and case data and tools loaded?
- Are operator instructions and status indicators complete, accurate, and simple?
- Have all testing execution procedures been defined and documented?
- Have all integration test case standards been followed?
- Has the entire testing environment been documented?
- Are the integration test cases documented so as to be 100% reproducible?
- Is the documentation clear and understandable for its typical audience?
- Approval:
- By Team Leader
- By Test Manager
- By Project Manager
- By Client
4. Test Execution
- Inputs:
- Executable Code
- Design Document
- Unit Test cases
- Unit Test Report
- Code Walkthrough Report
- Approval by Project/Test Manager
- Integration Test Cases
- Integration Test Case Report
- Integration Test Plan
- Process:
- Perform random unit testing
- If the random tests pass, start the integration testing phase; if they fail, return the code to the developer
- Perform integration testing as detailed in the test plan, using the test cases and procedures
- The corresponding stub or driver program is identified and executed for each test case
- Document the test cases with actual results
- Track the integration test report in the test repository
- Output:
- Test Case Report
5. Test Evaluation
- Inputs:
- Integration Test Case Report
- Defect Classification
- Integration test plan
- Process:
- Analyze the test results to determine whether the software correctly implements the requirements
- Identify the test cases that failed, and trace the exact scenario and description of the failure that caused each test case to fail
- Classify the defects as per the defect classification guidelines
- Check that the interfaces were tested as specified in the integration test plan
- Output:
- Issue Log
6. Test Reporting
The following steps are to be followed for each execution of the test process:
- Input:
- Test case Report
- Defect Classification Report
- Issue log
- Test Report Process:
- Document the bug description for each failed Test Case in the Test Report
- Document the Summary Report after completing the test process
- Output:
- Test Case Report
- Test Report
- Test Summary Report
- Test Report Description:
- Report ID
- Report Title
- Project Name & Version
- Environment
- Test Case ID
- Module Name
- Date of testing
- Tested By
- Iteration
- Bug Description
- Defect type classification
- Priority
- Status
- Tester comments
- Developer comments
- Test Summary Report Description:
- Report ID
- Report Title
- Project Name & Version
- Author
- Summary of Defects found with severity
- Summary of test cases executed with priority
- Number of resources who worked
- Start Date
- End Date
- Time Period
Integration Testing Standards
- Integration Test plan has to be signed off by the test manager prior to testing
- Test cases have to be designed and approved prior to testing
- Defect classification has to be done prior to testing
- Test templates have to be designed and approved prior to testing
- Test evaluation needs to be given the utmost importance
- Test control activities have to be identified
- Test reporting needs a standard convention for writing test reports
- Look for data exchange format problems
- Module invocation sequence problems
- Module synchronization problems
- Module response time problems
Existing Practices
- No practices followed
Recommended Practices
- Test plan has to be prepared and signed off by the test manager prior to testing
- Test cases have to be designed and approved prior to testing
- Defect classification has to be defined prior to testing
- Standard test templates need to be designed and approved prior to testing
- Test evaluation needs to be given the utmost importance
- Test control activities have to be identified and regularized
- Test reporting needs a standard convention for writing test reports
- Review of test reports has to be done at regular intervals
Integration Testing Approaches:
- Incremental approaches
- Top-down
- Bottom-up
- Sandwich
- Non-Incremental approaches
- Big-bang
Top-Down Approach:
Top down integration is basically an approach where modules are developed and tested starting at the top level of the programming hierarchy and continuing with the lower levels.
It is an incremental approach because we proceed one level at a time. It can be done in either a "depth-first" or a "breadth-first" manner.
- Depth first means we proceed from the top level all the way down to the lowest level.
- Breadth first means that we start at the top of the hierarchy and then go to the next level. We develop and test all modules at this level before continuing with another level.
Steps:
- The main control module is used as the test driver, with stubs for all subordinate modules (a sketch follows the example orderings below).
- Replace stubs either depth first or breadth first.
- Replace stubs one at a time.
- Test after each module integrated.
- Use regression testing (conducting all or some of the previous tests) to ensure new errors are not introduced.
For a sample module hierarchy rooted at A (modules A through M):
- Breadth first: A B C D E F G H I J K L M
- Depth first: A B E H C F I D G J L M K
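A minimal top-down sketch in Python: the top-level control module is tested first against a stub for a subordinate module, and the stub is later replaced by the real module. The main_control and pricing names are hypothetical, not from the original text:

    # Top-down sketch: exercise the real top-level module while a stub
    # stands in for a not-yet-integrated subordinate module.
    # All names are hypothetical.

    def pricing_stub(item_id):
        """Stub for the subordinate 'pricing' module: returns canned data."""
        return {"item_id": item_id, "price": 9.99}

    def main_control(get_price):
        """Top-level control module; its collaborator is passed in."""
        quote = get_price("SKU-1")
        return round(quote["price"] * 2, 2)   # e.g., an order of two items

    # Test the top level against the stub first:
    assert main_control(pricing_stub) == 19.98
    # Later, replace pricing_stub with the real pricing function and re-run
    # the same assertion (regression testing) after each integration step.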
Features:
- Control program is tested first.
- Modules are integrated one at a time
- Major emphasis is on interface testing
- Starts at sub-system level with modules as 'stubs'
- Then tests modules with functions as 'stubs'
- Used in conjunction with top-down program development
Bottom-Up Approach:
The Bottom-up approach, as the name suggests, is the opposite of the Top-down method: this process starts with building and testing the low-level modules first, working its way up the hierarchy.
Because the modules at the low levels are very specific, we may need to combine several of them into what is sometimes called a cluster or build in order to test them properly.
Then to test these builds, a test driver has to be written and put in place.
Steps
- Lower level modules are combined into builds or clusters
- Develop a driver for a cluster
- Test the cluster
- Replace the driver with the module next up in the hierarchy (a sketch follows the example builds below)
For the same sample module hierarchy:
- Build 1: H E B
- Build 2: I F C D
- Build 3: L M J K G
- After that, integrate Builds 1, 2, and 3 with module A
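Conversely, a minimal bottom-up sketch: a throwaway driver feeds test data through a cluster of low-level modules before their real caller exists. Again, the module names are hypothetical:

    # Bottom-up sketch: a test driver exercises a cluster of low-level
    # modules; a higher-level module will replace the driver later.
    # All names are hypothetical.

    def net_price(gross, discount):
        """Low-level module 1: apply a discount."""
        return gross * (1 - discount)

    def add_tax(amount, rate):
        """Low-level module 2: add sales tax."""
        return amount * (1 + rate)

    def cluster_driver():
        """Driver: stands in for the real higher-level caller."""
        result = add_tax(net_price(100.0, 0.1), 0.2)
        assert abs(result - 108.0) < 1e-9

    cluster_driver()
    # When the higher-level module is ready, it replaces this driver and
    # the cluster is re-tested through the real call path.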
Features:
- Allows early testing aimed at proving feasibility of particular modules
- Modules can be integrated in various clusters as desired
- Major emphasis is on module functionality and performance
Sandwich Approach:
Sandwich testing uses bottom-up and top-down testing simultaneously. It merges the code modules from the top and the bottom so that they eventually meet in the middle. This approach is reasonable for integrating a large software product. The module interfaces are tested early in the test phase, and a basic prototype is available early.
Steps
- Combine top down and bottom up
- Consider
- A, B, C, D, G, and J are logic modules ==> top down
- E, F, H, I, K, and L are functional modules ==> bottom up
- When all modules have been appropriately integrated, the interfaces between the two groups are tested one by one
- The tester must be able to map the subsystem decomposition into three layers
- A target layer ("the meat")
- A layer above the target layer ("the bread above")
- A layer below the target layer ("the bread below")
- The target layer is the focus of attention; top-down and bottom-up testing can be done in parallel
[Figure: the three subsystem layers (Layer I, Layer II, Layer III), with top-layer tests working downward and bottom-layer tests working upward onto the target layer]
Features:
- Module drivers and stubs are needed as in bottom-up and top-down testing.
- As in the top down approach, the module interfaces are tested early in the test phase, and a basic prototype is available.
- The functional specifications of the software product also are tested early, since they are usually implemented in the lower-level modules.
Big-Bang Approach:
The Big-Bang approach is very simple in its philosophy: all the modules or builds are constructed and tested independently of each other, and when they are finished, they are all put together at the same time. It is a simple and common approach. For small and well-designed software products, the big-bang approach is feasible (a minimal sketch follows the steps below).
Steps for Integration Testing:
- All modules should be unit tested
- Choose integration testing strategy
- Do White Box testing/Black Box testing, test input/output parameters
- Exercise all modules and all calls
- Keep records (test results, test activities, faults)
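A minimal big-bang sketch under the same hypothetical module names as above: each module is unit tested on its own, then everything is combined and tested at once. Note that a failure in the combined test could lie in any module or any interface:

    # Big-bang sketch: unit test each module independently, then
    # integrate them all at once. All names are hypothetical.

    def net_price(gross, discount):
        return gross * (1 - discount)

    def add_tax(amount, rate):
        return amount * (1 + rate)

    def quote(gross, discount, rate):
        return add_tax(net_price(gross, discount), rate)

    # Unit tests, runnable independently (and therefore in parallel):
    assert abs(net_price(100.0, 0.1) - 90.0) < 1e-9
    assert abs(add_tax(90.0, 0.2) - 108.0) < 1e-9

    # Big-bang integration: everything combined and tested at the same
    # time; pinpointing a failure here is harder than in the tests above.
    assert abs(quote(100.0, 0.1, 0.2) - 108.0) < 1e-9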
Features:
- Module stubs and drivers are not required
- Ideally, all logic errors in a module will be detected when the module is being tested independent of the others. Debugging is easier as it is localized to the module under test.
- There is significant potential for parallel testing. To test one module, we need not wait for the testing of another to complete. Given enough personnel, all modules can be tested in parallel. So, this phase of the testing can be completed quickly.
- The testing of large programs using this approach is expected to require less computer time. The full memory requirements of the program are needed only when the integrated program is being tested.
- Module interface problems have to be dealt with only after individual modules have been tested. In the ideal case, all bugs found when the integrated program is being tested will be related to module interface problems alone.
Advantages/Disadvantages:
Top-down:
- Advantages:
- Having the skeleton, we can test major functions early in the development process
- We have a partially working model to demonstrate to the clients and the top management
- The control program plus a few modules forms a basic early prototype
- No test drivers are needed
- Interface errors are discovered early
- Design errors are trapped earlier
- Enables early validation of design
- Modular features aid debugging
- Disadvantages:
- Difficult to produce a stub for a complex component
- Test stubs are needed
- The extended early phases dictate a slow manpower buildup
- Errors in critical modules at low levels are found late
- Difficult to observe test output from top-level modules
- The development of test stubs is time consuming and error prone
- Comments:
- We have a partially working model to demonstrate to the clients and the top management, but it is hard to maintain a pure top-down approach in practice
Bottom-up:
- Advantages:
- No test stubs are needed
- It is easier to adjust manpower needs
- Errors in critical modules are found early
- Easier to create test cases and observe output
- Uses simple drivers for low-level modules to provide data and the interface
- Interface faults can be more easily found: when the developers substitute a test driver with a higher-level component, they have a clear model of how the lower-level component works and of the assumptions embedded in its interface
- Disadvantages:
- Test drivers are needed
- Many modules must be integrated before a working program is available
- High-level errors may cause changes in lower modules
- It tests the most important subsystems, namely the components of the user interface, last
Sandwich:
- Advantages:
- Many testing activities can be performed in parallel
- Disadvantages:
- It does not thoroughly test the individual components of the target layer before integration
- Additional test stubs and drivers are needed
- Comments:
- In general, use a combination of the top-down and bottom-up approaches
- Critical modules should be tested and integrated early
Big-bang:
- Advantages:
- It is very quick, as no drivers or stubs are needed, thus cutting down on the development time
- Disadvantages:
- There is nothing to demonstrate until all the modules have been built and integrated
- It is expensive: if a test uncovers a failure, it is difficult to pinpoint the specific component (or combination of components) responsible for the failure
- Comments:
- Usually considered the least effective approach
- For very small applications the big-bang approach may make sense; for large applications this approach is complex
| Criterion | Bottom-Up | Top-Down | Sandwich | Big-Bang |
| Integration | Early | Early | Early | Late |
| Time to get working program | Late | Early | Early | Late |
| Drivers | Yes | No | Yes | Yes |
| Stubs | No | Yes | Yes | Yes |
| Test specification | Easy | Hard | Medium | Easy |
| Product control seq. | Easy | Hard | Hard | Easy |
Guidelines for Integration Testing
Top-down Approach:
- Start with the high-levels of a system and work your way downward
Bottom-up Approach:
- Start with the lower levels of the system and work upward
Labels: Manual Testing