Integration testing follows unit testing with the goal of validating the architectural design. It ensures that higher-level functional capabilities in software components, subsystems rather than individual units, behave and perform as expected. Many software organizations test integrations bottom up and top down, often using a combination of both approaches.
Integration testing is a critical aspect of the software verification process in DO-178C. The explicit requirements for integration testing can be found primarily in Section 5.4 Integration Process and Section 6.4 Software Testing.
Section 6.4.3 Requirements-Based Testing Methods in DO-178C requires hardware and software requirements-based testing, which includes integration testing. Section 6.4.3.b is more specific and outlines requirements-based integration testing as a method that concentrates on the “inter-relationships between the software requirements” and on the “implementation of requirements by the software architecture.”
DO-178C lists the following typical errors revealed by integration testing:
Incorrect initialization of variables and constants
Parameter passing errors
Data corruption, especially global data
Inadequate end-to-end numerical resolution
Incorrect sequencing of events and operations
Bottom-up integration testing begins by taking a unit test case and removing stubs and/or mocks to incorporate additional software units, constructing higher-level functionality that can be tested. That functionality maps to a high-level requirement, and the resulting integration test cases are used to verify and validate high-level requirements.
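Here is a minimal sketch of that progression, using plain C++ asserts rather than any particular test framework. The Sensor interface, AltitudeMonitor unit, and 500 ft threshold are hypothetical, invented for illustration only.

```cpp
#include <cassert>

// Interface shared by the real sensor and the test stub.
struct Sensor {
    virtual ~Sensor() = default;
    virtual double readAltitudeFt() = 0;
};

// Unit under test: raises an alert below a threshold.
struct AltitudeMonitor {
    explicit AltitudeMonitor(Sensor& s) : sensor(s) {}
    bool lowAltitudeAlert() { return sensor.readAltitudeFt() < 500.0; }
    Sensor& sensor;
};

// Unit test: the stub isolates AltitudeMonitor from real hardware.
struct StubSensor : Sensor {
    double value = 0.0;
    double readAltitudeFt() override { return value; }
};

void unitTest_lowAltitudeAlert() {
    StubSensor stub;
    AltitudeMonitor monitor(stub);
    stub.value = 450.0;
    assert(monitor.lowAltitudeAlert());   // below threshold -> alert
    stub.value = 1200.0;
    assert(!monitor.lowAltitudeAlert());  // above threshold -> no alert
}

// Integration test: the stub is removed and the real (or simulated)
// sensor unit is linked in, exercising both units together against
// the high-level requirement.
struct BaroSensor : Sensor {
    double readAltitudeFt() override {
        return 1200.0;  // stand-in for the real driver call
    }
};

void integrationTest_monitorWithRealSensor() {
    BaroSensor sensor;
    AltitudeMonitor monitor(sensor);
    assert(!monitor.lowAltitudeAlert());
}

int main() {
    unitTest_lowAltitudeAlert();
    integrationTest_monitorWithRealSensor();
}
```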
In top-down integration testing, the highest-level software components or modules are tested first, and testing of progressively lower-level modules follows as their functional capabilities map to high-level requirements. This approach assumes significant subsystems are complete enough to be tested as a whole.
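A compact sketch of the top-down direction follows, with a hypothetical lower-level Navigation module stubbed out until it is ready for integration; all names here are invented for illustration.

```cpp
#include <cassert>
#include <string>

// Lower-level subsystem, not yet integrated.
struct Navigation {
    virtual ~Navigation() = default;
    virtual std::string nextWaypoint() = 0;
};

// Top-level component tested first.
struct MissionController {
    explicit MissionController(Navigation& n) : nav(n) {}
    std::string plan() { return "fly-to:" + nav.nextWaypoint(); }
    Navigation& nav;
};

// Stub stands in for the lower-level module during top-down testing.
struct StubNavigation : Navigation {
    std::string nextWaypoint() override { return "WP1"; }
};

int main() {
    StubNavigation nav;
    MissionController controller(nav);
    assert(controller.plan() == "fly-to:WP1");
    // As lower-level modules mature, the stub is replaced by the
    // real Navigation implementation and the test is re-run.
}
```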
The V-model is useful for illustrating the relationship between the stages of development and the stages of validation. At each testing stage, progressively more complete portions of the software are validated against the phase that defined them.
For some, the V-model might imply a Waterfall development method. However, this is not the case. DO-178C and previous versions of the standard do not specify a development methodology. The V-model shows a required set of development phases, and organizations determine how to address those phases. Teams can adopt Waterfall, Agile, Spiral, or any other development methodology and still comply with the standard.
While the act of executing tests and gathering their results is considered software validation, it is supported by a parallel verification process that ensures teams are building both the process and the product correctly.
The key role of verification is to ensure that the artifacts delivered from the previous stage are built to specification and comply with company and industry guidelines.
Performing some level of test automation is foundational for continuous testing. Many organizations start by simply automating manual integration and system testing (top down) or unit testing (bottom up).
To enable continuous testing, organizations need to focus on creating a scalable test automation practice that builds on a foundation of unit tests, which are isolated and fast to execute. Once unit testing is fully automated, the next step is integration testing and eventually system testing.
Continuous testing leverages automation and data derived from testing to provide a real-time, objective assessment of the risks associated with a system under development. Applied uniformly, it allows both business and technical managers to make better trade-off decisions between release scope, time, and quality.
Continuous testing is a powerful methodology that ensures continuous code quality throughout the SDLC. It enforces compliance through static code analysis and identifies safety and security defects at each developer commit by integrating unit, integration, and system testing into the loop.
The diagram below illustrates how different phases of testing are part of a continuous process that relies on a feedback loop of test results and analysis.
Parasoft test automation tools support validation, the actual test execution activities, through test automation and continuous testing. These tools also support verification of those activities, meaning they support process and standard requirements. Two key aspects of safety-critical software development are requirements traceability and code coverage.
DO-178C considers traceability a key activity and artifact of the development process. Section 5.5 Software Development Process Traceability and Section 6.4 Software Testing require bidirectional traceability between high-level and low-level requirements and the implementation, verification, and validation assets, which include:
Source code
Requirement documents
Test results
Development plans and more
Requirements analysis demands that “All software requirements should be identified in such a way as to make it possible to demonstrate traceability between the requirement and software system testing.” Providing a requirements traceability matrix helps satisfy this requirement.
Requirements in safety-critical software are the key driver for product design and development. These requirements include functional safety, application requirements, and nonfunctional requirements that fully define the product. This reliance on documented requirements is a mixed blessing because poor requirements are one of the critical causes of safety incidents in software. In other words, the implementation wasn’t at fault, but poor or missing requirements were.
Maintaining traceability records at any sort of scale requires automation. Application life cycle management (ALM) tools include mature requirements management capabilities and tend to be the hub for traceability.
Integrated software testing tools like Parasoft complete the verification and validation of requirements by providing an automated bidirectional traceability to the executable test case. This includes the pass or fail result and traces down to the source code that implements the requirement.
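To make the idea concrete, here is a hedged sketch of how a test might carry its requirement association so a tool can build the traceability link. The @req tag, the REQ-1042 identifier, and the TraceRecord type are invented for this example; they do not represent Parasoft's actual annotation or report format.

```cpp
#include <cassert>
#include <iostream>
#include <string>

// A minimal traceability record: requirement ID, test name, result.
struct TraceRecord {
    std::string requirementId;
    std::string testName;
    bool passed;
};

// Test annotated with the requirement it verifies; a parsing tool
// could read the @req tag to build the traceability link.
// @req REQ-1042 "Alert when altitude drops below 500 ft"
TraceRecord test_REQ_1042() {
    const double measuredAltitudeFt = 450.0;   // stand-in test input
    bool pass = (measuredAltitudeFt < 500.0);  // expected: alert raised
    return {"REQ-1042", "test_REQ_1042", pass};
}

int main() {
    TraceRecord r = test_REQ_1042();
    // A reporting dashboard would aggregate records like this one
    // into rows of a requirements traceability matrix.
    std::cout << r.requirementId << " <- " << r.testName
              << " : " << (r.passed ? "PASS" : "FAIL") << '\n';
    assert(r.passed);
}
```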
Parasoft integrates with market-leading requirements management tools and ALM systems.
As shown in the image below, each of Parasoft’s test automation solutions (C/C++test, C/C++test CT, Jtest, dotTEST, SOAtest, and Selenic) used within the development life cycle supports the association of tests with work items defined in these systems, such as requirements, defects, and test cases or test runs. The central reporting and analytics dashboard, Parasoft DTP, manages traceability.
Parasoft DTP correlates the unique identifiers from the requirements management system with artifacts such as test results, static analysis findings, source code files, and code reviews. Results are displayed within Parasoft DTP’s traceability reports and sent back to the requirements management system, providing full bidirectional traceability and reporting as part of the system’s traceability matrix.
The traceability reporting in Parasoft DTP is highly customizable. The following image shows a requirements traceability matrix template for requirements authored in Polarion, with traces to the test cases, static analysis findings, source code files, and manual code reviews.
The bidirectional correlation between test results and work items provides the basis of requirements traceability. Parasoft DTP adds test and code coverage analysis to evaluate test completeness. Maintaining this bidirectional correlation between requirements, tests, and the artifacts that implement them is an essential component of traceability.
Code coverage expresses the degree to which the application’s source code is exercised by all testing practices, including unit, integration, and system testing, both automated and manual.
Collecting coverage data throughout the life cycle enables more accurate quality and coverage metrics while exposing untested or undertested parts of the application.
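A small illustration of the kind of gap coverage data exposes: the clamp() function and its tests below are hypothetical, and the gcov commands in the comment are one common way to collect the data.

```cpp
#include <cassert>

int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;  // never executed by the tests below
    if (v > hi) return hi;  // executed
    return v;               // executed
}

int main() {
    assert(clamp(5, 0, 10) == 5);
    assert(clamp(15, 0, 10) == 10);
    // Running this under a coverage tool, e.g. with gcc/gcov:
    //   g++ --coverage clamp.cpp -o clamp && ./clamp && gcov clamp.cpp
    // flags the `v < lo` branch as uncovered, prompting a new test.
}
```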
As with traceability, code coverage is a key metric in airborne systems development. DO-178C has specific requirements in Section 6.4.4 Test Coverage Analysis. These requirements extend beyond code coverage and include the test coverage of all high-level and low-level requirements, along with the test coverage of the entire software structure.
Section 6.4.4.2 Structural Code Analysis requires the test coverage of source code beyond what may already be covered with requirements-based testing. This ensures that all code is executed by tests before certification. This code coverage analysis may reveal issues such as missing tests and dead or deactivated code. Section 6.4.4.3 Structural Coverage Analysis Resolution requires the remediation of these discrepancies discovered during coverage analysis.
Application coverage can also help organizations focus testing efforts when time constraints limit their ability to run the full suite of manual regression tests. Capturing coverage data from the running system on its target hardware during integration and system testing complements the coverage collected during unit testing.
Captured coverage data is leveraged as part of the continuous integration (CI) process as well as the tester’s workflow. Parasoft DTP performs advanced analytics on code coverage from all tests, source code changes, static analysis results, and test results. The results help identify untested and undertested code and other high-risk areas in the software.
Analyzing code, executing tests, tracking coverage, and reporting the data in a dashboard or chart is a useful first step toward assessing risk, but teams must still dedicate significant time and resources to reading the tea leaves, hoping they’ve interpreted the data correctly.
Understanding the potential risks in the application requires advanced analytics processes that merge and correlate the data. This provides greater visibility into the true code coverage and helps identify testing gaps and overlapping tests. For example, what’s the true coverage for the application under test when your tools report different coverage values for unit tests, automated functional tests, and manual tests?
The percentages cannot simply be added together because the tests overlap. Merging coverage data across testing practices is a critical step for understanding the level of risk associated with the application under development.
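A worked sketch of the arithmetic, with invented line sets, shows why merged coverage must be computed as a set union rather than a sum of percentages.

```cpp
#include <iostream>
#include <set>

int main() {
    const int totalLines = 100;

    // Lines covered by each practice (illustrative data only).
    std::set<int> unitTests  = {1, 2, 3, 4, 5, 6, 7, 8};     // 8%
    std::set<int> functional = {5, 6, 7, 8, 9, 10, 11, 12};  // 8%
    std::set<int> manual     = {10, 11, 12, 13, 14};         // 5%

    // Union of all covered lines; overlapping lines count once.
    std::set<int> merged = unitTests;
    merged.insert(functional.begin(), functional.end());
    merged.insert(manual.begin(), manual.end());

    // Naive addition says 8% + 8% + 5% = 21%, but the true merged
    // coverage of these sets is 14/100 = 14%.
    std::cout << "true coverage: "
              << 100.0 * merged.size() / totalLines << "%\n";
}
```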
Test impact analysis uses data collected during test runs, along with the code changes between builds, to determine which files have changed and which specific tests touched those files. Parasoft’s analysis engine can analyze the delta between two builds and identify the subset of regression tests that needs to be executed. It also understands the dependencies of the modified units, determining the ripple effect the changes have on other units.
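Here is a simplified sketch of the selection step, assuming per-test coverage maps were recorded on a baseline build. File and test names are invented, and a real engine also analyzes unit dependencies to catch ripple effects, which this sketch omits.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

int main() {
    // Per-test file coverage captured on the baseline build.
    std::map<std::string, std::set<std::string>> testToFiles = {
        {"test_alert",   {"monitor.cpp", "sensor.cpp"}},
        {"test_route",   {"nav.cpp"}},
        {"test_logging", {"log.cpp"}},
    };

    // Files modified between the baseline and the current build.
    std::set<std::string> changed = {"sensor.cpp"};

    // Re-run only tests whose covered files intersect the change set.
    for (const auto& [test, files] : testToFiles) {
        for (const auto& file : files) {
            if (changed.count(file)) {
                std::cout << "re-run: " << test << '\n';  // test_alert
                break;
            }
        }
    }
}
```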
Parasoft Jtest and dotTEST provide insight into the impact of software changes and recommend where to add tests and where further regression testing is needed.
Parasoft’s software test automation tools accelerate verification by automating many of the tedious aspects of record keeping, documentation, reporting, and analysis.