
DO-178C Software Compliance for Aerospace and Defense

Software Integration Testing

Integration testing follows unit testing, with the goal of validating the architectural design. It ensures that higher-level functional capabilities in software components and subsystems, rather than individual units, behave and perform as expected. Integration testing can proceed bottom up, top down, or, as in many software organizations, with a combination of both approaches.

Integration testing is a critical aspect of the software verification process in DO-178C. The explicit requirements for integration testing can be found primarily in Section 5.4 Integration Process and Section 6.4 Software Testing.

Section 6.4.3 Requirements-Based Testing Methods in DO-178C requires requirements-based testing of both hardware/software integration and software integration. Section 6.4.3.b is more specific: it describes requirements-based software integration testing as a method that concentrates on the “inter-relationships between the software requirements” and on the “implementation of requirements by the software architecture.”

DO-178C lists the following typical errors revealed by integration testing; a short sketch after the list illustrates one of them.

  • Incorrect interrupt handling
  • Failure to satisfy execution time requirements
  • Incorrect software response to hardware transients or hardware failures, for example, start-up sequencing, transient input loads, and input power transients
  • Data bus and other resource contention problems, for example, memory mapping
  • Inability of built-in test to detect failures
  • Errors in hardware/software interfaces
  • Incorrect behavior of control loops
  • Incorrect control of memory management hardware or other hardware devices under software control
  • Stack overflow
  • Incorrect operation of mechanism(s) used to confirm the correctness and compatibility of field-loadable software
  • Violations of software partitioning
  • Incorrect initialization of variables and constants
  • Parameter passing errors
  • Data corruption, especially global data
  • Inadequate end-to-end numerical resolution
  • Incorrect sequencing of events and operations
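Many of these errors live at the boundaries between units. As a minimal sketch of a parameter passing error (the units, names, and scaling are hypothetical), consider two units that disagree about whether a sample is expressed in volts or millivolts. A unit test against a stub can mask the mismatch; an integration test that links both real units exposes it.

    /* Sketch: a unit-boundary scaling bug revealed at integration. */
    #include <assert.h>

    /* Low-level unit: rejects out-of-range readings given in millivolts. */
    static int filter_sample(int sample_mv)
    {
        return (sample_mv >= 0 && sample_mv <= 5000) ? sample_mv : -1;
    }

    /* Higher-level unit: forwards volts where millivolts are expected. */
    static int read_sensor(int raw_volts)
    {
        return filter_sample(raw_volts); /* bug: should pass raw_volts * 1000 */
    }

    int main(void)
    {
        /* Integration test: a 3 V reading should survive filtering as 3000 mV. */
        assert(read_sensor(3) == 3000); /* fails here, exposing the mismatch */
        return 0;
    }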

Bottom-up Integration

This approach begins by taking a unit test case and removing its stubs and/or mocks so that additional software units are incorporated, building up higher-level functionality that can be tested. Each functional capability maps to a high-level requirement, and integration test cases are used to verify and validate those high-level requirements.
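A minimal sketch of the idea, with hypothetical units and a hypothetical requirement: the same test case is first run against a stub during unit testing, then relinked against the real lower-level module for integration testing.

    /* Build for unit testing:        cc -DUSE_STUB test.c
     * Build for integration testing: cc test.c altimeter_driver.c */
    #include <assert.h>

    #ifdef USE_STUB
    /* Stub used while unit testing the warning logic in isolation. */
    static int read_altitude_ft(void) { return 10000; }
    #else
    int read_altitude_ft(void); /* real driver unit, linked in at integration */
    #endif

    /* Maps to the (hypothetical) high-level requirement:
     * "A warning shall be raised above 45,000 ft." */
    static int altitude_warning(void) { return read_altitude_ft() > 45000; }

    int main(void)
    {
        assert(altitude_warning() == 0); /* verifies the high-level requirement */
        return 0;
    }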

Top-Down Integration

In this approach, the highest-level software components or modules are tested first, and testing of progressively lower-level modules follows as their functional capabilities are mapped to high-level requirements. This approach assumes significant subsystems are complete enough to be tested as a whole.
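The mirror image of the bottom-up sketch, again with hypothetical modules: the top-level controller is tested first against stubs, which are replaced one by one as the real subsystems become available.

    #include <assert.h>

    /* Stubs standing in for lower-level subsystems not yet integrated;
     * each is replaced by the real module as it becomes available. */
    static int nav_ready(void) { return 1; }
    static int fuel_ok(void)   { return 1; }

    /* Top-level module, tested first in top-down integration. */
    static int takeoff_permitted(void) { return nav_ready() && fuel_ok(); }

    int main(void)
    {
        assert(takeoff_permitted() == 1);
        return 0;
    }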

The V-model is good for illustrating the relationship between the stages of development and the stages of validation. At each testing stage, progressively more complete portions of the software are validated against the development phase that defined them.

For some, the V-model might imply a Waterfall development method. However, this is not the case: DO-178C and previous versions of the standard do not specify a development methodology. The V-model shows a required set of development phases, and organizations determine how to address them. Teams can adopt Waterfall, Agile, Spiral, or any other development methodology and remain compliant with the standard.

The V-model development process showing the relationship between each phase and the verification and validation inferred at each stage of testing.

While the act of executing tests and gathering their results is considered software validation, it’s supported by a parallel verification process that involves the following activities to make sure teams are building both the product and the process correctly.

  • Reviews
  • Walkthroughs
  • Code analysis
  • Traceability
  • Testing
  • Code coverage and more

The key role of verification is to ensure that the artifacts delivered by each stage are built to the specification of the previous stage and comply with company and industry guidelines.

Integration & System Testing as Part of a Continuous Testing Process

Performing some level of test automation is foundational for continuous testing. Many organizations start by simply automating manual integration and system testing (top down) or unit testing (bottom up).

To enable continuous testing, organizations need to focus on creating a scalable test automation practice that builds on a foundation of unit tests, which are isolated and fast to execute. Once unit testing is fully automated, the next step is integration testing and, eventually, system testing.

Continuous testing leverages automation and data derived from testing to provide a real-time, objective assessment of the risks associated with a system under development. Applied uniformly, it allows both business and technical managers to make better tradeoff decisions between release scope, time, and quality.

Continuous testing is a powerful testing methodology that ensures continuous code quality throughout the SDLC. It enforces compliance through static code analysis and identifies safety and security defects at each developer commit by integrating unit, integration, and system testing into the loop.

The diagram below illustrates how different phases of testing are part of a continuous process that relies on a feedback loop of test results and analysis.

A continuous testing cycle

Analysis & Reporting in Support of Integration & System Testing

Parasoft test automation tools support validation, the actual test execution activities, through test automation and continuous testing. These tools also support the verification of those activities by addressing process and standards requirements. Key aspects of safety-critical software development are requirements traceability and code coverage.

DO-178C considers traceability a key activity and artifact of the development process. Sections 5.5 Software Development Process Traceability and 6.4 Software Testing require bidirectional traceability between high-level and low-level requirements and the implementation, verification, and validation assets, which include:

  • Source code
  • Requirement documents
  • Test results
  • Development plans and more

Requirements analysis in DO-178C states: “All software requirements should be identified in such a way as to make it possible to demonstrate traceability between the requirement and software system testing.” Providing a requirements traceability matrix helps satisfy this requirement.
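One common way to make that traceability demonstrable is to tag each test with the identifier of the requirement it verifies, so a matrix can be generated from test results. The tag format and requirement ID below are purely illustrative, not any particular tool’s syntax.

    #include <assert.h>
    #include <stdio.h>

    static int clamp_throttle(int pct)
    {
        return pct > 100 ? 100 : (pct < 0 ? 0 : pct);
    }

    /* Traces to hypothetical requirement SRS-042:
     * "Throttle command shall be limited to 0..100 percent." */
    static void test_SRS_042_throttle_limits(void)
    {
        assert(clamp_throttle(150) == 100);
        assert(clamp_throttle(-5)  == 0);
        printf("SRS-042: PASS\n"); /* result row for the traceability matrix */
    }

    int main(void)
    {
        test_SRS_042_throttle_limits();
        return 0;
    }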


Two-Way Traceability

Requirements in safety-critical software are the key driver for product design and development. These requirements include functional safety, application requirements, and nonfunctional requirements that fully define the product. This reliance on documented requirements is a mixed blessing because poor requirements are one of the critical causes of safety incidents in software. In other words, the implementation wasn’t at fault, but poor or missing requirements were.


Automating Bidirectional Traceability

Maintaining traceability records at any sort of scale requires automation. Application life cycle management (ALM) tools include mature requirements management capabilities and tend to be the hub for traceability.

Integrated software testing tools like Parasoft’s complete the verification and validation of requirements by providing automated bidirectional traceability to the executable test case. This includes the pass or fail result and traces down to the source code that implements the requirement.

Parasoft integrates with market-leading requirements management tools and ALM systems, including Codebeamer and Siemens Polarion.

As shown in the image below, each of Parasoft’s test automation solutions (C/C++test, C/C++test CT, Jtest, dotTEST, SOAtest, and Selenic) used within the development life cycle supports the association of tests with work items defined in these systems, such as requirements, defects, and test cases or test runs. The central reporting and analytics dashboard, Parasoft DTP, manages traceability.

An example of a DO-178C reporting dashboard that captures the project’s testing status and progress towards completion.

Parasoft DTP correlates the unique identifiers from the management system with:

  • Static analysis findings
  • Code coverage
  • Results from unit, integration, and functional tests

Results are displayed within Parasoft DTP’s traceability reports and sent back to the requirements management system, providing full bidirectional traceability and reporting as part of the system’s traceability matrix.

Codebeamer traceability matrix, which lists system requirements from high level to low level along with test cases and test results.

The traceability reporting in Parasoft DTP is highly customizable. The following image shows a requirements traceability matrix template for requirements authored in Polarion, with traces to the test cases, static analysis findings, source code files, and manual code reviews.

Requirements traceability matrix template from Parasoft DTP integrated with Siemens Polarion.

The bidirectional correlation between test results and work items provides the basis of requirements traceability. Parasoft DTP adds test and code coverage analysis to evaluate test completeness. Maintaining this bidirectional correlation between requirements, tests, and the artifacts that implement them is an essential component of traceability.

Code Coverage

Code coverage expresses the degree to which the application’s source code is exercised by all testing practices, including unit, integration, and system testing, both automated and manual.

Collecting coverage data throughout the life cycle enables more accurate quality and coverage metrics while exposing untested or undertested parts of the application.

As with traceability, code coverage is a key metric in airborne systems development. DO-178C has specific requirements in Section 6.4.4 Test Coverage Analysis. These requirements extend beyond code coverage and include the test coverage of all high-level and low-level requirements, along with the test coverage of the entire software structure.

Section 6.4.4.2 Structural Coverage Analysis requires test coverage of the source code beyond what may already be covered by requirements-based testing. This ensures that all code is executed by tests before certification. Coverage analysis may reveal issues such as missing tests and dead or deactivated code. Section 6.4.4.3 Structural Coverage Analysis Resolution requires the remediation of discrepancies discovered during coverage analysis.
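As a small sketch of what structural coverage analysis catches (the unit and requirement are hypothetical): suppose the requirements-based tests exercise only the high-speed path. Building with GCC’s coverage instrumentation and running gcov reports the low-speed branch as unexecuted, a discrepancy to resolve under Section 6.4.4.3.

    /* Build and measure: gcc --coverage gain.c && ./a.out && gcov gain.c */
    #include <assert.h>

    static int select_gain(int airspeed_kt)
    {
        if (airspeed_kt > 250)
            return 2;  /* exercised by the requirements-based test below */
        return 5;      /* never executed: flagged by coverage analysis */
    }

    int main(void)
    {
        /* Only the high-speed requirement has a test; the low-speed
         * path is the structural coverage gap. */
        assert(select_gain(300) == 2);
        return 0;
    }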

Application coverage can also help organizations focus testing efforts when time constraints limit their ability to run the full suite of manual regression tests. Capturing coverage data on the running system on its target hardware during integration and system testing complements the code coverage collected during unit testing.

Benefits of Aggregate Code Coverage

Captured coverage data is leveraged as part of the continuous integration (CI) process as well as the tester’s workflow. Parasoft DTP performs advanced analytics on code coverage from all tests, source code changes, static analysis results, and test results. The results help identify untested and undertested code and other high-risk areas in the software.

Analyzing code, executing tests, tracking coverage, and reporting the data in a dashboard or chart is a useful first step toward assessing risk, but teams must still dedicate significant time and resources to reading the tea leaves and hoping they’ve interpreted the data correctly.

Understanding the potential risks in the application requires advanced analytics processes that merge and correlate the data. This provides greater visibility into the true code coverage and helps identify testing gaps and overlapping tests. For example, what’s the true coverage for the application under test when your tools report different coverage values for unit tests, automated functional tests, and manual tests?

The percentages cannot simply be added together because the tests overlap. This is a critical step for understanding the level of risk associated with the application under development.

Aggregated code coverage from various testing methods
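A worked sketch of why the numbers cannot be summed (the line sets are invented for illustration): aggregate coverage is the union of the lines each test run executed, so 60% from unit tests plus 50% from manual tests can yield a true aggregate of 80%, not 110%.

    #include <stdio.h>

    #define TOTAL_LINES 10

    int main(void)
    {
        /* 1 = line executed. Unit tests cover 6/10 lines; manual tests
         * cover 5/10, three of which overlap with the unit-test run. */
        int unit_run[TOTAL_LINES]   = {1,1,1,1,1,1,0,0,0,0};
        int manual_run[TOTAL_LINES] = {1,1,1,0,0,0,1,1,0,0};

        int merged = 0;
        for (int i = 0; i < TOTAL_LINES; i++)
            merged += (unit_run[i] || manual_run[i]);

        /* Prints "aggregate coverage: 80%", not 60% + 50% = 110%. */
        printf("aggregate coverage: %d%%\n", merged * 100 / TOTAL_LINES);
        return 0;
    }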

Understanding the Impact of Code Changes on Testing With Test Impact Analysis

Test impact analysis uses data collected during test runs and changes in code between builds to determine which files have changed and which specific tests touched those files. Parasoft’s analysis engine can analyze the delta between two builds and identify the subset of regression tests that need to be executed. It also understands the dependencies on the units modified to determine the ripple effect the changes have made on other units.
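Conceptually (this is a toy model, not Parasoft’s engine), test impact analysis intersects per-test coverage data with the set of files changed between two builds to select the tests worth rerunning.

    #include <stdio.h>
    #include <string.h>

    struct test_record {
        const char *name;
        const char *files[3]; /* files this test touched, NULL-terminated */
    };

    int main(void)
    {
        struct test_record tests[] = {
            { "test_nav",  { "nav.c", "geo.c", NULL } },
            { "test_fuel", { "fuel.c", NULL, NULL } },
        };
        const char *changed[] = { "geo.c" }; /* delta between two builds */

        for (unsigned t = 0; t < sizeof tests / sizeof tests[0]; t++) {
            int impacted = 0;
            for (unsigned f = 0; tests[t].files[f] && !impacted; f++)
                for (unsigned c = 0; c < sizeof changed / sizeof changed[0]; c++)
                    if (strcmp(tests[t].files[f], changed[c]) == 0)
                        impacted = 1;
            if (impacted)
                printf("rerun: %s\n", tests[t].name); /* impacted subset only */
        }
        return 0;
    }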

Parasoft Jtest and dotTEST provide insight into the impact of software changes and recommend where to add tests and where further regression testing is needed.

Accelerating Integration & System Testing With Test Automation Tools

Parasoft’s software test automation tools accelerate verification by automating the many tedious aspects of record keeping, documentation, analysis, and reporting.


Two-way traceability

Two-way traceability for all artifacts ensures requirements have code and tests to prove they are being fulfilled. Metrics, test results, and static analysis results are traced to components and vice versa.


Code and test coverage

Code and test coverage verifies all requirements are implemented and makes sure the implementation is tested as required.


Target and host-based test execution

Target and host-based test execution supports different validation techniques as required.


Smart test execution

Smart test execution manages change by focusing tests on only the code that changed and any impacted dependents.


Reporting and analytics

Reporting and analytics provide insight for making important decisions and keep track of progress. Decision making needs to be based on data collected from the automated processes.


Automated documentation generation

Automated documentation generation from analytics and test results supports process and standards compliance.


Standards compliance automation

Standards compliance automation reduces overhead and complexity by automating the most repetitive and tedious processes. The tools keep track of the project history and relate results to requirements, software components, tests, and recorded deviations.


Elevate your software testing with Parasoft solutions.