
ISO 26262 Software Compliance in the Automotive Industry

Software Integration Testing

Integration testing follows unit testing with the goal of validating the architectural design or high-level requirements. Integration testing can be done bottom-up and top-down with a combination of approaches likely in many software organizations.

Bottom-Up Integration

Testers take unit test cases and remove stubs or combine them with other code units that make up higher levels of functionality. These types of integrated test cases are used to validate high-level requirements.
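A minimal sketch of the bottom-up idea, with hypothetical module names and scaling (not from any real automotive codebase): a stub that once stood in for the low-level unit is replaced by the real implementation, so one test now exercises both units together.

```python
# A minimal bottom-up integration sketch; module names, the ADC scale
# factor, and the 10 km/h threshold are illustrative assumptions.

def read_wheel_speed_kph(raw_adc_value):
    """Low-level unit: convert a raw ADC reading to km/h."""
    return raw_adc_value * 0.25  # illustrative scale factor

def abs_should_engage(raw_adc_value, brake_pressed):
    """Higher-level unit: engage ABS when braking above 10 km/h."""
    return brake_pressed and read_wheel_speed_kph(raw_adc_value) > 10.0

# During unit testing, read_wheel_speed_kph would be a stub returning a
# fixed speed. In bottom-up integration, the real implementation replaces
# the stub, so these checks validate the combined behavior.
assert abs_should_engage(raw_adc_value=100, brake_pressed=True)       # 25 km/h, braking
assert not abs_should_engage(raw_adc_value=100, brake_pressed=False)  # not braking
assert not abs_should_engage(raw_adc_value=20, brake_pressed=True)    # 5 km/h, too slow
```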

Top-Down Integration

In this approach, the high-level modules or subsystems are tested first, followed progressively by the lower-level modules (sub-subsystems). This approach assumes that significant subsystems are complete enough to be included and tested as a whole.
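The top-down direction can be sketched the other way around, with hypothetical names and control logic: the high-level subsystem is tested first, and an unfinished lower-level module is replaced by a stub that records how it was called.

```python
from unittest import mock

# A minimal top-down integration sketch; the module names and the
# proportional control logic are illustrative assumptions.

def throttle_actuator(percent):
    """Lower-level module, possibly not implemented yet."""
    raise NotImplementedError

def cruise_control_step(current_kph, target_kph, actuator=throttle_actuator):
    """High-level subsystem: a simple proportional throttle command."""
    error = target_kph - current_kph
    command = max(0.0, min(100.0, error * 2.0))  # clamp to 0-100%
    actuator(command)
    return command

# Top-down: exercise the high-level subsystem first, substituting a stub
# for the lower-level actuator and verifying how it was driven.
stub = mock.Mock()
command = cruise_control_step(current_kph=90.0, target_kph=100.0, actuator=stub)
assert command == 20.0
stub.assert_called_once_with(20.0)
```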

The V-model is good for illustrating the relationship between the stages of development and stages of validation. At each testing stage, more complete portions of the software are validated against the phase that defines it.

The V-model might imply a waterfall development method. However, there are ways to incorporate Agile, DevOps, and CI/CD into this type of product development while still being standards-compliant.

Graphic of the V-model showing the relationship between architectural design and integration testing.
The V-model of software development shows the relationship between each phase and the validation performed at each testing stage.

While the act of performing tests is considered software validation, it’s supported by a parallel verification process that involves the following activities to make sure teams are building the process and the product correctly:

  • Reviews
  • Walkthroughs
  • Analysis
  • Traceability
  • Test
  • Code coverage and more

The key role of verification is to ensure that the artifacts delivered from each stage are built to specification and in compliance with company and industry guidelines.

Integration and System Testing as Part of a Continuous Testing Process

Performing some level of test automation is foundational for continuous testing. Many organizations start by automating manual integration and system testing (top-down) or unit testing (bottom-up).

To enable continuous testing, organizations need to focus on creating a scalable test automation practice that builds on a foundation of unit tests, which are isolated and faster to execute. Once unit testing is fully automated, the next step is integration testing and eventually system testing.
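The staged progression can be sketched as a simple pipeline runner. The directory layout and pytest commands below are assumptions, not a prescribed setup; the point is the ordering and the fail-fast behavior.

```python
import subprocess

# A sketch of staged continuous testing; stage names, paths, and
# commands are illustrative assumptions.
STAGES = [
    ("unit",        ["pytest", "tests/unit", "-q"]),         # fast, isolated
    ("integration", ["pytest", "tests/integration", "-q"]),  # combined units
    ("system",      ["pytest", "tests/system", "-q"]),       # end to end
]

def run_pipeline(runner=subprocess.run):
    """Run stages in order; each stage builds on the one before it."""
    for name, cmd in STAGES:
        if runner(cmd).returncode != 0:
            return name  # fail fast and report the first failing stage
    return "passed"
```

In a CI job, `run_pipeline()` would invoke the real test runner; passing a different `runner` keeps the sketch testable without a test suite on disk.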

Continuous testing leverages automation and data derived from testing to provide a real-time, objective assessment of the risks associated with a system under development. Applied uniformly, it allows both business and technical managers to make better trade-off decisions between release scope, time, and quality.

Continuous testing isn’t just more automation. It’s a larger reassessment of software quality practices, driven by an organization’s cost of quality and balanced for speed and agility. Even within the V-model used in safety-critical software development, continuous testing remains a viable approach, particularly during the unit and integration testing phases.

The diagram below illustrates how different phases of testing are part of a continuous process that relies on a feedback loop of test results and analysis.

Example of a continuous testing pipeline
A continuous testing cycle

Parasoft Analysis and Reporting in Support of Integration and System Testing

Parasoft test automation tools support validation (the actual testing activities) through test automation and continuous testing. These tools also support the verification of these activities, meaning they support the process and standards requirements. Two key aspects of safety-critical automotive software development are requirements traceability and code coverage.

Two-Way Traceability

Requirements in safety-critical software are the key driver for product design and development. These requirements include functional safety, application requirements, and nonfunctional requirements that fully define the product. This reliance on documented requirements is a mixed blessing because poor requirements are one of the critical causes of safety incidents in software. In other words, the implementation wasn’t at fault, but poor or missing requirements were.

Automating Bidirectional Traceability

Maintaining traceability records at any sort of scale requires automation. Application lifecycle management (ALM) tools include mature requirements management capabilities and tend to be the hub for traceability. Integrated software testing tools like Parasoft’s complete the verification and validation of requirements by providing automated bidirectional traceability to the executable test case. This includes the pass or fail result and traces down to the source code that implements the requirement.

Parasoft integrates with market-leading requirements management tools and ALM systems such as PTC Codebeamer, Siemens Polarion, Atlassian Jira, Jama Connect, and others. As shown in the image below, each of Parasoft’s test automation solutions used within the development life cycle (C/C++test, C/C++test CT, Jtest, dotTEST, SOAtest, and Selenic) supports the association of tests with work items defined in these systems, such as requirements, defects, and test cases/test runs. Traceability is managed through Parasoft DTP’s central reporting and analytics dashboard.

Graphic of requirements traceability and reporting across Parasoft tools and the requirements management system.
Requirements traceability and reporting

Parasoft DTP correlates the unique identifiers from the management system with:

  • Static analysis findings
  • Code coverage
  • Results from unit, integration, and functional tests

Screenshot of Parasoft DTP Reporting Center dashboard showing bidirectional requirements traceability metrics.
Parasoft provides a reporting dashboard that captures the project’s testing status, correlation to requirements, and progress toward completion.

Results are displayed within Parasoft DTP’s traceability reports and sent back to the requirements management system. They provide full bidirectional traceability and reporting as part of the system’s traceability matrix.

Snapshot of a Codebeamer traceability matrix
Codebeamer traceability matrix. System requirements to high-level requirements to low-level requirements to test cases and test results.

The traceability reporting in Parasoft DTP is highly customizable. The following image shows a requirements traceability matrix template for requirements authored in Polarion that trace to the test cases, static analysis findings, the source code files, and the manual code reviews.

Screenshot of Parasoft DTP requirements traceability matrix integrated with Siemens Polarion.
Requirements traceability matrix template from Parasoft DTP integrated with Siemens Polarion.

The bidirectional correlation between test results and work items provides the basis for requirements traceability. Parasoft DTP adds test and code coverage analysis to evaluate test completeness. Maintaining this correlation between requirements, tests, and the artifacts that implement them is an essential component of traceability.

Code Coverage

Code coverage expresses the degree to which the application’s source code is exercised by all testing practices, including unit, integration, and system testing — both automated and manual.

Collecting coverage data throughout the life cycle enables more accurate quality and coverage metrics while exposing untested or undertested parts of the application. Depending on the safety integrity level (ASIL in ISO 26262), the required depth and completeness of code coverage will vary.

Application coverage can also help organizations focus testing efforts when time constraints limit their ability to run the full suite of manual regression tests. Capturing coverage data from the running system on its target hardware during integration and system testing complements the code coverage collected during unit testing.

Benefits of Aggregate Code Coverage

Captured coverage data is leveraged as part of the continuous integration (CI) process, as well as part of the tester’s workflow. Parasoft DTP performs advanced analytics on code coverage from all tests, source code changes, static analysis results, and test results. The results help identify untested and undertested code and other high-risk areas in the software.

Analyzing code, executing tests, tracking coverage, and reporting the data in a dashboard or chart is a useful first step toward assessing risk, but teams must still dedicate significant time and resources to reading the tea leaves and hoping that they’ve interpreted the data correctly.

Understanding the potential risks in the application requires advanced analytics processes that merge and correlate the data. This provides greater visibility into the true code coverage and helps identify testing gaps and overlapping tests. For example, what is the true coverage for the application under test when your tools report different coverage values for unit tests, automated functional tests, and manual tests?

The percentages cannot simply be added together because the tests overlap. This is a critical step for understanding the level of risk associated with the application under development.
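A small worked example makes the overlap concrete. The line numbers below are invented for illustration; coverage behaves like a set of executed lines, so merging is set union, not addition.

```python
# Invented line numbers illustrating why per-practice coverage
# percentages cannot simply be added together.
TOTAL_LINES    = set(range(1, 101))   # 100 executable lines
unit_cov       = set(range(1, 61))    # unit tests cover lines 1-60  (60%)
functional_cov = set(range(41, 91))   # functional tests cover 41-90 (50%)

def pct(covered):
    """Coverage of a set of lines as a percentage of the whole."""
    return 100.0 * len(covered) / len(TOTAL_LINES)

overlap  = unit_cov & functional_cov  # lines exercised by both practices
true_cov = unit_cov | functional_cov  # merged (aggregate) coverage

assert pct(unit_cov) + pct(functional_cov) == 110.0  # naive sum: impossible
assert pct(true_cov) == 90.0                         # true coverage: 90%
untested = TOTAL_LINES - true_cov                    # lines 91-100 still untested
```

The 20 overlapping lines are counted twice by the naive sum; the merged set reports the true figure and, just as usefully, exposes the lines no practice touched.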

Screenshot of Parasoft DTP reporting and analytics dashboard showing managing change.
Parasoft DTP reporting and analytics dashboard

Accelerating Integration and System Testing With Parasoft Test Automation Tools

Parasoft’s software test automation tools accelerate verification by automating the many tedious aspects of record keeping, documentation, analysis, and reporting.

Icon inside a blue circle showing a white horizontal line with arrows on both ends.

Two-way traceability for all artifacts ensures requirements have code and tests to prove they are being fulfilled. Metrics, test results, and static analysis results are traced to components and vice versa.

Icon inside a blue circle showing white open and close brackets with a forward slash in between to represent code.

Code and test coverage verifies all requirements are implemented and makes sure the implementation is tested as required.

Icon inside a blue circle showing a white outline of a target.

Target and host-based test execution supports different validation techniques as required.

Icon inside a blue circle showing the outline of a small white circle at the center of three other circles, connected to each one with a line representing connected smart devices.

Smart test execution manages change by focusing test runs on only the code that changed and any impacted dependents.

Icon inside a blue circle showing four vertical white lines of varying heights representing a graph.

Reporting and analytics provide insight for making important decisions and keep track of progress. Decision making needs to be based on data collected from the automated processes.

Icon inside a blue circle showing a white outline of a guideline checklist.

Automated documentation generation from analytics and test results supports process and standards compliance.

Icon inside a blue circle showing three white arrows forming a continuous circle.

Standards compliance automation reduces overhead and complexity by automating the most repetitive and tedious processes. The tools keep track of the project history, relating results to requirements, software components, tests, and recorded deviations.

Dark blue banner with image of man talking to woman holding a tablet in hand in a server room.

Elevate your software testing with Parasoft solutions.