Content from Introduction to Unit Testing


Last updated on 2025-08-05

Overview

Questions

  • What is unit testing?
  • Why do we need unit tests?
  • What makes source code difficult to unit test?

Objectives

  • Define the key aspects of a good unit test (isolated, testing minimal functionality, fast, etc).
  • Understand the key anatomy of a unit test in any language.
  • Explain the benefit of unit tests on top of integration/e2e tests.
  • Understand when to run unit tests.

What is unit testing?


Unit testing is a way of verifying the validity of a code base by testing its smallest individual components, or units.

“If the parts don’t work by themselves, they probably won’t work well together” – (Thomas and Hunt, 2019, The pragmatic programmer, Topic 51).

Several key aspects define a unit test. They should be…

  • Isolated - Does not rely on any other unit of code within the repository.
  • Minimal - Tests only one unit of code.
  • Fast - Run on the scale of ms or s.
Callout

Other forms of testing

There are other forms of testing, such as integration testing, in which two or more units of a code base are tested together to verify that they work with each other, i.e. that they are correctly integrated. However, today we are focusing on unit tests: many of these larger tests are written using the same test tools and frameworks, so by starting with unit testing we make progress with both.

What does a unit test look like?

All unit tests tend to follow the same pattern of Given-When-Then.

  • Given we are in some specific starting state
    • Units of code almost always have some inputs. These inputs may be scalars to be passed into a function, but they may also be an external dependency such as a database, file or array which must be allocated.
    • This database, file or array memory must exist before the unit can be tested. Hence, we must set up this state in advance of calling the unit we are testing.
  • When we carry out a specific action
    • This is the step in which we call the unit of code to be tested, such as a call to a function or subroutine.
    • We should limit the number of actions being performed here to ensure it is easy to determine which unit is failing in the event that a test fails.
  • Then some specific event/outcome will have occurred.
    • Once we have called our unit of code, we must check that what we expected to happen did indeed happen.
    • This could mean comparing a scalar or vector quantity returned from the called unit against some expected value. However, it could be something more complex such as validating the contents of a database or outputted file.
Challenge

Challenge 1: Write a unit test in pseudocode.

Assume you have a function reverse_array which reverses the order of an allocated array. Write a unit test in pseudocode for reverse_array using the pattern above.

! Given
Allocate the input array `input_array`
Fill `input_array`, for example with `(1,2,3,4)`
Allocate the expected output array `expected_output_array`
Fill `expected_output_array` with the correct expected output, i.e., `(4,3,2,1)`

! When
Call `reverse_array` with `input_array`

! Then
for each element in `input_array`:
   Assert that the corresponding element of `expected_output_array` matches that of `input_array`
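
For concreteness, here is a hedged sketch of how this pseudocode might look as a plain Fortran test program, with no framework involved. The in-place `reverse_array` shown in the contains section is only a stand-in implementation so that the example is self-contained.

F90

program test_reverse_array
    implicit none

    integer :: input_array(4), expected_output_array(4)
    integer :: i

    ! Given: an input array and the output we expect
    input_array = [1, 2, 3, 4]
    expected_output_array = [4, 3, 2, 1]

    ! When: we call the unit under test (assumed to reverse in place)
    call reverse_array(input_array)

    ! Then: every element matches the expected output
    do i = 1, size(input_array)
        if (input_array(i) /= expected_output_array(i)) then
            error stop "reverse_array produced an unexpected value"
        end if
    end do

    print *, "test_reverse_array passed"

contains

    ! Stand-in implementation so the sketch compiles on its own
    subroutine reverse_array(arr)
        integer, intent(inout) :: arr(:)
        arr = arr(size(arr):1:-1)
    end subroutine reverse_array

end program test_reverse_array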

When should unit tests be run?

A major benefit of unit tests is the ability to identify bugs at the earliest possible stage. Therefore, unit tests should be run frequently throughout the development process. Passing unit tests give you and your collaborators confidence that changes to your code aren’t modifying the previously expected behaviour, so run your unit tests…

  • if you make a change locally
  • if you raise a merge request
  • if you plan to do a release
  • if you are reviewing someone else’s changes
  • if you have recently installed your code into a new environment
  • if your dependencies have been updated

Basically, all the time.

Do we really need unit tests?

Yes!

You may be thinking that you don’t require unit tests as you already have some well-defined end-to-end test cases which demonstrate that your code base works as expected. However, consider the case where this end-to-end test begins to fail. The message for this failure is likely to be something along the lines of

Expected my_special_number to be 1.234 but got 5.678

If you have a comprehensive understanding of your code, perhaps this is all you need. However, assuming the newest feature that caused this failure was not written by you, it’s going to be difficult to identify what is going wrong without some lengthy debugging.

Now imagine the situation where this developer added unit tests for their new code. When running these unit tests, you may see something like

test_populate_arrays Failed: Expected 1 for index 1 but got 0

This is much clearer. We immediately have an idea of what could be going wrong and the unit test itself will help us determine the problematic code to investigate.

Challenge

Challenge 2: Unit test bad practices

Take a look at 1-intro-to-unit-tests/challenge in the exercises repository.

A solution is provided in 1-intro-to-unit-tests/solution.


Content from Introduction to Unit Testing in Fortran


Last updated on 2025-08-05

Overview

Questions

  • Why is there a lack of unit tests in existing Fortran codes?
  • Why do we need to write unit tests for Fortran code?

Objectives

  • Understand the need for unit testing Fortran code.
  • Identify Fortran code which is problematic for unit testing.
  • Understand steps to make Fortran more unit testable.

Why your Fortran may not already be unit tested


Many Fortran codes don’t yet have unit tests. This can be for a number of reasons…

  • Lack of resources (time and/or money).
    • Funding is earmarked for furthering science.
  • The code is not able to be unit tested in its current state.
    • Code is written in Fortran 77 or uses outdated paradigms which make unit testing near impossible.
  • Lack of skills.
    • Developers don’t have the skills to implement unit tests.

What’s the matter with Fortran?


Assuming we have the time, money and the skills to implement some unit tests in a Fortran project, what about the code makes it a challenge to do so?

Various bad practices can make it difficult to unit test Fortran including…

  • Global variables
  • Large, multipurpose procedures
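
For example, a procedure whose inputs and outputs live in module-level globals cannot be tested without first reproducing, and afterwards resetting, that hidden state. The sketch below is hypothetical; the names are invented for illustration.

F90

module simulation_globals
    implicit none
    ! Global state: every procedure that touches these is coupled to them
    integer :: nsteps
    real :: temperature(100)
end module simulation_globals

subroutine advance_temperature()
    use simulation_globals
    implicit none
    integer :: step

    ! Hard to unit test: nothing is passed in or returned, so a test must
    ! set up the module variables before the call and inspect them afterwards
    do step = 1, nsteps
        temperature = temperature * 0.99
    end do
end subroutine advance_temperature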
Challenge

Challenge 1: Identify bad practice for unit testing Fortran

Take a look at 2-intro-to-fortran-unit-tests/challenge in the exercises repository.

A solution is provided in 2-intro-to-fortran-unit-tests/solution.

How to start improving


Unit tests do not need to be added to a Fortran project in one huge implementation. The best approach is to add unit tests incrementally:

  1. Ensure an end-to-end test exists
    • This can be written in a different language if needed. Pytest is often a good choice.
  2. Identify a procedure with minimal dependencies
    • If one can’t be found, extract one into a procedure.
  3. Ensure end-to-end test covers this procedure
    • Try to break the procedure and check your test fails as expected.
  4. Remove all non-local state in the procedure
  5. Check end-to-end test passes
  6. Write unit test(s) for procedure
  7. Repeat steps 1-6

This incremental approach will, over time, become smoother. As more unit tests are added, the code becomes easier to test because its testing infrastructure becomes more mature and feature rich.
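
As a hypothetical sketch of steps 2 and 4, a routine that previously pulled its inputs from module-level globals can be refactored so that everything it needs arrives through its argument list. A unit test can then allocate its own data and call the routine directly.

F90

! Before: the routine read nsteps and temperature from a module.
! After the refactor, all state is passed explicitly as dummy arguments.
subroutine advance_temperature(temperature, nsteps)
    implicit none
    real, intent(inout) :: temperature(:)
    integer, intent(in) :: nsteps
    integer :: step

    do step = 1, nsteps
        temperature = temperature * 0.99
    end do
end subroutine advance_temperature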

Callout

Working effectively with legacy code

An untested codebase is effectively legacy code. Therefore, a great resource for us is Working Effectively with Legacy Code (Feathers, 2004).

If you don’t have time to read the entire book, there is a good summary of the key points in the blog post The key points of Working Effectively with Legacy Code.


Content from Fortran Unit Test Syntax


Last updated on 2025-08-05

Overview

Questions

  • What is the syntax of writing a unit test in Fortran?
  • How do I build my tests with my existing build system?

Objectives

  • Able to write a unit test for a Fortran procedure with test-drive, veggies and/or pFUnit.
  • Understand the similarities between each framework and where they differ.

What frameworks will we look at?


  • Veggies
    • Integrated with FPM and CMake.
  • test-drive
    • This is the least featured of the frameworks.
    • Requires more boilerplate than the other frameworks.
    • Integrated with FPM and CMake.
  • pFUnit
    • Most feature rich framework.
    • Requires writing tests in a non-standard file format which is then converted to F90 before compilation.
    • Integrated with CMake.

The shared structure of a test module


All three frameworks share a basic structure for their test modules.

F90

module test_something
    ! use veggies|testdrive|funit
    ! use the src to be tested
    implicit none

    ! Define types to act as test parameters (and test case for pfunit)
contains

    ! Define a test suite (collection of tests) to be returned from a procedure

    ! Define the actual test execution code which will call the src and execute assertions

    ! Define constructors for your derived types (test parameters/cases)
end module test_something

Let’s dive into the syntax


We will use the game of life example from challenge 1 of the last episode to highlight the difference in syntax between the three frameworks.

Define types to act as test parameters (and test case for pfunit)

This step is similar for all three frameworks and uses standard Fortran syntax to define a derived type. The key differences are:

  • Whether the derived type extends another type or not.
  • The required type-bound procedures.
  • Whether a test case derived type is needed.

F90

type, extends(input_t) :: my_test_params
    integer :: input, expected_output
end type my_test_params

F90

type :: my_test_params
    integer :: input, expected_output
end type my_test_params

F90

@testParameter
type, extends(AbstractTestParameter) :: my_test_params
    integer :: input, expected_output
contains
    procedure :: toString => my_test_params_toString
end type my_test_params

@TestCase(testParameters={my_test_suite()}, constructor=my_test_params_to_my_test_case)
type, extends(ParameterizedTestCase) :: my_test_case
    type(my_test_params) :: params
end type my_test_case

Define a test suite (collection of tests) to be returned from a procedure

In this section we define our suite of tests to test the unit in question. This can return a single test but it’s likely that there are multiple scenarios and edge cases we would like to test. Therefore, we return an array of tests rather than a single test.

For Veggies, we define a function which returns a Veggies test derived type (test_item_t). This is built by passing an array of test parameters, representing the different test scenarios, together with a generic test function, in this case check_my_src_procedure. This test function is where we actually call our src procedure and carry out assertions (see the next section).

F90

function my_test_suite() result(tests)
    type(test_item_t) :: tests

    type(example_t) :: my_test_data(1)

    ! Given input is 1, output is 2
    my_test_data(1) = example_t(my_test_params(1, 2))

    tests = describe( &
        "my_src_procedure", &
        [ it( &
            "given some inputs, when I call my_src_procedure, Then we get the expected output", &
            my_test_data, &
            check_my_src_procedure &
        )] &
    )
end function my_test_suite

For test-drive, we define a subroutine which populates an array of tests called the testsuite. To build this testsuite we provide additional subroutines which set up the test parameters and then call the test function.

F90

subroutine my_test_suite(testsuite)
    type(unittest_type), allocatable, intent(out) :: testsuite(:)

    testsuite = [ &
        new_unittest("my_src_procedure: Given input is 1, output is 2", test_my_procedure_with_input_1) &
    ]
end subroutine my_test_suite

!> Given input is 1, output is 2
subroutine test_my_procedure_with_input_1(error)
    type(error_type), allocatable, intent(out) :: error

    type(my_test_params) :: params

    params%input = 1
    params%expected_output = 2

    call check_my_src_procedure(error, params)
end subroutine test_my_procedure_with_input_1

For pFUnit, we define a function which returns an array of our test parameter derived-type.

F90

function my_test_suite() result(params)
    type(my_test_params), allocatable :: params(:)

    params = [ &
        my_test_params(1, 2) & ! Given input is 1, output is 2
    ]
end function my_test_suite

Define the actual test execution code which will call the src and execute assertions

This is where we actually call our src procedure and carry out assertions.

For Veggies, we define a function which takes a veggies input_t type and returns a veggies result_t type. As this input_t type is generic, unlike our concrete parameter type, we perform some additional verification with a select type construct to ensure we have been passed the expected test parameter type.

F90

function check_my_src_procedure(params) result(result_)
    class(input_t), intent(in) :: params
    type(result_t) :: result_

    integer :: actual_output

    select type (params)
    type is (my_test_params)
        call my_src_procedure(params%input, actual_output)

        result_ = assert_equal(params%expected_output, actual_output, "Unexpected output from my_src_procedure")
    class default
        result_ = fail("Didn't get my_test_params")

    end select

end function check_my_src_procedure

For test-drive, we define a subroutine which takes an error and an instance of our test parameters derived-type.

F90

subroutine check_my_src_procedure(error, params)
    type(error_type), allocatable, intent(out) :: error
    class(my_test_params), intent(in) :: params

    integer :: actual_output
    
    call my_src_procedure(params%input, actual_output)

    call check(error, params%expected_output, actual_output, "Unexpected output from my_src_procedure")
    if (allocated(error)) return
end subroutine check_my_src_procedure
Callout

We must check whether error has been allocated after every check, i.e.

F90

call check(...)
if (allocated(error)) return

call check(...)
if (allocated(error)) return

For pFUnit, we define a subroutine which takes an instance of our test case derived-type and is annotated with the pFUnit annotation @Test.

F90

@Test
subroutine TestMySrcProcedure(this)
    class (my_test_case), intent(inout) :: this

    integer :: actual_output

    call my_src_procedure(this%params%input, actual_output)

    @assertEqual(this%params%expected_output, actual_output, "Unexpected output from my_src_procedure")
end subroutine TestMySrcProcedure

Define constructors for your derived types (test parameters/cases)

For Veggies and test-drive, this step is not always required but can be useful to simplify populating multiple different test cases. For example, if we wished to test a subroutine which performs some operations on a large matrix we could create a constructor to populate this matrix with random values. We would then need to call this constructor with different inputs to generate multiple test cases.

If we want to add a constructor for these types, it must be declared at this point as an interface to the derived type.

Shown here is how to create a deliberately simple constructor. This one would not actually be necessary, as the compiler already provides a default structure constructor, but the same syntax applies to more complex derived types. First, declare your constructor,

F90

interface my_test_params
    module procedure my_test_params_constructor 
end interface my_test_params

Then, implement your constructor,

F90

contains
    function my_test_params_constructor(input, expected_output) result(params)
        integer, intent(in) :: input, expected_output

        type(my_test_params) :: params

        params%input = input
        params%expected_output = expected_output
    end function my_test_params_constructor
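
Returning to the large-matrix scenario mentioned above, a more realistic constructor might take only the matrix dimensions and fill the input with random values. This is a hedged sketch: it assumes a hypothetical matrix_test_params type with allocatable real components input_matrix and expected_output_matrix, and uses a transpose purely as an illustrative relationship between input and expected output.

F90

    function matrix_test_params_constructor(nrow, ncol) result(params)
        integer, intent(in) :: nrow, ncol

        type(matrix_test_params) :: params

        allocate(params%input_matrix(nrow, ncol))

        ! Fill the input with random values; the expected output is derived
        ! from it (here, simply its transpose)
        call random_number(params%input_matrix)
        params%expected_output_matrix = transpose(params%input_matrix)
    end function matrix_test_params_constructor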

For pFUnit, we are required to define two functions

  • A conversion from test parameters to a test case
  • A conversion from test parameters to a string

F90

function my_test_params_to_my_test_case(testParameter) result(tst)
    type (my_test_case) :: tst
    type (my_test_params), intent(in) :: testParameter

    tst%params%input = testParameter%input
    tst%params%expected_output = testParameter%expected_output
end function my_test_params_to_my_test_case

function my_test_params_toString(this) result(string)
    class (my_test_params), intent(in) :: this
    character(:), allocatable :: string

    character(len=80) :: buffer

    write(buffer,'("Given ",i4," we expect to get ",i4)') this%input, this%expected_output
    string = trim(buffer)
end function my_test_params_toString
Challenge

Challenge: Write Fortran unit tests in multiple frameworks.

A solution is provided in 3-fortran-unit-test-syntax/solution.

Content from Understanding test output


Last updated on 2025-07-30

Overview

Questions

  • How do I know if my tests are failing?
  • How do I fix a failing test?

Objectives

  • Understand where the name and description of a test is defined for each framework.
  • Understand the success and failure output of a test for each framework.
  • Able to filter which tests are run at a time.
  • Able to follow the output of a failing test through to the cause of the failure.

Test output


Each of the three frameworks prints the output of its tests to the terminal in a different format. We have a lot of control over the contents of this output, whether that’s for a failing test or a passing test.

Several aspects of the output can be defined by us for all frameworks,

  • The name and description of each test.
  • The message printed in the event of a failed assertion.

Defining the name and description of a test


Each of the three frameworks offers the ability to give a name to a test and to add a description which gives a more detailed explanation of what exactly is being tested.

The name and description of a Veggies test are defined within the testsuite function.

F90

function my_test_suite() result(tests)
    type(test_item_t) :: tests
    type(example_t) :: my_test_data(1)
    
    ! Given input is 1, output is 2
    my_test_data(1) = example_t(my_test_params(1, 2))

    tests = describe( &
        "my_src_procedure", &
        [ it( &
            "with specific inputs causes something else specific to happen", &
            my_test_data, &
            check_my_src_procedure &
        )] &
    )
end function my_test_suite

The first argument given to describe defines an overarching name which is applied to all the tests within it, in this case "my_src_procedure". Each it then defines its own descriptor which tells us exactly which scenario we are testing, in this case "with specific inputs causes something else specific to happen". This results in the following output.

BASH

$ fpm test
Test that
    my_src_procedure
        with specific inputs causes something else specific to happen

A total of 1 test cases

All Passed
Took 4.47e-4 seconds

With test-drive, we can name both a testsuite and an individual test within a testsuite. The testsuite name is applied within the program. In the example below we are giving the testsuite defined by test_my_src_procedure_testsuite the name my_src_procedure.

F90

!...
type(testsuite_type), allocatable :: testsuites(:)

testsuites = [ &
    new_testsuite("my_src_procedure", test_my_src_procedure_testsuite) &
]
!...

Within the test suite test_my_src_procedure_testsuite we can then give names to each test. In the example below we have defined two tests "a special test case" and "another special test case".

F90

subroutine test_my_src_procedure_testsuite(testsuite)
    type(unittest_type), allocatable, intent(out) :: testsuite(:)

    testsuite =[ &
        new_unittest("a special test case", test_transpose_special_case), &
        new_unittest("another special test case", test_transpose_other_special_case) &
    ]
end subroutine test_my_src_procedure_testsuite

This results in the following output

BASH

$ fpm test
# Running testdrive tests suite
# Testing: my_src_procedure
  Starting a special test case ... (1/2)
       ... a special test case [PASSED]
  Starting another special test case ... (2/2)
       ... another special test case [PASSED]

With pFUnit, we name a test within the CMakeLists.txt. In the example below we define a test with the name pfunit_my_src_procedure_tests.

CMAKE

find_package(PFUNIT REQUIRED)
enable_testing()

# Filter out the main.f90 files. We can only have one main() function in our tests
set(PROJ_SRC_FILES_EXEC_MAIN ${PROJ_SRC_FILES})
list(FILTER PROJ_SRC_FILES_EXEC_MAIN EXCLUDE REGEX ".*main.f90")

# Create library for src code
add_library (sut STATIC ${PROJ_SRC_FILES_EXEC_MAIN})

# List all  test files
file(GLOB
  test_srcs
  "${PROJECT_SOURCE_DIR}/test/pfunit/*.pf"
)

# my_src_procedure tests
set(test_my_src_procedure ${test_srcs})
list(FILTER test_my_src_procedure INCLUDE REGEX ".*test_my_src_procedure.pf")

add_pfunit_ctest (pfunit_my_src_procedure_tests
  TEST_SOURCES ${test_my_src_procedure}
  LINK_LIBRARIES sut # your application library
  )

The other aspect of a pFUnit test in which we can add a descriptor is the string printed to describe an individual test case. This is defined within the toString function. In the example below, we directly print a description contained within the parameter set itself.

F90

@testParameter
type, extends(AbstractTestParameter) :: test_my_src_procedure_params
    integer :: input, output
    character(len=100) :: description
contains
    procedure :: toString => test_my_src_procedure_params_toString
end type test_my_src_procedure_params
!..
function test_my_src_procedure_testsuite() result(params)
    !...
    ! Define a set of input params within the testsuite function
    params(1) = test_my_src_procedure_params(input, output, "Some description")
    !...
end function test_my_src_procedure_testsuite
!..
function test_my_src_procedure_params_toString(this) result(string)
    class(test_my_src_procedure_params), intent(in) :: this
    character(:), allocatable :: string

    string = trim(this%description)
end function test_my_src_procedure_params_toString
!..

This results in the output

BASH

$ ctest 
Test project /Users/connoraird/work/fortran-unit-testing-exercises/episodes/4-debugging-a-broken-test/challenge-1/build-cmake
    Start 2: pfunit_transpose_tests
1/1 Test #2: pfunit_transpose_tests ...........   Passed    0.24 sec

100% tests passed, 0 tests failed out of 2

Total Test time (real) =   0.55 sec

Notice that this is the output from ctest: if there are no test failures, only a short summary is printed. In the event of a failure we can get more detail via the --output-on-failure flag

BASH

$ ctest --output-on-failure
    Start 1: pfunit_my_src_procedure_tests
1/1 Test #1: pfunit_my_src_procedure_tests ...........***Failed  Error regular expression found in output. Regex=[Encountered 1 or more failures/errors during testing]  0.01 sec
 

 Start: <test_my_src_procedure.TestMySrcProcedure[Some description][Some description]>
. Failure in <test_my_src_procedure.TestMySrcProcedure[Some description][Some description]>
F   end: <test_my_src_procedure.TestMySrcProcedure[Some description][Some description]>

Time:         0.000 seconds
  
Failure
 in: 
test_my_src_procedure.TestMySrcProcedure[Some description][Some description]
  Location: 
[test_my_src_procedure.pf:59]
ArrayAssertEqual failure:
      Expected: <2.00000000>
        Actual: <1.00000000>
    Difference: <1.00000000> (greater than tolerance of 0.999999975E-5)
  
 FAILURES!!!
Tests run: 1, Failures: 1, Errors: 0
, Disabled: 0
STOP *** Encountered 1 or more failures/errors during testing. ***


0% tests passed, 1 tests failed out of 1

Total Test time (real) =   0.02 sec

The following tests FAILED:
          1 - pfunit_my_src_procedure_tests (Failed)
Errors while running CTest
Discussion

Challenge 1: Rename a test and improve its output.

Using one of the previous exercises we’ve looked at, try to rename a test in each of the three frameworks. Can you improve the information outputted in the event of a test failure?

Filtering tests


Each of the three frameworks offer the ability to filter which tests run. This can be useful when debugging a failing test in order to reduce the noise on the terminal screen, especially if there are many tests failing and you wish to tackle them one at a time.

Veggies comes with a built-in mechanism for filtering tests via the CLI flag -f.

-f string, --filter string    Only run cases or collections whose
                              description matches the given regular
                              expression. This option may be provided
                              multiple times to filter repeatedly before
                              executing the suite.

Using the example above, we could filter for this specific test with the following command

SH

fpm test -- -f "my_src_procedure" -f "specific inputs"

test-drive does not come with a mechanism for filtering individual tests out of the box. However, we are able to add this functionality ourselves by implementing a custom test runner. The example provided below allows running a single testsuite or an individual test.

F90

program test_main
    use testdrive, only : run_testsuite, new_testsuite, testsuite_type, &
            & select_suite, run_selected, get_argument

    use test_my_src_procedure, only : test_my_src_procedure_testsuite

    implicit none

    type(testsuite_type), allocatable :: testsuites(:)

    testsuites = [ &
        new_testsuite("my_src_procedure", test_my_src_procedure_testsuite) &
    ]

    call run_tests(testsuites)

contains

    subroutine run_tests(testsuites)
        use, intrinsic :: iso_fortran_env, only : error_unit

        type(testsuite_type), allocatable, intent(in) :: testsuites(:)

        integer :: stat, is
        character(len=:), allocatable :: suite_name, test_name
        character(len=*), parameter :: fmt = '("#", *(1x, a))'

        stat = 0

        call get_argument(1, suite_name)
        call get_argument(2, test_name)

        write(error_unit, fmt) "Running testdrive tests suite"
        if (allocated(suite_name)) then
            is = select_suite(testsuites, suite_name)
            if (is > 0 .and. is <= size(testsuites)) then
                if (allocated(test_name)) then
                    write(error_unit, fmt) "Suite:", testsuites(is)%name
                    call run_selected(testsuites(is)%collect, test_name, error_unit, stat)
                    if (stat < 0) then
                        error stop 1
                    end if
                else
                    write(error_unit, fmt) "Testing:", testsuites(is)%name
                    call run_testsuite(testsuites(is)%collect, error_unit, stat)
                end if
            else
                write(error_unit, fmt) "Available testsuites"
                do is = 1, size(testsuites)
                    write(error_unit, fmt) "-", testsuites(is)%name
                end do
                error stop 1
            end if
        else
            do is = 1, size(testsuites)
                write(error_unit, fmt) "Testing:", testsuites(is)%name
                call run_testsuite(testsuites(is)%collect, error_unit, stat)
            end do
        end if

        if (stat > 0) then
            write(error_unit, '(i0, 1x, a)') stat, "test(s) failed!"
            error stop 1
        end if
    end subroutine run_tests

end program test_main

With this in place, as long as we run the test executable itself, we can then filter with the following command

BASH

/path/to/test/exec my_src_procedure "a special test case"

If we run with ctest, we are limited to ctest’s filtering mechanism.

pFUnit leverages CTest’s mechanism to filter tests.
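
For example, CTest's -R flag selects tests whose name matches a regular expression, so the pFUnit test defined earlier could be run on its own with the following command.

BASH

ctest -R pfunit_my_src_procedure --output-on-failure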

When building test-drive and veggies with CMake, to maintain the ability to run our tests individually, we can add named tests to ctest. To do this, we can add the following to the CMakeLists.txt.

CMAKE

# Create a list of tests
set(
  tests
  "my_src_procedure"
)
#...
# Define test executable and Link library 
#...
# Define tests using the veggies test executable
foreach(t IN LISTS tests)
  add_test(NAME "veggies_${t}" COMMAND "test_${PROJECT_NAME}-veggies" "-f" "${t}")
endforeach()

# Or define tests using the test-drive executable
foreach(t IN LISTS tests)
  add_test(NAME "testdrive_${t}" COMMAND "test_${PROJECT_NAME}-test-drive" "${t}" WORKING_DIRECTORY "${CMAKE_SOURCE_DIR}")
endforeach()

Debugging a failing test


As with the output of a passing test, the output of a failing test differs depending on the framework used to write the tests. As shown above, the information we get from a test's output is highly configurable. The more effort we put in when writing tests, the easier they will be to debug should they fail. For example, it’s clear which of the following options will be easier to debug should the assertion fail.

F90

! This will not be very clear
do row = 1, nrow
    do col = 1, ncol
        call check(error, params%expected_output_matrix(row, col), actual_output(row, col), &
            "Actual and expected output matrices did not match")
        if (allocated(error)) return
    end do
end do

! This is much better
do row = 1, nrow
    do col = 1, ncol
        write(failure_message,'(a,i1,a,i1,a,F3.1,a,F3.1)') "Unexpected value for output(", row, ",", col, "), got ", &
            actual_output(row, col), " expected ", params%expected_output_matrix(row, col)
        call check(error, params%expected_output_matrix(row, col), actual_output(row, col), failure_message)
        if (allocated(error)) return
    end do
end do
Challenge

Challenge 2: Debug and fix a failing test.

Take a look at the 4-debugging-a-broken-test/challenge-1 README.md in the exercises repository.

A solution is provided in README-solution.md.

Content from Testing parallel code


Last updated on 2025-08-05

Overview

Questions

  • How do I unit test a procedure which makes MPI calls?
  • How do I easily test different numbers of MPI ranks?
  • How do I test a procedure which uses OMP directives?
  • How do I easily test different numbers of OMP threads?

Objectives

  • Understand what is different when testing parallel vs serial code.

What’s the difference?


Depending on the parallelisation tool and strategy employed, the implementation of parallel code can be very different to that of serial code. This is especially true for code which utilises the message passing interface (MPI). These codes almost always contain some functionality in which processes, or ranks, communicate by exchanging messages. This message passing is often complex and will always benefit from testing.

There is added complexity when testing MPI code compared to serial code, as the logical path through the code changes depending on the number of ranks with which the code is executed. Therefore, it is important that we test with a range of rank counts. This requires controlling the number of ranks running the source code and is not something we want to implement ourselves, which limits the tools available to us. Of the frameworks we have looked at, pFUnit is currently the only one which supports testing MPI code, so we will focus on pFUnit for this section.
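
As a minimal sketch of why this matters (the routine and its decomposition are hypothetical), the branch taken below depends on both the rank and the total number of ranks, so running tests with a single rank count would leave some paths unexercised.

F90

subroutine report_neighbours(comm)
    use mpi
    implicit none
    integer, intent(in) :: comm
    integer :: rank, nranks, ierr

    call MPI_Comm_rank(comm, rank, ierr)
    call MPI_Comm_size(comm, nranks, ierr)

    if (nranks == 1) then
        print *, "rank", rank, "has no neighbours"   ! serial path
    else if (rank == 0 .or. rank == nranks - 1) then
        print *, "rank", rank, "has one neighbour"   ! boundary ranks
    else
        print *, "rank", rank, "has two neighbours"  ! interior ranks
    end if
end subroutine report_neighbours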

Tips for writing testable MPI code


Where possible, separate calls to the MPI library into units (subroutines or functions).

If a procedure does not contain any calls to the MPI library, then it can be tested with a serial unit test. Therefore, separating MPI calls into their own units makes for a simpler test suite for most of your logic. Only procedures with MPI library calls will require the more complex parallel pFUnit tests.

Pass the MPI communicator information into each procedure to be tested.

If we pass the MPI communicator into a procedure, we can define this to be whatever we wish in our tests. This allows us to use the communicator provided by pFUnit or some other communicator specific to our problem.

Creating types to wrap this information along with any other MPI specific information (neighbour ranks, etc) can be a convenient approach.
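
A hedged sketch of both tips together (the type and procedure names are invented for illustration): the communicator and related information are wrapped in a derived type which the production code receives as an argument, so a test can construct it from whichever communicator pFUnit provides via getMpiCommunicator().

F90

module mpi_env_mod
    use mpi
    implicit none

    ! Illustrative wrapper for the MPI information a procedure needs
    type :: mpi_env_t
        integer :: comm            ! communicator used for all MPI calls
        integer :: rank, nranks    ! this process's rank and the total count
    end type mpi_env_t

contains

    ! All MPI calls go through env%comm rather than MPI_COMM_WORLD,
    ! so a unit test can pass in the communicator created by pFUnit
    subroutine global_sum(env, field)
        type(mpi_env_t), intent(in) :: env
        real, intent(inout) :: field(:)
        integer :: ierr

        call MPI_Allreduce(MPI_IN_PLACE, field, size(field), MPI_REAL, MPI_SUM, env%comm, ierr)
    end subroutine global_sum

end module mpi_env_mod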

Syntax of writing MPI-enabled pFUnit tests

Firstly, we must change how we define our test parameters:

  • We now use MPITestParameter instead of AbstractTestParameter.
    • MPITestParameter inherits from AbstractTestParameter and provides an additional parameter in its constructor which corresponds to the number of processors for which a particular test should be run.
  • We can’t know for certain the rank of each process for the pFUnit communicator until the test case runs. Therefore, we now need to build arrays of input parameters with the rank of a process matching the index of the parameter array. For example, rank 0 would access index 1 of the input array during testing, rank 1 would access index 2 and so on. See below for an example.

F90

@testParameter(constructor=new_exchange_boundaries_test_params)
type, extends(MPITestParameter) :: my_test_params
    integer, allocatable :: input(:), expected_output(:)
contains
    procedure :: toString => my_test_params_toString
end type my_test_params

We therefore need to update how we populate our test parameters to take into account the rank indexing:

F90

function my_test_suite() result(params)
    type(my_test_params), allocatable :: params(:)
    integer, allocatable :: input(:), expected_output(:)
    integer :: max_number_of_ranks

    max_number_of_ranks = 2
    allocate(params(max_number_of_ranks))
    allocate(input(max_number_of_ranks))
    allocate(expected_output(max_number_of_ranks))

    ! Tests with one rank
    input(1) = 1
    expected_output(1) = 2
    params(1) = my_test_params(1, input, expected_output)

    ! Tests with two ranks
    !     rank 0
    input(1) = 1
    expected_output(1) = 1
    !     rank 1
    input(2) = 1
    expected_output(2) = 1
    params(2) = my_test_params(2, input, expected_output)
end function my_test_suite

We also need to change how we define our test case:

  • We now use MPITestCase instead of ParameterizedTestCase
    • MPITestCase provides several helpful methods for us to use whilst testing
      • getProcessRank() returns the rank of the current process allowing per rank selection of inputs and expected outputs.
      • getMpiCommunicator() returns the MPI communicator created by pFUnit to control the number of ranks per test.
      • getNumProcesses() returns the number of MPI ranks for the current test.

F90

@TestCase(testParameters={my_test_suite()}, constructor=my_test_params_to_my_test_case)
type, extends(MPITestCase) :: my_test_case
    type(my_test_params) :: params
end type my_test_case

Finally, we ensure each process accesses the correct rank index parameters during the test

F90

@Test
subroutine TestMySrcProcedure(this)
    class (my_test_case), intent(inout) :: this

    integer :: actual_output, rank_index

    rank_index = this%getProcessRank() + 1

    call my_src_procedure(this%params%input(rank_index), actual_output)

    @assertEqual(this%params%expected_output(rank_index), actual_output, "Unexpected output from my_src_procedure")
end subroutine TestMySrcProcedure
Challenge

Challenge 1: Testing MPI parallel code

Take a look at 5-testing-parallel-code/challenge in the exercises repository.

A solution is provided in 5-testing-parallel-code/solution.