Running and writing tests

Finesse contains a large suite of tests. These are run automatically when changes are made to the code and pushed to the git repository, but can also be run manually with pytest.

Note

In the examples below, it is assumed that the pytest command is being run from the root Finesse repository directory.

Pytest

Pytest is a framework for writing tests and provides a number of useful features over the standard library’s unittest. Pytest strongly encourages the use of “fixtures”: functions that set up test resources and provide them to the tests themselves. Fixtures are often used in Finesse testing to set up models, which individual test functions then check. For more information, refer to the Pytest documentation.
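For example, a fixture might build a small model which several tests then inspect. The sketch below is illustrative only: the KatScript snippet and parameter access are assumptions about typical usage, not a prescribed pattern.

import pytest

import finesse


@pytest.fixture
def model():
    """Build a small model for tests to inspect (illustrative KatScript)."""
    model = finesse.Model()
    model.parse(
        """
        laser l1 P=1
        mirror m1 R=0.99 T=0.01
        link(l1, m1)
        """
    )
    return model


def test_mirror_reflectivity(model):
    # Pytest injects the fixture by matching the argument name "model".
    assert float(model.m1.R) == 0.99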

Pytest is installed as part of the development dependencies for Finesse.

Running the tests provided with Finesse

Pytest provides a command-line interface (CLI) for executing tests. To run all of the tests, simply run Pytest:

$ pytest

Note

Pytest is configured in /setup.cfg to run tests in the /tests directory by default.

Pytest’s CLI can run individual tests or combinations of tests by specifying subdirectories, modules or even particular functions. For instance, functional tests can be run with pytest tests/functional. Individual test functions inside a module can be run by adding two colons and the name of the function after the path to the module, e.g. pytest tests/test_thing.py::test_thing.

Run pytest -h for more details.

Displaying output and logs during tests

Output from print statements is normally captured and hidden by pytest, but passing the -s flag forces this output to be displayed. Logging output is also suppressed by default; passing the --log-level argument with the relevant log level (e.g. --log-level=debug) switches it on.
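For example, to show print output and debug-level log messages while running the functional tests:

$ pytest -s --log-level=debug tests/functional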

Types of test

There are two broad categories of test in the test directory: functional and validation. These are described in the following sections.

Functional tests

These tests check the behaviour of individual parts of the code, such as functions, classes and modules. They do not check the physical predictions of Finesse, just that the building blocks are doing what they are supposed to do.

Functional tests may check the behaviour of atomic pieces of code such as functions, methods and classes (usually called unit tests), or check the behaviour of multiple units together (usually called integration tests). These types of test sometimes use mock objects to mimic the behaviour of context-dependent parts of the system in which the code runs, such as the network, user input or databases.
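As a minimal sketch, a unit-level functional test exercises one small piece of code in isolation; the helper function below is purely hypothetical and only stands in for a unit under test.

import numpy as np


def refraction_angle(theta_in, n1, n2):
    """Hypothetical helper (Snell's law), standing in for a unit under test."""
    return np.arcsin(n1 * np.sin(theta_in) / n2)


def test_refraction_angle_normal_incidence():
    # At normal incidence the refracted angle is zero regardless of the indices.
    assert refraction_angle(0.0, 1.0, 1.45) == 0.0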

Validation tests

Validation tests check the correctness of the high-level outputs from Finesse, such as its predicted interferometer behaviour. They can be used, for example, to compare Finesse scripts against analytical models, against the results of other Finesse scripts, or against other simulation tools.
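As a self-contained sketch of this pattern (deliberately avoiding the Finesse API itself), the test below compares a quantity computed numerically, by summing cavity round trips, against its closed-form analytical expression:

import numpy as np
import pytest


def circulating_field(r1, r2, t1, phi, n_roundtrips=200):
    """Cavity field built up numerically by summing successive round trips."""
    roundtrip = r1 * r2 * np.exp(-1j * phi)
    return t1 * sum(roundtrip**n for n in range(n_roundtrips))


@pytest.mark.parametrize("phi", (0.0, 0.3, np.pi / 2, np.pi))
def test_cavity_buildup_matches_analytic(phi):
    r1, r2 = 0.9, 0.95
    t1 = np.sqrt(1 - r1**2)
    numeric = circulating_field(r1, r2, t1, phi)
    # Closed-form expression for the same geometric series.
    analytic = t1 / (1 - r1 * r2 * np.exp(-1j * phi))
    assert abs(numeric) == pytest.approx(abs(analytic), rel=1e-6)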

The validation test directory also contains IPython notebooks. These define more complex validation tests which check behaviour against analytical models. Every notebook in the validation directory is executed on a per-commit basis by BrumSoftTest. See 000_example_validation_notebook.ipynb for an example.

Writing tests

Before starting, it is useful to take a look at existing tests and the Pytest documentation to understand how to write good (and avoid writing bad) tests. More guidance for particular features of Pytest used in the Finesse test suite is given in the following sections.

Fixtures

A powerful feature of Pytest is its so-called fixtures, which allow you to define reusable setup code for use in test functions. By default a fixture is created afresh for every test that requests it, but it can also be scoped to a whole module or package if it is expensive to run. Fixtures can also build upon other fixtures, as shown in the sketch below. See the Pytest documentation for more information.
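A minimal, generic sketch of both ideas follows; the "database" here is just a stand-in for an expensive resource.

import pytest


@pytest.fixture(scope="module")
def database():
    """Created once per module because it is assumed to be expensive to build."""
    return {"mirrors": ["m1", "m2"]}


@pytest.fixture
def mirrors(database):
    """A fixture that builds upon another fixture."""
    return database["mirrors"]


def test_mirror_count(mirrors):
    assert len(mirrors) == 2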

Mock objects

In order to test the functionality of a single unit, it is sometimes necessary to use mock objects to mimic the behaviour of other functions/methods/classes required for the operation of the function under test. Pytest provides tools for managing mock objects in the form of the monkeypatch fixture. This allows you to change attributes of Python objects within a test, with the original state being restored automatically once the test finishes. This can be used, for example, to patch the return values of functions used in tests, so that you can quickly create a particular program state for the test to check without writing lots of setup code.
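For example, the monkeypatch fixture can replace a function whose return value depends on the environment, so that the test result does not; the function under test here is hypothetical.

import os


def cpu_count_message():
    """Hypothetical function under test; its result depends on the host machine."""
    return f"Running on {os.cpu_count()} cores"


def test_cpu_count_message(monkeypatch):
    # monkeypatch swaps the attribute and restores it when the test finishes.
    monkeypatch.setattr(os, "cpu_count", lambda: 4)
    assert cpu_count_message() == "Running on 4 cores"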

Parameterising tests

Pytest provides a parametrize decorator that lets you define sets of values to pass to the test function across multiple calls. This lets you quickly perform the same test with multiple inputs. Furthermore, when you define multiple parametrize decorators, Pytest automatically computes and runs all possible combinations of inputs, allowing you to concisely define dozens or hundreds of inputs.
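For example, two parametrize decorators stacked on the same test run every combination of their inputs:

import numpy as np
import pytest


@pytest.mark.parametrize("angle", (0.0, np.pi / 4, np.pi / 2))
@pytest.mark.parametrize("sign", (1, -1))
def test_sine_is_odd(angle, sign):
    # Pytest runs all 3 x 2 = 6 combinations of these inputs.
    assert np.sin(sign * angle) == pytest.approx(sign * np.sin(angle))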

Test IDs

Particular tests can be run using the pytest CLI:

$ pytest /path/to/test.py::test_function

If the test is parametrised, you can even run it with a particular input by referring to its parameters:

# Assume test_function is parametrised with a "cos(pi)" input...
$ pytest /path/to/test.py::test_function[cos(pi)]

Tests with multiple parametrised inputs can generate long and ugly test IDs in the CLI. This is especially true when test inputs are multiline strings. Test IDs can be assigned to input parameters using pytest.param() and setting its id argument to a descriptive string:

@pytest.mark.parametrize(
    "input_a,input_b",
    (
        pytest.param("a", "b", id="a-and-b"),  # The ID shouldn't contain whitespace.
    )
)
def test_function(input_a, input_b):
    ...

This string is then shown in the CLI by the test runner if that particular test input fails, and can be used to run only that particular test:

$ pytest /path/to/test.py::test_function[a-and-b]