Running and writing tests

Source code: tests

Finesse contains a large suite of tests configured to use Pytest. These are run automatically when changes are made to the code and pushed to the git repository, but can also be run manually with the console command pytest. Pytest is installed automatically as part of the development dependencies for Finesse.


Pytest is a framework for writing tests and provides a number of useful features over the standard library’s unittest. Pytest strongly encourages the use of “fixtures”: functions that set up test resources and provide them to the tests themselves. Fixtures are often used in Finesse testing to set up models, which individual test functions then check.
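As an illustration of the fixture pattern, a fixture can build a shared resource that test functions receive simply by naming it as an argument. The `build_model` helper and its contents below are hypothetical stand-ins, not real Finesse API:

```python
import pytest

def build_model():
    # Hypothetical stand-in for constructing a Finesse model.
    return {"components": ["laser", "mirror"]}

@pytest.fixture
def model():
    # Pytest calls this function and passes its return value to any
    # test that lists `model` as an argument.
    return build_model()

def test_has_laser(model):
    # The test receives the ready-made model and just checks it.
    assert "laser" in model["components"]
```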

Refer to the pytest documentation for more information.

Running the tests provided with Finesse

Pytest provides a command-line interface (CLI) for executing tests. To run all of the tests, simply run Pytest (you must have installed Finesse in developer mode):

$ pytest


Pytest is configured in pyproject.toml to run tests in the tests directory by default.

Pytest’s CLI can run individual or combinations of tests by specifying specific subdirectories, modules or even particular functions with particular inputs (see Test IDs).
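For example, selections can be narrowed from a directory down to a single parametrised case (the paths and names below are illustrative, not real Finesse test locations):

```shell
# Run everything under a subdirectory:
$ pytest tests/functional

# Run a single module, then a single function within it:
$ pytest tests/functional/test_example.py
$ pytest tests/functional/test_example.py::test_something

# Run one parametrised case by its test ID (quoted, since brackets
# and parentheses are special to the shell):
$ pytest "tests/functional/test_example.py::test_something[case-id]"
```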

Displaying output and logs during tests

Output from print statements is normally discarded by pytest unless a particular test fails. Pytest can be forced to display this output as it occurs by passing the -s flag.

Log messages are hidden for all tests, even those that fail, unless the --log-level argument to the pytest command is set to the desired log level (e.g. --log-level=debug). As with standard output, though, this only displays log messages for tests that fail. To display log messages live as the tests run, set --log-cli-level instead.
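The three behaviours described above correspond to the following invocations:

```shell
# Show print() output as it happens:
$ pytest -s

# Capture log messages at DEBUG level (shown for failing tests only):
$ pytest --log-level=debug

# Stream log messages live while the tests run:
$ pytest --log-cli-level=debug
```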

Types of test

Finesse shuns the traditional labelling of tests as “unit” or “integration” tests. Instead there are two broad categories of test in the Finesse test directory: functional and validation. These are described in the following sections.

Functional tests

These tests check the behaviour of individual parts of the Finesse code, such as functions, classes and modules. They do not check the physical predictions of Finesse, just that the building blocks are doing what they are supposed to do.
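A minimal functional test simply exercises one building block directly. Here `add` is a hypothetical stand-in for a real Finesse unit:

```python
def add(a, b):
    # Hypothetical stand-in for a real Finesse building block.
    return a + b

def test_add():
    # A functional test checks the building block in isolation,
    # not any physical prediction.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
```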

Functional tests may check the behaviour of atomic pieces of code like functions, methods, classes, etc. (usually called unit tests) or check the behaviour of multiple units together (usually called integration tests). These types of test sometimes use mock objects to mimic the behaviour of context-dependent parts of the system in which the code runs, such as the network, user input, or databases.

Validation tests

Validation tests check the correctness of the high-level outputs from Finesse, such as its predicted interferometer behaviour. Validation tests can be used, for example, to compare scripts using Finesse to analytical models, to compare the results of two separate Finesse scripts, or to compare Finesse to other simulation tools.
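Independent of Finesse specifics, the validation pattern is to compare a numerical result against a known analytical answer within a tolerance. A generic sketch (the integral here is purely illustrative, not a Finesse computation):

```python
import math

# Numerically integrate sin(x) over [0, pi] with the trapezoidal rule
# and compare against the analytical answer, 2.
n = 100_000
h = math.pi / n
# Endpoint terms vanish since sin(0) = sin(pi) = 0.
numeric = h * sum(math.sin(i * h) for i in range(1, n))
analytic = 2.0
assert abs(numeric - analytic) < 1e-6
```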

The validation test directory also contains IPython notebooks. These define more complex validation tests which check behaviour against analytical models, and can be run manually to verify expected behaviour.

Writing tests

Before starting, it is useful to take a look at existing tests and the Pytest documentation to understand how to write good (and avoid writing bad) tests. More guidance for particular features of Pytest used in the Finesse test suite is given in the following sections.


A powerful feature of Pytest is so-called fixtures, which let you define reusable setup code for use in test functions. By default a fixture is created afresh for every test that uses it, but fixtures can also be scoped to whole modules or packages if they are expensive to create. Fixtures can also build upon other fixtures.
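Scoping and fixture composition can be sketched as follows; `make_resource` is a hypothetical stand-in for expensive setup:

```python
import pytest

calls = {"count": 0}

def make_resource():
    # Hypothetical expensive setup (e.g. building a large model).
    calls["count"] += 1
    return {"ready": True}

@pytest.fixture(scope="module")
def resource():
    # scope="module" means pytest builds this once per test module
    # rather than once per test function.
    return make_resource()

@pytest.fixture
def configured(resource):
    # Fixtures can request other fixtures, building on their results.
    return dict(resource, configured=True)
```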

Mock objects

In order to test the functionality of a single unit, it is sometimes necessary to use mock objects to mimic the behaviour of other functions, methods or classes that the function under test depends on. Pytest provides tools for managing mock objects in the form of the monkeypatch fixture. This allows you to change attributes or fields of Python objects passed to each test, with the old state restored automatically after the test finishes. This can be used, for example, to patch the return values of functions used in tests, so that you can quickly create a particular program state for the test to check without lots of setup code.
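A sketch of the monkeypatch pattern; the `Sensor` class is hypothetical, standing in for any dependency that is awkward to use in a test:

```python
import pytest

class Sensor:
    def read(self):
        # Imagine this talks to real hardware.
        raise RuntimeError("no sensor attached")

def classify(sensor):
    # Code under test, which depends on the sensor.
    return "hot" if sensor.read() > 25 else "cold"

def test_classify(monkeypatch):
    sensor = Sensor()
    # Replace the hardware call with a canned value; pytest restores
    # the original method automatically when the test finishes.
    monkeypatch.setattr(sensor, "read", lambda: 30)
    assert classify(sensor) == "hot"
```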

Parameterising tests

Pytest provides a parametrize decorator that lets you define sets of values to pass to the test function across multiple calls. This lets you quickly perform the same test with multiple inputs. Furthermore, when you define multiple parametrize decorators, Pytest automatically computes and runs all possible combinations of inputs, allowing you to concisely define dozens or hundreds of inputs.
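A short sketch of stacked parametrize decorators (the test itself is a trivial placeholder):

```python
import pytest

@pytest.mark.parametrize("x", [0, 1, 2])
@pytest.mark.parametrize("y", [10, 20])
def test_addition_commutes(x, y):
    # Stacking two parametrize decorators runs all 3 x 2 = 6 combinations.
    assert x + y == y + x
```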

Property-based testing

Sometimes it’s useful not to hard-code specific inputs for tests, but to define what types of input the test should allow, and let some external code generate the inputs. The external code can then generate example inputs, run the test, and try to find edge cases that fail. This is called property-based testing or “fuzzing”, and Finesse uses the Hypothesis library to provide this functionality.
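The underlying idea (a property checked against many generated inputs) can be sketched without Hypothesis using plain `random`; Hypothesis automates the generation and additionally shrinks failing examples to minimal reproductions. The run-length coder below is a hypothetical function under test:

```python
import random

def run_length_encode(s):
    # Hypothetical function under test.
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# The *property*: decoding an encoding returns the original input,
# for any input -- not just a few hand-picked cases.
for _ in range(200):
    s = "".join(random.choice("ab") for _ in range(random.randrange(10)))
    assert run_length_decode(run_length_encode(s)) == s
```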

Hypothesis provides a set of standard input strategies such as integers, floats, lists, etc., and the ability to define custom types. A set of custom types for Finesse tests can be found in /tests/testutils/.

Hypothesis runs each test with a limited number of generated inputs before giving up. The default is set low enough that running the whole test suite does not take too long, but to increase the number of examples Hypothesis tries per test, specify the --hypothesis-max-examples argument to pytest.

Test IDs

See also

The Selecting tests based on their node ID section of the Pytest documentation

Particular tests can be run using the pytest CLI:

$ pytest /path/to/

If the test is parametrised, you can even run a particular input to the test by referring to its parameters:

# Assume test_function is parametrised with a "cos(pi)" input...
$ pytest "/path/to/[cos(pi)]"

Tests with multiple parametrised inputs can generate long and ugly test IDs in the CLI. This is especially true when test inputs are multiline strings. Test IDs can be assigned to input parameters using pytest.param(), setting its id argument to a descriptive string:

@pytest.mark.parametrize(
    "input_a, input_b",
    [
        pytest.param("a", "b", id="a-and-b"),  # The ID shouldn't contain whitespace.
    ],
)
def test_function(input_a, input_b):
    ...

This string is then shown in the CLI by the test runner if that particular test input fails, and can be used to run only that particular input:

$ pytest "/path/to/[a-and-b]"

Test markers

Tests of a particular category can be given “markers”, which allows them to be included or excluded from a particular test suite. Some tests, such as those that use hypothesis, are automatically marked.
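Markers are applied with the `pytest.mark` decorator; `slow` below is a hypothetical marker name, and custom markers should be registered (e.g. in pyproject.toml) so pytest does not warn about them:

```python
import pytest

# `slow` is a hypothetical custom marker; running `pytest -m slow`
# would select this test, and `pytest -m "not slow"` would skip it.
@pytest.mark.slow
def test_expensive():
    assert sum(range(1000)) == 499500
```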

Available markers can be listed with pytest --markers.

Tests with a particular marker can be run or not run using e.g.:

# Run hypothesis tests.
$ pytest -m hypothesis
# Don't run hypothesis tests (run everything else).
$ pytest -m "not hypothesis"