Advanced Testing Techniques¶
Unit and integration tests are great, but they are often not enough for large and complex codebases. There are several other advanced testing techniques that are increasingly being adopted as standard practice across organisations. We discuss a few of them below.
Mocking¶
Mock: verb,
- to tease or laugh at in a scornful or contemptuous manner
- to make a replica or imitation of something
Mocking
Replace a real object with a pretend object, which records how it is called, and can assert if it is called wrong
Mocking frameworks¶
- C: CMocka
- C++: googletest
- Python: unittest.mock
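For instance, with Python's unittest.mock, a mock both records how it was called and lets us assert on those calls afterwards (a minimal sketch; the mock_db name below is purely illustrative):

from unittest.mock import Mock

mock_db = Mock(name="database")
mock_db.save("alice")                      # call the pretend object

mock_db.save.assert_called_with("alice")   # passes: the call matches
mock_db.save.assert_called_with("bob")     # raises AssertionError: called "wrong"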
Recording calls with mock¶
Mock objects record the calls made to them:
from unittest.mock import Mock
function = Mock(name="myroutine", return_value=2)
function(1)
function(5, "hello", a=True)
function.mock_calls
The arguments of each call can be recovered:
name, args, kwargs = function.mock_calls[1]
args, kwargs
Mock objects can return different values for each call:
function = Mock(name="myroutine", side_effect=[2, "xyz"])
function(1)
function(1, "hello", {'a': True})
We expect an error if there are no return values left in the list:
function()
Using mocks to model test resources¶
Often we want to write tests for code which interacts with remote resources. (E.g. databases, the internet, or data files.)
We don't want our tests to actually interact with the remote resource, as this would mean that our tests could fail due to, for example, a lost internet connection.
Instead, we can use mocks to assert that our code does the right thing in terms of the messages it sends: the parameters of the function calls it makes to the remote resource.
For example, consider the following code that downloads a map from the internet:
# sending requests to the web is not fully supported on jupyterlite yet, and the
# cells below might error out on the browser (jupyterlite) version of this notebook
import requests
def map_at(lat, long, satellite=False, zoom=12,
           size=(400, 400)):
    base = "https://static-maps.yandex.ru/1.x/?"
    params = dict(
        z = zoom,
        size = ",".join(map(str, size)),
        ll = ",".join(map(str, (long, lat))),
        lang = "en_US")
    if satellite:
        params["l"] = "sat"
    else:
        params["l"] = "map"
    return requests.get(base, params=params)
london_map = map_at(51.5073509, -0.1277583)
from IPython.display import Image
%matplotlib inline
Image(london_map.content)
We would like to test that it is building the parameters correctly. We can do this by mocking the requests object. We need to temporarily replace a method in the library with a mock. We can use "patch" to do this:
from unittest.mock import patch
with patch.object(requests, 'get') as mock_get:
    london_map = map_at(51.5073509, -0.1277583)
    print(mock_get.mock_calls)
Our tests then look like:
def test_build_default_params():
    with patch.object(requests, 'get') as mock_get:
        default_map = map_at(51.0, 0.0)
        mock_get.assert_called_with(
            "https://static-maps.yandex.ru/1.x/?",
            params={
                'z': 12,
                'size': '400,400',
                'll': '0.0,51.0',
                'lang': 'en_US',
                'l': 'map'
            }
        )
test_build_default_params()
That was quiet, so it passed. When I'm writing tests, I usually modify one of the expectations to something 'wrong', just to check it's not passing "by accident", run the tests, and then change it back!
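For instance, a deliberately broken copy of the test above (the name and the wrong zoom value are just for illustration) should fail loudly:

def test_build_default_params_broken():
    with patch.object(requests, 'get') as mock_get:
        default_map = map_at(51.0, 0.0)
        mock_get.assert_called_with(
            "https://static-maps.yandex.ru/1.x/?",
            params={
                'z': 99,          # deliberately wrong zoom level
                'size': '400,400',
                'll': '0.0,51.0',
                'lang': 'en_US',
                'l': 'map'
            }
        )

Running test_build_default_params_broken() raises an AssertionError, which confirms the passing test above is comparing real values rather than passing "by accident".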
Testing functions that call other functions¶
def partial_derivative(function, at, direction, delta=1.0):
    f_x = function(at)
    x_plus_delta = at[:]
    x_plus_delta[direction] += delta
    f_x_plus_delta = function(x_plus_delta)
    return (f_x_plus_delta - f_x) / delta
We want to test that the above function does the right thing. It is supposed to compute the derivative of a function of a vector in a particular direction.
E.g.:
partial_derivative(sum, [0,0,0], 1)
How do we assert that it is doing the right thing? With tests like this:
from unittest.mock import MagicMock
def test_derivative_2d_y_direction():
    func = MagicMock()
    partial_derivative(func, [0, 0], 1)
    func.assert_any_call([0, 1.0])
    func.assert_any_call([0, 0])
test_derivative_2d_y_direction()
We made our mock a "Magic Mock" because otherwise the mock results f_x_plus_delta and f_x can't be subtracted:
MagicMock() - MagicMock()
Mock() - Mock()
Static type hints¶
Although static type hints are not actual "tests," they can be checked under test runs (or CI pipelines) using static typing tools and libraries. Checking that the codebase is statically typed and that the types are correct can help find silent bugs, dead code, and unreachable statements, which are often missed during unit and integration testing.
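For instance, a type checker can flag a call that no test ever exercises (a small hypothetical example, not taken from the notebook's code):

def mean(values: list[float]) -> float:
    return sum(values) / len(values)

# No test calls mean() with a string, so this bug stays silent at test time,
# but mypy reports an incompatible argument type for the call below.
average = mean("hello")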
Detecting dead code¶
For example, let's consider the following piece of code:
%%writefile static_types_example.py
def smart_square(a: float | int | bool | str) -> int | float:
    if isinstance(a, (float, int)):
        return a * a
    elif isinstance(a, (str, bool)):
        try:
            result = float(a) * float(a)
            return result
        except ValueError:
            raise ValueError(f"a should be of type float/int or convertible to float; got {type(a)}")
    elif not isinstance(a, (float, int, bool, str)):
        raise NotImplementedError
The code looks good enough, squaring the argument if it is of type float or int and attempting to convert it to float if it is not. It looks like the code is clean, and testing it gives us no errors either:
%%writefile test_static_types_example.py
import pytest
from static_types_example import smart_square
def test_smart_square():
    assert smart_square(2) == 4
    assert isinstance(smart_square(2), int)
    assert smart_square(2.) == 4.
    assert isinstance(smart_square(2.), float)
    assert smart_square("2") == 4.
    assert smart_square(True) == 1.
    with pytest.raises(ValueError, match="float/int or convertible to float; got <class 'str'>"):
        smart_square("false")
%%bash
pytest test_static_types_example.py
Even though the tests look good, we can notice one peculiar behavior: we cannot test the NotImplementedError, because it is not reachable. Either the if or the elif condition will always be met, given that the argument type cannot be anything other than float, int, bool, or str; hence, the code will never reach the final elif branch.
This is called "unreachable" or "dead" code, and having it in your codebase is a bad practice. How do we detect it? Static types!
Let's run mypy with --warn-unreachable:
%%bash
mypy static_types_example.py --warn-unreachable
The type checker points out that the final elif branch is, in fact, unreachable. This could either be a bug - code that should be reachable but for some reason is not - or just dead code - code that will never be reached and can be removed. In our case it is dead code, and it can be removed safely, given that we explicitly tell users what types of argument should be passed in.
%%writefile static_types_example.py
def smart_square(a: float | int | bool | str) -> int | float:
    if isinstance(a, (float, int)):
        return a * a
    elif isinstance(a, (str, bool)):
        try:
            result = float(a) * float(a)
            return result
        except ValueError:
            raise ValueError(f"a should be of type float/int or convertible to float; got {type(a)}")
%%bash
mypy static_types_example.py --warn-unreachable
No errors!
Huge real-life codebases always benefit from adding static types and checking them using tools like mypy. These checks can be automated in CI using pre-commit hooks (for instance, the mypy pre-commit hook) and pre-commit.ci.
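As an alternative (or a complement) to a pre-commit hook, the type check can be folded into the regular test run; this is a minimal sketch that assumes mypy is installed and reuses the file name from above:

import subprocess
import sys

def test_static_types():
    # Run mypy on the module and fail the test suite if it reports any errors.
    result = subprocess.run(
        [sys.executable, "-m", "mypy", "static_types_example.py", "--warn-unreachable"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stdout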
Property based testing¶
Property-based testing is a testing method that automatically generates and tests a wide range of inputs, often missed by tests written by humans.
Hypothesis is a modern property-based testing implementation for Python. It generates example inputs for ordinary unit tests. In a nutshell, Hypothesis can parametrize tests, running the test function over a wide range of matching data drawn from a "search strategy" established by the library. Through this parametrization, Hypothesis can catch bugs that might go unnoticed when test inputs are written by hand.
Property-based testing is being adopted for software written in various languages, especially in industry, to improve the effectiveness of the tests written for their software suites.
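As a minimal sketch (assuming the hypothesis package is installed; the property tested here is just an illustration, not part of this notebook), a Hypothesis test is an ordinary test function decorated with a search strategy:

from hypothesis import given
from hypothesis import strategies as st

@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    # Hypothesis calls this test many times with generated lists of integers,
    # including edge cases such as the empty list and very large values.
    once = sorted(xs)
    assert sorted(once) == once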
Mutation testing¶
Mutation testing checks the effectiveness of your tests by making minor modifications to the codebase and running the test suite. If the tests still pass after the mutation testing framework has modified the code, they were not specific or good enough.
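To illustrate the idea by hand (a real tool such as mutmut applies and reverts such changes automatically; the function and test below are hypothetical):

def is_adult(age):
    return age >= 18           # original code

def is_adult_mutant(age):
    return age > 18            # "mutant": >= replaced by >

def test_is_adult():
    # This test passes for both the original and the mutant, because it never
    # checks the boundary value 18 -- the surviving mutant reveals the gap.
    assert is_adult(30)
    assert not is_adult(5)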
mutmut is a mutation testing library for Python that has recently been adopted in the testing frameworks of large projects. Other libraries that perform a similar task include:
- pytest-testmon: pytest plug-in which automatically selects and re-executes only tests affected by recent changes
- MutPy (unmaintained): mutation testing tool for Python 3.3+ source code
- Cosmic Ray: mutation testing for Python
Overall, mutation testing is a very powerful way to check whether your tests are actually working, and this form of testing is especially beneficial for projects with a large test suite.