Antares Simulator
Power System Simulator
Here is a description of the automatic testing system, based on Python scripts.
This system performs the following:
- it searches a given directory for studies;
- it runs a simulation on each study found;
- it performs checks on the results once the simulation is completed.
Note that each study found is supposed to contain the definition of the checks to be performed by the scripts after the simulation on that study is completed. To that end, each study is supposed to contain a check-config.json file. This file is built manually for each study.
For now, there are 2 entry points that can be used to run tests automatically.
The first one runs tests as explained above. The second one is a bit different: it does not search for studies, but is given an explicit path to a study, and this study does not contain any check definition in the form of a json file: the checks are explicitly defined and supplied in the tests.
We won't comment on the second script, as understanding the first one should make the second easy to understand.
TO DO: maybe we should think of integrating tests on unfeasible problems into the regular testing system?
Here we examine the second part of the script test_from_json.py. The first part is dedicated to finding studies and storing the definition of the checks to be made on their results after a simulation has run (recall the check-config.json file).
The content of this script is quite short, despite all the tests it has to perform. This is partially due to the power of pytest and its features: fixtures and especially parametrization, which is responsible for running a test many times, each time with a different set of arguments.
In the following, we comment on the content of this script. Lines of this script are numbered so that they can be referred to whenever needed.
Line 1: the following tests are marked as a collection, that is, as belonging to the same category.
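With pytest, marking a whole module this way can be done with a module-level pytestmark variable. The marker name used below (json) is an assumption made for this illustration, not necessarily the one used in the actual script:

```python
import pytest

# Mark every test of this module as belonging to the same collection
# (the marker name "json" is an assumption made for this illustration)
pytestmark = pytest.mark.json
```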
pytest comes with the notion of fixture. Fixtures allow executing a piece of code just before a test runs. To take advantage of a fixture, a test needs to take this fixture as an argument. Fixtures themselves can also be given arguments; we'll see how we do that (in the context of the current testing system) when we talk about parametrization. Fixtures return a result to be used in the test. Let's look at a simple test:
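The snippet below is a minimal illustration (not taken from the actual script) of a fixture and a test that uses it:

```python
import pytest

@pytest.fixture
def my_fixture():
    # this piece of code runs just before the test
    return "some result"

# note: in a real module the test name would have to follow pytest's
# collection convention (usually test_*); my_test is kept to match the text
def my_test(my_fixture):
    # my_fixture holds the result returned by the fixture
    assert my_fixture == "some result"
```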
When my_test executes, it first calls my_fixture, which returns a result. This result can be anything. So it can be a constant (like a string) or an object.
Note that, when a test takes a fixture as an argument, this argument is both what triggers the fixture's execution and what holds the fixture's result.
As a result, the argument can be (and should be) used in the test. We'll see examples of this in the following.
A fixture can be a bit more complex than the previously displayed one : it can be divided in 2 parts. The first would be a setup operation (executed just before the test begins) and the second part would be a teardown operation (executed just before the test ends). In the previous example, the fixture only has a setup part.
In order to supply a fixture with both a setup and a teardown part, we need to use the Python yield keyword. The yield instruction returns the fixture's result back to the test.
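A minimal sketch of such a setup/teardown fixture (purely illustrative, not taken from the actual script):

```python
import pytest

@pytest.fixture
def my_fixture():
    resource = "some result"    # setup part: runs just before the test
    yield resource              # the test runs here, with 'resource' as the fixture's result
    print("cleaning up")        # teardown part: runs just after the test ends
```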
Fixtures can be supplied with parameters. This can be done by giving an argument to the fixture. In the following snippet, the fixture study_path only returns the parameter that it's given.
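In pytest, a parametrized fixture receives its value through the special request object (request.param). The following sketch of study_path is an assumption about the actual implementation, based on the description above:

```python
import pytest

@pytest.fixture
def study_path(request):
    # the fixture simply hands the parameter it received back to the test
    return request.param
```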
We'll see at least another example of such feature later on.
Another trait of fixtures is that they can be nested: a fixture can call other fixtures. Fixture 1 can take fixture 2 as an argument. This means that, when fixture 1 comes to execute, fixture 2 is called before executing fixture 1's body, so fixture 1 has access to fixture 2's result during its execution.
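A small illustrative example (the names fixture_1 and fixture_2 are just placeholders):

```python
import pytest

@pytest.fixture
def fixture_2():
    return "resource built by fixture 2"

@pytest.fixture
def fixture_1(fixture_2):
    # fixture_2 has already been executed here, so its result is available
    # while fixture_1's own body runs
    return f"fixture 1 result, built on top of: {fixture_2}"
```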
Back to script test_from_json.py.
Let's recall line 2 :
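A plausible reconstruction of that line is shown below; json_collector comes from the first part of the script, and the indirect argument (which is what makes pytest pass the first value to the study_path fixture rather than directly to the test) is an assumption:

```python
@pytest.mark.parametrize('study_path, test_check_data', json_collector.pairs(), indirect=['study_path'])
```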
This is where we allow the body of our test to be called multiple times. By "multiple times" we mean that the test will be run with different arguments each time.
In our example, the fixture study_path, which expects an argument, will be passed to the test, supplied with a different argument each time.
Same thing for the test_check_data value: it is not a fixture (just a plain variable), but it will be passed to the test with a different value each time the test runs.
How do we do that?
The first argument of the parametrize decorator ('study_path, test_check_data') names the test's arguments that change each time the test runs.
The second argument is a list (json_collector.pairs()). Each element of the list is a pair (a tuple with 2 elements):
- the path to a study,
- an object containing the data needed to perform checks on that study's results.
So, a test is run for each element of this list: it receives the first value of the pair as its first argument, and the second value of the pair as its second argument. This means that, for each test, the fixture study_path receives as argument a path to a study, and the variable test_check_data is supplied with an object containing all the data necessary to perform a check.
Note: be aware that 2 pairs (study path, checks to do) can have the same study path: several checks can be made on the same study's results.
TO CHECK IN CODE: for a given study, can we have many checks requiring as many simulation runs? My guess is yes, but this is to be checked.
With the previous explanations in mind, we can describe what happens when tests are run.
So the test is run multiple times due to parametrization. In fact, the test body is executed as many times as there are elements in the list json_collector.pairs(), that is, as many times as there are studies spotted by the script test_from_json.py (see the first part of the script, not commented in this doc).
Here we talk about line 3 of script test_from_json.py.
For a given run of the test's body, that is for each study previously found in a directory, some fixtures are first run.
Let's look at the content of the check_runner fixture:
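The fixture could be sketched as follows; the name of the object it builds (check_handler) comes from the text further below, while check_handler_class and the exact way the simulation and resultsRemover fixtures are combined are assumptions:

```python
import pytest

@pytest.fixture
def check_runner(simulation, resultsRemover):
    # setup part: build the object that will run the simulation and the checks
    # (check_handler_class is a hypothetical name)
    check_handler = check_handler_class(simulation, resultsRemover)
    yield check_handler
    # this last line runs once the current test has ended (teardown part)
    check_handler.teardown()
```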
As we can see, this fixture is a setup/teardown fixture, and it calls 2 other fixtures (simulation and resultsRemover) before running. So, when check_runner runs, the simulation and resultsRemover fixtures are executed first, then the body of check_runner executes up to the yield and hands its result back to the test.
Note that the last line (the one after the yield) will only be executed when the current test ends.
This concerns line 4 of script test_from_json.py.
Here, a list of checks to perform on the results of a simulation on the current study is created.
This list is built from the description of the checks contained in test_check_data.
For that, create_checks is called with natural information for a check: the path to the study, used to fetch the things to check, and the kind of things to check (found in test_check_data). The leftmost argument of create_checks is the simulation previously prepared by the check_runner fixture. This simulation is needed for special kinds of checks: the ones that need the return code of Antares Simulator right after it has run.
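Put together, line 4 probably looks something like the following; the get_simulation() accessor and the exact argument order are assumptions, only the names create_checks, study_path and test_check_data come from the text:

```python
# hypothetical reconstruction of line 4
checks = create_checks(check_runner.get_simulation(), study_path, test_check_data)
```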
Note that every check in that list is an instance of a class derived from the more general class check_interface. This parent class forces every child class to implement a run() method and a name() method, so that, when the list is traversed for any reason, a call to run() or name() does not fail.
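Such a constraint could be expressed in Python with an abstract base class; this is only an illustration of the idea, not necessarily the actual implementation:

```python
from abc import ABC, abstractmethod

class check_interface(ABC):
    # every concrete check has to provide these two methods
    @abstractmethod
    def run(self):
        ...

    @abstractmethod
    def name(self):
        ...
```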
Note also that, at this stage, the simulation has not been run yet, but will be run at the next line of test_from_json.py.
This is about line 5 of script test_from_json.py.
This is the place where the script runs:
- the simulation on the current study,
- the checks on the simulation results.
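Line 5 itself is probably a single call on the result of the check_runner fixture, something like this hypothetical reconstruction:

```python
check_runner.run(checks)
```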
Indeed, the run() method of the object returned by the check_runner fixture contains these 2 instructions.
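A sketch of what this run() method could contain; the class name and exact statements are assumptions, but in substance it runs the simulation and then goes through the checks:

```python
class check_handler_class:
    # hypothetical class behind the check_runner fixture (see its sketch above)
    def __init__(self, simulation, results_remover):
        self.simulation = simulation
        self.results_remover = results_remover

    def run(self, checks):
        # the 2 instructions, in substance:
        self.simulation.run()     # run Antares Simulator on the current study
        for check in checks:      # then run every check on the simulation results
            check.run()
```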
As we already saw, the check_runner fixture is a setup/teardown fixture. This means that, when each test associated with a study ends, the teardown part of this fixture is run.
Looking at the content of the check_handler.teardown() method, we can see what it does:
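Its exact content is not reproduced here; given the resultsRemover fixture it was built with, it presumably cleans up the simulation results so that the study is left in its initial state. A sketch, still on the hypothetical check_handler_class:

```python
class check_handler_class:
    # ... (see the sketch of __init__ and run() above)

    def teardown(self):
        # presumably removes the simulation results, using the result of the
        # resultsRemover fixture (the run() call on it is an assumption)
        self.results_remover.run()
```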
Doc: clarify whether a check in check-config.json can be composed of several sub-checks, and how it works in this case. Code: