If you are developing within the C++ Pulse code base, or you would just like to run Pulse and view its data, you will need to be familiar with our [Test Suite Tool](Test-Suite). The following sections step you through the tests we provide and how to run them.

Quick Start
-----------

From a command terminal, in the `<pulse/install/bin>` directory, run the following command:
```bash
# On Windows
> run DebugRun

# On Linux
$ ./run.sh DebugRun
```
This will run the instructions in the [DebugRun.config](https://gitlab.kitware.com/physiology/engine/blob/master/data/config/DebugRun.config) file located in your `<pulse/source>/data/config` directory.

Executing the suite with this config file will do the following:

1. Create an engine and run the BasicStandard scenario, writing out the scenario's data requests to a `<pulse/bin>/test_results/scenarios/patient/BasicStandardResults.csv` file.
2. Compare the generated BasicStandardResults.csv file with the BasicStandardResults.csv file located at `<pulse/bin>/verification/scenarios/patient/BasicStandardResults.csv`. The comparison checks every value between the two files; if the percent difference for a value is greater than the config file's `PercentDifference` variable (usually 2%), an error is logged. Errors are tracked per column of values. A `<pulse/bin>/test_results/scenarios/patient/BasicStandardResultsReport.json` file will be written out reporting any errors.
```json
{
  "TestSuite": [{
    "Name": "BasicStandardResultsReport",
    "Performed": true,
    "Tests": 1,
    "TestCase": [{
      "Name": "BasicStandardResultsReport"
    }]
  }]
}
```
3. Generate a jpg line plot for each of the data columns. The plots will be located in the `<pulse/bin>/test_results/scenarios/patient/BasicStandardResults` folder. Columns that matched the baseline file with no errors will generate a jpg outlined in green, while columns that had comparison errors against the baseline file will generate a jpg outlined in red.
<table border="2" align="center">
  <tr>
    <th colspan="2">Example of passing (green border) and failing (red border) plots.</th>
  </tr>
  <tr>
    <td align="left" valign="middle">
      <img src="/uploads/9634b3e6e99e44320987624b4c414d1c/MeanArterialPressurevsTime.jpg" width="300" height="200">
    </td>
    <td align="right" valign="middle">
      <img src="/uploads/c473668aa75aee60a76f1230a458d06b/MeanArterialPressurevsTime.jpg" width="300" height="200">
    </td>
  </tr>
</table>
The red line is generated from your current build.<br>
The black line is the expected line, generated from the baseline/verification CSV file.

If there is no baseline CSV file, the test suite will plot the data from the generated CSV with a white border and a single black line for data values over time.
4. A `<pulse/bin>/test_results/DebugRun.html` report is generated, listing the pass/fail status of each run in the config file.

<img src="uploads/2b401003ca76417a12c366f592106e1b/image.png" width="50%"/>
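To make the comparison in step 2 concrete, the per-value check can be sketched as follows. This is an illustrative Python sketch, not the suite's actual Java implementation, and the real percent-difference formula may differ:

```python
def percent_difference(expected, computed):
    """Symmetric percent difference between two values (0 when equal)."""
    if expected == computed:
        return 0.0
    denom = max(abs(expected), abs(computed))
    return abs(expected - computed) / denom * 100.0

def failing_rows(baseline_column, result_column, tolerance_pct=2.0):
    """Row indices where the result deviates from the baseline beyond tolerance."""
    return [i for i, (e, c) in enumerate(zip(baseline_column, result_column))
            if percent_difference(e, c) > tolerance_pct]
```

A column with any failing rows would be flagged in the json report and plotted with a red outline; a column with none passes and is plotted with a green outline.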
Next, let's dig into how this works and what we can run.

Verification Test Configurations
--------------------------------

The Pulse test suite comprises a Java-based set of tools that can execute something in Pulse, evaluate the generated results, generate line plots of the generated data vs. time, and construct a report describing any errors encountered during the test. All test data (CSV files, plots, html reports, etc.) will be generated under the `<pulse/bin>/test_results` folder.

The config file format uses key-value pairs to define what tests to run. These config files can be found in `<pulse/source/data/config>`.
```
# Run Flags
ReportName=Debugging Test Summary
# You can set this to false if you already have results and just want to post process them
ExecuteTests=true
# Post process the results
PlotResults=true
# When comparing the generated values with baseline values,
# how different can they be before we throw an error
PercentDifference=2.0
# The suite can execute multiple tests at the same time using threads
# You can set the number of threads we use; a negative number
# means we will use the total number of cores available minus that number
Threads=-1
# Executors are the Java classes that can execute a test run type
# These should not be changed
Executor=com.kitware.pulse.testing.ScenarioTestDriver
# Macros define common execution flags in shorthand
# These should not be changed
Macro ScenarioTest=ScenarioTestDriver FastPlot Baseline=scenarios/ Computed=./test_results/scenarios

# Execution runs are the input and the execution flags, including the executor for the run
# You can add/remove lines to specify what tests the suite will run
patient/BasicStandard.json=ScenarioTest
```
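One detail worth spelling out: per the comment above, a negative `Threads` value is interpreted relative to the machine's core count. The resolution can be sketched like this (illustrative Python; this is an inference from the config comment, not the tool's actual code):

```python
import os

def resolve_threads(threads_setting):
    """Resolve the Threads config value to a worker-thread count.

    A positive value is used as-is; per the config comment, a negative
    value means the total number of available cores minus that number.
    (Inferred from the comment, not taken from the suite's Java source.)
    """
    cores = os.cpu_count() or 1
    if threads_setting < 0:
        # e.g. Threads=-1 on an 8-core machine -> 7 worker threads
        return max(1, cores + threads_setting)
    return max(1, threads_setting)
```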
A test run is defined in one line:

```
patient/BasicStandard.json=ScenarioTest
```

The key (the left side of this statement) defines the unit test name or the scenario file name. It is a relative path from the `<pulse/bin>/verification` directory. The value (the right side of this statement) defines execution flags via the Macro `ScenarioTest`, which defines the `Baseline` directory as `scenarios`.
The following config files are provided in `<pulse/source>/data/config`:

- **CDMUnitTests** - A list of all common data model tests. These test various generic algorithms implemented for use in physiology modeling. Some of these tests create a CSV file, some do not. A json report is directly generated by the test if no CSV file is provided.

- **EngineUnitTests** - A list of all Pulse engine tests. These test various physiology-related algorithms implemented in Pulse. Some of these tests create a CSV file, some do not. A json report is directly generated by the test if no CSV file is provided.

- **SystemVerification** - These scenarios create a CSV file for every physiological system with all data related to each system's validation data.

- **PatientVerification** - These scenarios create a CSV file for every patient with all data related to patient validation data.

- **DrugPKVerification** - These scenarios create a CSV file for every drug with all data related to drug PK validation data. The PD effects are turned off in Pulse so we can validate how the drug flows throughout the body (PK effects) over an extended time.

- **LongVerificationScenarios** - Any scenarios simulating 6 to 12 hours or more are contained here.

- **ScenarioVerification** - All other scenarios provided by Pulse are contained in this file. Some scenarios have validation data related to them and some do not.

You can execute any of these config files via our `run` tool.
```bash
$ ./run.sh CDMUnitTests
$ ./run.sh EngineUnitTests
$ ./run.sh SystemVerification
$ ./run.sh PatientVerification
$ ./run.sh DrugPKVerification
$ ./run.sh LongVerificationScenarios
$ ./run.sh ScenarioVerification
```
### Verification Reporting

For any test that generates a CSV file, there will be a baseline CSV file under the `<pulse/bin>/verification` directory. (If you do not have these, execute `run updateBaselines` from a terminal in the `<pulse/bin>` directory.) Once the suite executes a test with a CSV file, a post processor will perform the following tasks:

- Compare the generated CSV file to the baseline/verification CSV file.
- Write a json report listing any generated data that was not within 2% of the data found in the baseline/verification CSV file.
- Generate line plots for each data request in a folder under the `<pulse/bin>/test_results` folder.
- Create an HTML report that summarizes the results of each test.

This is demonstrated above in the `Quick Start` section.
### Verification Analysis

Generally, we look at the HTML report, identify any failing tests, then examine the plots of those failing tests to figure out at what point in time the engine deviated from the expected results. Once we know when things start to deviate, we can identify which action was called around that time that introduced the deviation. From there we can jump into the model mathematics to see if the deviation is acceptable or not.
### Development Approach

Running the full test suite, and even the ScenarioVerification.config alone, can take quite a long time. As we develop new functionality in Pulse, we often edit the DebugRun.config file to contain the subset of tests associated with the feature we are adding to Pulse. For example, if we are modifying the Pulse hemorrhage model, we would open ScenarioVerification.config, copy out several hemorrhage-related scenarios, and put them in our DebugRun.config file (and comment out BasicStandard). Now, as we update our model, we can quickly run a few scenarios via `run DebugRun` and view how our updates are affecting the results.

<img src="uploads/580494f8a1c1fe8651aca52b5e14aeb5/image.png" width="50%"/>

Once we are happy with how our model updates are performing with these scenarios, we can run the full suite of tests, via `run ScenarioVerification`, to see how this update affects all other scenarios.
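For the hemorrhage example above, a temporarily edited DebugRun.config might look like the following. The hemorrhage scenario path below is a hypothetical placeholder; copy the real run lines out of ScenarioVerification.config:

```
# Comment out the default run while debugging
#patient/BasicStandard.json=ScenarioTest

# Hypothetical hemorrhage scenario lines copied from ScenarioVerification.config
showcase/Hemorrhage.json=ScenarioTest
```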
Validation
----------

Rebasing
... | ... | |