Commit 03468b07 authored by Maxime Hubert's avatar Maxime Hubert

* Update validate-execution command with a `--score` flag #11 (see the invocation sketch below)
* Update validate-system command, and add a `--strict` flag #12 (see the invocation sketch below)
* Improved documentation for each command #14
* New _free disk_ metric in resources monitoring #15
* Bug fixes
parent 131a01b3
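The `--score` and `--strict` flags from the changelist above surface on the command line roughly as follows. These are hypothetical invocations; every path is a placeholder:

```
# Validate an execution and also run the ActEV scorer on it
actev validate-execution -o output.json -a activity-index.json -f file-index.json \
    -r reference.json -R scorer-result/ --score

# Validate the system layout; --strict exits with an error if the system is invalid
actev validate-system --strict
```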
stages:
- build
- test
- release
- documentation
- build-test
- test-baseline
build:
stage: build
test-abstract-cli:
stage: build-test
image: python:3.5-stretch
script:
- python3 -m pip install -r ./requirements.txt
- python3 -m pip install -e .
- actev -h
- python3 -m unittest discover -s test/
test-baseline:
stage: test-baseline
image: python:3.5-stretch
script:
@@ -13,3 +21,5 @@ build:
- python3 -m pip install -e .
- actev -h
- actev validate-system
+only:
+- /^baseline/
## Summary
(Summarize the bug encountered concisely)
## Steps to reproduce
(How one can reproduce the issue - this is very important)
## Example Project
(If possible, please create an example project here on GitLab.com that exhibits the problematic behavior, and link to it here in the bug report)
(If you are using an older version of GitLab, this will also determine whether the bug has been fixed in a more recent version)
## What is the current bug behavior?
(What actually happens)
## What is the expected correct behavior?
(What you should see instead)
## Relevant logs and/or screenshots
(Paste any relevant logs - please use code blocks (```) to format console output,
logs, and code as it's very hard to read otherwise.)
## Possible fixes
(If you can, link to the line of code that might be responsible for the problem)
(Don't forget to add a priority label and other relevant labels.)
/label ~Bug
- [ ] Includes a summary of the purpose of the MR and its commits
- [ ] Links to any issues that this MR addresses
- [ ] Includes tests
- [ ] (If needed) Documentation was updated
- [ ] Passes CI
- [ ] [When ready for code review] Label `Review` added (to both issue and MR)
- [ ] Code reviewer assigned to the merge request
- [ ] Issue label changed from `Review` to `Ready` when the MR is accepted by the code reviewer
# Instructions
### Developer
1. Open an MR at any time when working on an issue. The source branch should be your issue branch (naming irrelevant) and the target branch should be `develop`.
2. Make sure to include the label `Doing` on *both* the issue and MR.
3. Address the issue and make sure:
   * your code is readable and maintainable
   * your code includes tests
   * you updated any relevant documentation
   * your code passes the CI (automatic, look for :white_check_mark:)
   * you included a summary of your changes for the code reviewer
   * you updated the progress in the MR checklist
4. Change the `Doing` label to `Review` on both the issue and MR (issue labels and MR labels are separate!).
5. Assign a code reviewer.
6. (If the code reviewer asks) Return to step 3, make changes, and send back for review.
### Reviewer
Your role is to be a second set of eyes checking that everything is as it should be.
This is also an opportunity to share insights on code readability, brevity, etc.
The code review is not an evaluation.
1. Check that:
* you understand what this issue addresses
* the CI pipeline passes
* tests are present in the MR
* the code is readable and you can follow the reasoning
* any documentation was updated to reflect the changes
2. When appropriate, suggest actionable changes using MR comments or inline code comments
3. Asking for clarification is also O.K.
4. When you feel like the MR is ready, use the "Merge" button with the "delete issue branch" option (or delete the branch separately)
5. Change the Issue label from `Review` to `Ready`
1.2.0 - 07.01.19
================
* Update validate-execution command with a `--score` flag https://gitlab.kitware.com/actev/diva_evaluation_cli/issues/11
* Update validate-system command, and add a `--strict` flag https://gitlab.kitware.com/actev/diva_evaluation_cli/issues/12
* Improved documentation for each command
* New _free disk_ metric in resources monitoring
* Bug fixes
1.1.9 - 05.23.19
================
......
@@ -19,7 +19,7 @@ Get the CLI
Clone the repository:
```
-git clone https://gitlab.kitware.com/alexandreB/diva_evaluation_cli.git
+git clone https://gitlab.kitware.com/actev/diva_evaluation_cli.git
```
Install it
@@ -76,7 +76,7 @@ Fork it
Click on the “Fork” button to make a copy of the repository in your own space, then add this repository as a remote upstream to get the latest updates:
```
-$ git remote add upstream https://gitlab.kitware.com/alexandreB/diva_evaluation_cli.git
+$ git remote add upstream https://gitlab.kitware.com/actev/diva_evaluation_cli.git
```
Update it
@@ -102,5 +102,4 @@ More information about the development and the update of the CLI here: [developm
Contact
=======
-* alexandre.boyer@nist.gov
-* maxime.hubert@nist.gov
+* diva-nist@nist.gov
-__version__ = '1.1.9'
+__version__ = '1.2.0'
@@ -23,6 +23,7 @@ class ActevValidateExecution(ActevCommand):
* file-index or f: path to file index json file for test set
* activity-index or a: path to activity index json file for test set
* result or -R: path to result of the ActEV scorer
+* score or -s: sets flag to score system
"""
def __init__(self):
@@ -39,9 +40,10 @@ class ActevValidateExecution(ActevCommand):
required_named = arg_parser.add_argument_group('required named arguments')
required_named.add_argument("-o", "--output", help="path to experiment output json file", required=True)
required_named.add_argument("-r", "--reference", help="path to reference json file", required=True)
required_named.add_argument("-r", "--reference", help="path to reference json file", required=False)
required_named.add_argument("-a", "--activity-index", help="path to activity index json file", required=True)
required_named.add_argument("-f", "--file-index", help="path to file index json file", required=True)
required_named.add_argument("-R", "--result", help="path to result of the ActEV scorer", required=True)
required_named.add_argument("-R", "--result", help="path to result of the ActEV scorer", required=False)
required_named.add_argument("-s", "--score", help="sets flag to score system", required=False, action='store_true')
arg_parser.set_defaults(func=ActevValidateExecution.command, object=self)
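Since `--reference` and `--result` become optional here while `--score` still needs both (the scoring branch of `score.sh` consumes them), a post-parse guard could enforce the pairing. A minimal self-contained sketch of the idea, not something this commit adds:

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-r", "--reference")
parser.add_argument("-R", "--result")
parser.add_argument("-s", "--score", action="store_true")
args = parser.parse_args()

# --score is only meaningful when both --reference and --result are given.
if args.score and (args.reference is None or args.result is None):
    parser.error("--score requires both --reference and --result")
```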
@@ -26,5 +26,8 @@ class ActevValidateSystem(ActevCommand):
arg_parser(:obj:`ArgParser`): Python arg parser to describe how to parse the command
"""
arg_parser.description = "Checks the structure of the directory after ActEV-system-setup is run"
arg_parser.description = """Checks the structure of the directory after ActEV-system-setup is run.
Also, checks for self-reported system outputs."""
arg_parser.add_argument("--strict", help="Exits with an error in case of an invalid system.", action="store_true", default=False)
arg_parser.set_defaults(func=ActevValidateSystem.command, object=self)
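As an aside, `action="store_true"` already defaults the destination to False, so the explicit `default=False` above is redundant (though harmless). A quick standalone check:

```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--strict", action="store_true")
print(parser.parse_args([]).strict)            # False
print(parser.parse_args(["--strict"]).strict)  # True
```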
@@ -4,7 +4,7 @@ This file should not be modified.
"""
import os
-def entry_point(output, reference, activity_index, file_index, result):
+def entry_point(output, reference, activity_index, file_index, result, score):
"""Private entry point.
Test the execution of the system on each validation data set provided in container_output directory
@@ -20,14 +20,18 @@ def entry_point(output, reference, activity_index, file_index, result):
# go into the right directory to execute the script
path = os.path.dirname(__file__)
execution_validation_dir = os.path.join(path, '../implementation/validate_execution')
+if score:
+    s = "true"
+else:
+    s = "false"
installation_script = os.path.join(execution_validation_dir, 'install.sh')
script = os.path.join(execution_validation_dir, 'score.sh')
script += " " + output + \
" " + reference + \
" " + activity_index + \
" " + file_index + \
" " + result
" " + result + \
" " + s
# execute the script
# status is the exit status code returned by the program
......
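Building the command by string concatenation, as above, breaks on paths containing spaces; a list-based `subprocess` call sidesteps quoting entirely. A sketch of that alternative only, with a hypothetical helper name, not the committed code:

```
import subprocess

def run_score_script(script_path, output, reference, activity_index,
                     file_index, result, score):
    # Each list element becomes one argv entry, so spaces in paths are safe.
    flag = "true" if score else "false"
    subprocess.run(
        [script_path, output, reference, activity_index, file_index, result, flag],
        check=True,  # raise CalledProcessError on a non-zero exit status
    )
```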
@@ -5,11 +5,16 @@ This file should not be modified.
import os
from diva_evaluation_cli.bin.private_src.implementation.validate_system.validate_system import validate_system
-def entry_point():
+def entry_point(strict):
"""Private entry point.
Test the execution of the system on each validation data set provided in container_output directory
+Args:
+    strict (bool): whether to exit with an error in case of an invalid system
+
+Checks the structure of the directory after ActEV-system-setup is run. Checks for expected API contents, etc.
"""
-validate_system()
+validate_system(strict)
@@ -44,6 +44,6 @@ if [ $name != "" ] && [ $name != $package ]; then
fi
cd ..
-mv $archive_dir/* .
-rm -r $archive_dir
+mv -f $archive_dir/* .
+rm -rf $archive_dir
@@ -22,7 +22,11 @@ def psutil_snapshot():
),
'total_disk_io_write': psutil_parse_readable_bytes(
psutil.disk_io_counters(perdisk=False, nowrap=True).write_bytes
-)
+),
+'free_disk_space': {
+    disk[1]: psutil_parse_readable_bytes(psutil.disk_usage(disk[1])[2])
+    for disk in psutil.disk_partitions()
+}
}
return snapshot_dict
......
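For readers decoding the indices in the comprehension above: `disk[1]` is the partition's mountpoint and `disk_usage(...)[2]` is the free-byte count. An equivalent standalone sketch using psutil's named fields (the function name is hypothetical):

```
import psutil

def free_disk_space_by_mount():
    # partition.mountpoint == disk[1]; usage.free == disk_usage(...)[2]
    return {
        part.mountpoint: psutil.disk_usage(part.mountpoint).free
        for part in psutil.disk_partitions()
    }
```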
@@ -14,15 +14,24 @@ reference=$2
activity=$3
file=$4
result=$5
+score=$6
# Execute ActEV Scorer
cd ActEV_Scorer
-python2 ActEV_Scorer.py \
-    ActEV18_AD \
-    -s $output \
-    -r $reference \
-    -a $activity \
-    -f $file \
-    -o $result \
-    -v
+if [ $score == "true" ] ; then
+    python2 ActEV_Scorer.py \
+        ActEV18_AD \
+        -s $output \
+        -r $reference \
+        -a $activity \
+        -f $file \
+        -o $result \
+        -v
+else
+    python2 ActEV_Scorer.py \
+        ActEV18_AD \
+        -s $output \
+        -a $activity \
+        -f $file \
+        -v \
+        -V
+fi
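The two branches differ only in a handful of arguments, so the duplication could be folded into a single invocation by building the argument list conditionally. A sketch of that alternative, assuming the same variables as above (not the committed script):

```
# Common scorer arguments; the score flag decides the extras.
args=(ActEV18_AD -s "$output" -a "$activity" -f "$file" -v)
if [ "$score" == "true" ]; then
    args+=(-r "$reference" -o "$result")
else
    args+=(-V)
fi
python2 ActEV_Scorer.py "${args[@]}"
```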