Commit 1231567c authored by Alexandre Boyer's avatar Alexandre Boyer

Merge branch 'development' into 'master'

Add actev validate-execution

See merge request alexandreB/diva_evaluation_cli!15
parents 4ffb6a4d 635e0e0c
Pipeline #124076 passed with stage
in 22 seconds
# Basic gitignore config
## Basic gitignore config
*.swp
*.pyc
*.egg-info
## CLI gitignore config
# Commands history
diva_evaluation_cli/bin/private_src/implementation/status/command_history.json
# Virtual environments
python_env/
[submodule "diva_evaluation_cli/bin/private_src/implementation/validate_execution/ActEV_Scorer"]
path = diva_evaluation_cli/bin/private_src/implementation/validate_execution/ActEV_Scorer
url = https://github.com/usnistgov/ActEV_Scorer.git
1.1 - 11.16.18
==============
* Add a new command: actev validate-execution
* Complete documentation
* Modify installation script: add requirements installation
1.0.3 - 11.09.18
================
......
......@@ -37,45 +37,6 @@ $ cd diva_evaluation_cli
$ diva_evaluation_cli/bin/install.sh
```
Manual installation
-------------------
### Create an editable package
In order to test your code with the CLI, you need to create a python package of the project.
An editable package is the best option, since it avoids repackaging the project after each code modification.
* Go into the clone of this repository:
```
$ cd diva_evaluation_cli
```
* Run the following command to install the CLI:
```
$ python3 -m pip install -e . --user
```
**Note: if you are using a python virtual environment, remove the `--user` option from the command.**
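To confirm the editable install took effect, you can check where Python resolves the package from. This is an optional sketch, not part of the CLI; the helper name `package_origin` is made up for illustration:

```python
import importlib.util

def package_origin(name):
    """Return the file a top-level package resolves from, or None if not installed."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# For an editable ("-e") install this path should point inside the clone,
# not inside site-packages.
print(package_origin("diva_evaluation_cli"))
```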
### Configure the PATH variable
* Check that the $PATH environment variable contains `~/.local/bin`:
```
$ echo $PATH
```
* If it is not the case, add it to the path:
```
$ PATH="${PATH}:${HOME}/.local/bin"
$ export PATH
```
**Note: add these lines to your bashrc and source it to always have the right PATH.**
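The same check the shell commands perform can be sketched in Python; `local_bin_on_path` is a hypothetical helper, not something the CLI ships:

```python
import os

def local_bin_on_path(path_var):
    """Check whether ~/.local/bin appears in a PATH-style string."""
    target = os.path.expanduser("~/.local/bin")
    # PATH entries are separated by os.pathsep (":" on POSIX).
    return target in path_var.split(os.pathsep)

print(local_bin_on_path(os.environ.get("PATH", "")))
```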
Test the installation
---------------------
......
__version__ = '1.0.3'
__version__ = '1.1'
......@@ -34,13 +34,14 @@ from diva_evaluation_cli.bin.commands.actev_experiment_cleanup import ActevExper
from diva_evaluation_cli.bin.commands.actev_merge_chunks import ActevMergeChunks
from diva_evaluation_cli.bin.commands.actev_exec import ActevExec
from diva_evaluation_cli.bin.commands.actev_status import ActevStatus
from diva_evaluation_cli.bin.commands.actev_validate_execution import ActevValidateExecution
private_subcommands = [
ActevGetSystem(),
ActevValidateSystem(),
ActevExec(),
ActevStatus()
ActevStatus(),
ActevValidateExecution()
]
public_subcommands = [
......
"""
USAGE
ActEV validate-execution
Description
-----------
Test the execution of the system on each validation data set provided in the container_output directory.
Compare the newly generated output to the expected output and the reference.
Args
----
output or o: path to experiment output json file
reference or r: path to reference json file
file-index or f: path to file index json file for test set
activity-index or a: path to activity index json file for test set
result or R: path to result of the ActEV scorer
Warning: this file should not be modified: see src/entry_points to add your source code.
"""
import logging
from diva_evaluation_cli.bin.commands.actev_command import ActevCommand
from diva_evaluation_cli.bin.private_src.entry_points.actev_validate_execution import entry_point
class ActevValidateExecution(ActevCommand):
def __init__(self):
super(ActevValidateExecution, self).__init__('validate-execution', entry_point)
def cli_parser(self, arg_parser):
""" Configure the description and the arguments (positional and optional) to parse.
@param arg_parser: python arg parser to describe how to parse the command
"""
arg_parser.description = "Test the execution of the system on each validation data set provided"
required_named = arg_parser.add_argument_group('required named arguments')
required_named.add_argument("-o", "--output", help="path to experiment output json file", required=True)
required_named.add_argument("-r", "--reference", help="path to reference json file", required=True)
required_named.add_argument("-a", "--activity-index", help="path to activity index json file", required=True)
required_named.add_argument("-f", "--file-index", help="path to file index json file", required=True)
required_named.add_argument("-R", "--result", help="path to result of the ActEV scorer", required=True)
arg_parser.set_defaults(func=ActevValidateExecution.command, object=self)
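A standalone sketch of how these argparse options behave when parsed; the real CLI builds its parser through ActevCommand, so the bare `ArgumentParser` below is an assumption made for illustration:

```python
import argparse

parser = argparse.ArgumentParser(prog="actev validate-execution")
required = parser.add_argument_group("required named arguments")
for short, long_, help_ in [
    ("-o", "--output", "path to experiment output json file"),
    ("-r", "--reference", "path to reference json file"),
    ("-a", "--activity-index", "path to activity index json file"),
    ("-f", "--file-index", "path to file index json file"),
    ("-R", "--result", "path to result of the ActEV scorer"),
]:
    required.add_argument(short, long_, help=help_, required=True)

args = parser.parse_args(
    "-o out.json -r ref.json -a act.json -f file.json -R res.json".split()
)
# argparse maps "--activity-index" to the attribute "activity_index"
print(args.activity_index)
```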
......@@ -31,16 +31,12 @@ else
options=''
fi
# Check if pip is installed
python3 -m pip > /dev/null
EXIT_STATUS=$?
cd "$(dirname "$0")"
if [ $EXIT_STATUS -ne 0 ];then
echo "Please install pip before running the script:"
echo "sudo apt-get install python3-pip"
exit 1
fi
sudo apt-get install python3-pip -y
sudo apt-get install python3-dev -y
python3 -m pip install setuptools $options
python3 -m pip install -e . -U $options
python3 -m pip install -r ../../requirements.txt $options
python3 -m pip install -e ../../. -U $options
"""
ENTRY POINT
This file should not be modified.
"""
import os
def entry_point(output, reference, activity_index, file_index, result):
""" Private entry points.
"""
# go into the right directory to execute the script
path = os.path.dirname(__file__)
execution_validation_dir = os.path.join(path, '../implementation/validate_execution')
installation_script = os.path.join(execution_validation_dir, 'install.sh')
script = os.path.join(execution_validation_dir, 'score.sh')
script += " " + output + \
" " + reference + \
" " + activity_index + \
" " + file_index + \
" " + result
# execute the script
# status is the exit status code returned by the program
status = os.system('cd ' + execution_validation_dir + \
';. ' + installation_script + \
';' + script)
if status != 0:
raise Exception("Error occurred in install.sh or score.sh")
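A hypothetical alternative to the `os.system()` chain above (the helper name is made up, not part of the CLI): `subprocess.run` with `check=True` raises `CalledProcessError` on a non-zero exit status, and passing arguments as a list avoids splicing paths into a shell string:

```python
import subprocess

def run_in_dir(command, workdir):
    """Run a command list inside workdir; raise CalledProcessError if it exits non-zero."""
    return subprocess.run(command, cwd=workdir, check=True)
```

Each script would then be invoked as, e.g., `run_in_dir(["bash", "score.sh", output, reference, activity_index, file_index, result], execution_validation_dir)`.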
Subproject commit 6fc44aa6ea9513edc5278fe27ee4bf9bbcbfa931
#!/bin/bash
env_dir="python_env"
path=`pwd`
path_to_env_dir="$path/$env_dir"
if [ -d $path_to_env_dir ];then
. ./$env_dir/bin/activate
else
virtualenv -p /usr/bin/python2 $env_dir
. ./$env_dir/bin/activate
python -m pip --no-cache-dir install -r requirements.txt
fi
munkres==1.0.12
scipy==1.0.0
matplotlib==2.0.2
jsonschema==2.5.1
#!/bin/bash
cd "$(dirname "$0")"
# Get the ActEV scorer submodule
git submodule update --init --recursive
if [ $? -ne 0 ];then
exit 1
fi
# Configure the scorer with the right arguments
output=$1
reference=$2
activity=$3
file=$4
result=$5
# Execute ActEV Scorer
cd ActEV_Scorer
python2 ActEV_Scorer.py \
ActEV18_AD \
-s $output \
-r $reference \
-a $activity \
-f $file \
-o $result \
-v
{
"Chunk1": {
"activities": [
"Loading",
"Closing_Trunk",
"Exiting",
"Entering",
"Closing"
],
"files": [
"VIRAT_S_000000.mp4"
]
},
"Chunk2": {
"activities": [
"Loading",
"Closing_Trunk",
"Exiting",
"Entering",
"Closing"
],
"files": [
"VIRAT_S_000001.mp4"
]
}
}
\ No newline at end of file
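A chunk index like the one above can be consumed with the standard `json` module. This is a minimal sketch; the inline dict is a trimmed copy for illustration, not read from the repository:

```python
import json

chunk_index = json.loads("""
{
  "Chunk1": {"activities": ["Loading", "Closing"], "files": ["VIRAT_S_000000.mp4"]},
  "Chunk2": {"activities": ["Loading", "Closing"], "files": ["VIRAT_S_000001.mp4"]}
}
""")

# Iterate chunks in a stable order and report activity count and first file.
for name, chunk in sorted(chunk_index.items()):
    print(name, len(chunk["activities"]), chunk["files"][0])
```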
......@@ -225,7 +225,7 @@ actev experiment-cleanup
```
Status
-------------
------
```
actev status -h
......@@ -258,3 +258,18 @@ Example:
actev chunk-query -i Chunk1
```
Validate and score an output
----------------------------
Generic command:
```
actev validate-execution -o <path to output result> -r <path to reference file> -a <path to activity> -f <path to file> -R <path to scoring result>
```
Example:
```
actev validate-execution -o ~/output.json -r diva_evaluation_cli/container_output/dataset/output.json -a ~/activity.json -f ~/file.json -R ~/result.json
```
Updated figures:
* doc/figures/chunks.png (206 KB to 241 KB)
* doc/figures/execution_cli.png (116 KB to 190 KB)
* doc/figures/process_cli.png (192 KB to 230 KB)
* doc/figures/process_evaluation.png (212 KB to 236 KB)