Commit 09939080 authored by Scott Wittenburg, committed by Kitware Robot

Merge topic 'update-container-documentation'


bbb52b56 Singularity: Add documentation on how to build and run containers
568ceb78 Docker: Replace example shell scripts with actual documentation

Acked-by: Kitware Robot <kwrobot@kitware.com>
Acked-by: Ben Boeckel <ben.boeckel@kitware.com>
Merge-request: !602
parents 1aae49e6 bbb52b56
# Introduction
The goal of this document is to describe the `Dockerfile` in this directory and how to use it to build, deploy, and run a variety of ParaView Docker images.
The basic idea of the `Dockerfile` is that it will first install some packages needed for running the ParaView Superbuild, then clone the superbuild and use the `CMake` initial cache located in `cmake/sites/Docker-Ubuntu-18_04.cmake` to provide the build options.
## Building images
This section describes building the ParaView `Docker` images using the `Dockerfile` in this directory.
### Description of build arguments
The `Dockerfile` accepts several build arguments (provided in the form `--build-arg OPTION=VALUE`) allowing the user to specify the following build options:
#### `BASE_IMAGE`
This could be an `nvidia` Docker image, or just a basic `ubuntu`. The `nvidia` images are useful for creating `EGL` builds of ParaView, while other `ubuntu` images are good for OSMesa builds.
#### `RENDERING`
The options here are `egl` or `osmesa`; make sure to pick a compatible base image for the option you choose.
#### `SUPERBUILD_REPO`
Defaults to the main GitLab repo, but could point to a fork for testing branches. The reason we need to clone the superbuild (instead of simply checking out the branch we want locally and building from that) is that `Docker` does not provide any kind of directory binding/mounting during the build process, likely for reasons related to build reproducibility.
#### `SUPERBUILD_TAG`
This could be any branch name, tag, or commit which exists on the `SUPERBUILD_REPO`; it defaults to the latest release tag.
#### `PARAVIEW_TAG`
The options for this are the same as for `SUPERBUILD_TAG`, above.
#### `DEV_BUILD`
To preserve both the superbuild build tree and the version of `CMake` used during the build, set this option to `"true"`. Any other value (including the default of `"false"`) results in the build tree and `CMake` installation being cleaned out to reduce the final size of the built image. This option can be helpful if you want to use the resulting `Docker` image to develop plugins against a particular version of ParaView.
### Build command-line examples
The simplest build just accepts all the defaults:
```
docker build --rm -t pv-vVERSION-egl .
```
Here is an example of specifying non-default arguments for some of the build options. This command builds an image using OSMesa for rendering, chooses the `master` branch of ParaView, and picks a branch of the superbuild from a developer fork:
```
docker build --rm \
--build-arg BASE_IMAGE="ubuntu:18.04" \
--build-arg RENDERING="osmesa" \
--build-arg SUPERBUILD_REPO="https://gitlab.kitware.com/<some.user>/paraview-superbuild.git" \
--build-arg SUPERBUILD_TAG="custom-branch-in-development" \
--build-arg PARAVIEW_TAG=master \
-t pv-master-osmesa-custom \
.
```
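If you want an image suitable for plugin development, you can additionally pass the `DEV_BUILD` argument described above. A minimal sketch (the image tag chosen here is just illustrative):
```
docker build --rm \
--build-arg DEV_BUILD="true" \
-t pv-v5.6.1-egl-dev \
.
```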
## Deploying images
Deploying images you have built is a matter of tagging them and pushing them to Dockerhub (or some other image registry). In order to push to the registry, you need to be logged in with your `Docker` ID on your local machine. This can be accomplished by typing:
```
docker login
```
Then provide your `Docker` ID and password at the prompts.
To tag your image, the command looks like:
```
docker tag <local-image-tag> <desired-tag-name-for-registry>
```
For example, to tag a local image `pv-v5.6.1-egl` as `kitware/paraviewweb:pv-v5.6.1-egl`, the command is as follows:
```
docker tag pv-v5.6.1-egl kitware/paraviewweb:pv-v5.6.1-egl
```
Once the image is tagged, you can push it to the registry using a command like the following (to push to Dockerhub):
```
docker push kitware/paraviewweb:pv-v5.6.1-egl
```
## Running images
To run images you have built as containers, use the `docker run` command. To run a shell on the OSMesa container, you only need the image tag. For example:
```
docker run --rm -ti pv-v5.6.1-osmesa
```
In order to run images that require the `nvidia-docker2` runtime (e.g. any `EGL` images you have built), you need to provide an extra argument to `docker run`, for example:
```
docker run --rm --runtime=nvidia -ti pv-v5.6.1-egl
```
Of course, for that to work you need not only the `nvidia-docker2` container runtime packages installed on your system, but also an NVIDIA graphics card with up-to-date drivers installed.
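If you are unsure whether the NVIDIA pieces are in place, a quick sanity check on the host looks something like this:
```
# Verify the NVIDIA driver is installed and working on the host
nvidia-smi
# Verify Docker knows about the nvidia runtime (look for "nvidia" in the Runtimes line)
docker info | grep -i runtimes
```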
```
#!/bin/bash

# Simple build, choosing defaults for all options:
docker build --rm -t pv-v5.6.0-egl .

# Build version 5.6.0 w/ EGL:
# docker build --rm \
#   --build-arg BASE_IMAGE="nvidia/opengl:1.0-glvnd-devel-ubuntu18.04" \
#   --build-arg RENDERING="egl" \
#   --build-arg SUPERBUILD_REPO="https://gitlab.kitware.com/scott.wittenburg/paraview-superbuild.git" \
#   --build-arg SUPERBUILD_TAG="add-dockerfile-and-build-script" \
#   --build-arg PARAVIEW_TAG=v5.6.0 \
#   -t pv-v5.6.0-egl \
#   .

# Build version 5.6.0 w/ OSMesa:
# docker build --rm \
#   --build-arg BASE_IMAGE="ubuntu:18.04" \
#   --build-arg RENDERING="osmesa" \
#   --build-arg SUPERBUILD_REPO="https://gitlab.kitware.com/scott.wittenburg/paraview-superbuild.git" \
#   --build-arg SUPERBUILD_TAG="add-dockerfile-and-build-script" \
#   --build-arg PARAVIEW_TAG=v5.6.0 \
#   -t pv-v5.6.0-osmesa \
#   .
```
```
#!/bin/bash

# docker login (+ enter username and password)
docker tag pv-v5.6.0-egl kitware/paraviewweb:pv-v5.6.0-egl
docker push kitware/paraviewweb:pv-v5.6.0-egl
```
```
#!/bin/bash

# If you want to run the OSMesa version, no runtime arg is needed:
docker run --rm -ti pv-v5.6.0-osmesa

# Or if you have nvidia-docker2 installed and want to run the egl version:
# docker run --rm --runtime=nvidia -ti pv-v5.6.0-egl
```
# Introduction
The goal of this document is to describe how to build and run ParaView `Singularity` containers using the recipes in this directory. These recipes have been tested with (and likely require) the latest stable version of `Singularity` available at the time of this writing, version `3.2`. The documentation for this version is available [here](https://sylabs.io/guides/3.2/user-guide/).
The build recipes provided in this directory are very similar to those found in the `Scripts/docker/ubuntu/` directory; however, there are some differences that follow from the ways `Singularity` and `Docker` differ. One difference, noticeable immediately, is that `Singularity` does not allow arguments or options to be provided at build time, perhaps for reasons of build reproducibility. As a result, any customization of the build requires extra work, which is described in the first section.
## Building containers
To build a `Singularity` container using the recipes in this directory, you need to have `Singularity` installed. The "Quick Start" section in the documentation linked above provides instructions for doing this. Once it is properly installed, a build command looks like the following:
```
sudo /opt/singularity/bin/singularity build pv-release-egl.sif Singularity.egl
```
In the above command `pv-release-egl.sif` is the name you want to give the resulting container, and `Singularity.egl` is the name of the recipe file in the current directory. To build an OSMesa version, use the `Singularity.osmesa` recipe:
```
sudo /opt/singularity/bin/singularity build pv-release-osmesa.sif Singularity.osmesa
```
### Customizing the build
The process of customizing the build is slightly more cumbersome with `Singularity` than it is with `Docker`. Instead of specifying build arguments on the command line, the only way to change settings is to edit the values in your local working copy of one of the recipes (`Singularity.egl` or `Singularity.osmesa`). The values that may be of interest can be found at the top of the `%post` section of the recipes and include `RENDERING`, `PARAVIEW_TAG`, `SUPERBUILD_TAG`, `SUPERBUILD_REPO`, and `DEV_BUILD`. The meanings of these options are the same as for the `Dockerfile`, and are described [here](/Scripts/docker/ubuntu/README.md).
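For example, the relevant lines near the top of the `%post` section look roughly like the following (the values shown here are only illustrative; check your working copy of the recipe for the exact defaults):
```
%post
    # Edit these values in your local copy of the recipe (illustrative values only)
    RENDERING=osmesa
    PARAVIEW_TAG=v5.6.1
    SUPERBUILD_TAG=v5.6.1
    SUPERBUILD_REPO=https://gitlab.kitware.com/paraview/paraview-superbuild.git
    DEV_BUILD=false
```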
Then just run the build commands as described in the section above.
## Container applications
`Singularity` gives us the ability to describe `apps` within the recipe which behave like documentable shortcuts to functionality in the container. The apps we have built into the containers include `pvpython`, `pvbatch`, and `pvserver`, as well as the web applications which get built with the superbuild (`visualizer`, `lite`, `divvy`, and `flow`).
### Running the applications
To run one of the applications, the command has the form:
```
singularity run [--nv] --app <app-name> <container-name> [app-arguments]
```
The option `--nv` is needed when running a container where ParaView was built with `EGL` support. For example, to run `pvpython` on a script in your current directory, you could type:
```
singularity run --nv --app pvpython pv-release-egl.sif testPythonScript.py
```
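Similarly, to start `pvserver` from the OSMesa container (the port shown here is just pvserver's default and can be changed via `--server-port`):
```
singularity run --app pvserver pv-release-osmesa.sif --server-port=11111
```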
### Getting help on applications
To get help on one of the applications built into a container, type (e.g.):
```
singularity run-help --app visualizer pv-v5.6.1-egl.sif
```
This will print the following help text:
```
Run the ParaViewWeb Visualizer server. The server python script (`pvw-visualizer.py`)
as well as the `--content` arguments are already provided for you, but you may still
want to provide other arguments such as `--data <path-to-data-dir>` (note that you
must bind-mount that path when running singularity for the image to see it), or
`--port <port-number>`.
Example:
$ singularity run --nv \
--bind /<path-to-data-dir>:/data \
--app visualizer \
pv-release-egl.sif --data /data --port 9091
```
### Running a shell
To run a shell in the container, type (e.g.):
```
singularity shell pv-release-osmesa.sif
```
This will put you in a shell running within the specified container, which in this case is expected to be in the current working directory. By default, `Singularity` bind mounts the current working directory, so if you write files there, you'll see them when you leave the container. Here's a small snippet to demonstrate:
```
$ ls -l | grep pv-release-osmesa
-rwxr-xr-x 1 me me 459804672 Jun 10 18:43 pv-release-osmesa.sif
$ singularity shell pv-release-osmesa.sif
Singularity pv-release-osmesa.sif:/home/me/some/directory> /opt/paraview/bin/pvpython
Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from paraview.simple import *
>>> coneSrc = Cone()
>>> coneRepr = Show(coneSrc)
>>> coneView = Render()
>>> WriteImage('osmesacone.png')
>>> <Ctrl-D>
Singularity pv-release-osmesa.sif:/home/me/some/directory> exit
exit
$ ls -l | grep osmesacone
-rw-r--r-- 1 me me 3896 Jun 19 16:01 osmesacone.png
```
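If the data you want to work with lives outside the directories `Singularity` mounts by default, you can bind additional paths into the container. A minimal sketch (the host path here is hypothetical):
```
singularity shell --bind /path/to/data:/data pv-release-osmesa.sif
```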