% not sure why it's not picking up the icon command from LatexMacros.tex...

This section describes ParaView Catalyst
from the perspective of the simulation user.
As described in the previous section,
Catalyst changes the workflow with the goal of
efficiently extracting useful insights during the
numerical simulation process.

\caption{Traditional workflow (blue) and ParaView Catalyst enhanced workflow (green).}

With the ParaView Catalyst enhanced workflow, the user specifies
visualization and analysis output during the pre-processing step.
These output data are then generated during the simulation run and
later analyzed by the user. The Catalyst output can be
produced in a variety of formats such as rendered images with
25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41
pseudo-coloring of variables; plots (e.g. bar graphs, line plots, etc.);
data extracts (e.g. iso-surfaces, slices, streamlines, etc.);
and computed quantities (e.g. lift on a wing, maximum stress, flow
rate, etc.). The goal of the enhanced workflow is to reduce the time to
gain insight into a given physical problem by performing some of the traditional post-processing
work \textit{in situ}. While the enhanced workflow uses ParaView Catalyst
to produce \textit{in situ} outputs,
the user does not need to be familiar with ParaView to use this functionality.
Configuration of the pre-processing
step can be based on generic information needed to produce the desired
outputs (e.g. an iso-surface value and the variable to contour on), and the output can be written
either as image files or in other formats with which the user has experience.

There are two major ways in which the user can utilize Catalyst for
\textit{in situ} analysis and visualization. The first is
to specify a set of parameters that are passed into a pre-configured
Catalyst pipeline. The second is to create a Catalyst pipeline script
using ParaView's GUI.

\section{Pre-Configured ParaView Catalyst Pipelines}
Creating pre-configured Catalyst pipelines places
more responsibility on the simulation developer but simplifies
matters for the user, lowering the barrier to using Catalyst with a simulation code.
The concept is that for most filters only a
limited set of parameters needs to be set. For example, for a slice filter the user only needs
to specify a point and a normal defining the slice plane; for the threshold
filter, only the variable and range need to be specified. For each pipeline, though, the
parameters should also include an output file name and an output frequency. These
parameters can be presented for the user to set in their normal workflow for creating their
simulation inputs.
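To make this concrete, the parameter set a simulation developer might expose for a pre-configured slice pipeline can be sketched in plain Python. This is purely illustrative; the class and field names below are hypothetical and not part of the Catalyst API:

```python
# Hypothetical sketch of the parameters a simulation developer might expose
# for a pre-configured Catalyst slice pipeline. Illustrative only -- these
# names are not part of the Catalyst API.
from dataclasses import dataclass


@dataclass
class SlicePipelineParams:
    origin: tuple          # a point on the slice plane
    normal: tuple          # the slice plane normal
    filename: str          # output file name; %t is replaced by the time step
    output_frequency: int  # write output every N time steps

    def validate(self):
        # Enforce the constraints described in the text above.
        if "%t" not in self.filename:
            raise ValueError("file name must contain %t")
        if self.output_frequency < 1:
            raise ValueError("output frequency must be positive")
        return True


params = SlicePipelineParams(origin=(0.0, 0.0, 0.0),
                             normal=(0.0, 0.0, 1.0),
                             filename="slice_%t.pvtp",
                             output_frequency=10)
params.validate()
```

A parameter block like this can be filled in from the simulation's normal input deck, so the user never has to touch ParaView itself.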

\section{Creating ParaView Catalyst Scripts in ParaView}
The downside to using pre-configured scripts is that they are only as useful as the simulation
developer makes them. These scripts can cover a large number of use cases of interest to the
user, but inevitably the user will want more functionality or finer control. This is where it is
useful for the simulation user to create their own Catalyst Python pipeline scripts using the
ParaView GUI.

There are two main prerequisites for creating Catalyst Python scripts in the ParaView GUI. The
first is that ParaView is built with the Catalyst Script Generator plugin (called
the CoProcessing Script Generator plugin in versions of ParaView before 4.2) enabled. Note that this plugin is
enabled by default when building ParaView from source, as well as in the versions of ParaView
installed from the available installers. Additionally, the version of ParaView used to generate the
script should correspond to the version of ParaView Catalyst that the simulation code runs with. The
second prerequisite is that the user has a representative dataset to start from. By this we mean
a dataset, read from disk into ParaView, that is of the same dataset type
(e.g. vtkUnstructuredGrid, vtkImageData, etc.) and has the same attributes defined over the
grids as the simulation adaptor code will provide to Catalyst during simulation runs. Ideally, the
geometry and the attribute ranges will also be similar to those provided by the simulation run's
configuration. The steps to create a Catalyst Python pipeline in the ParaView GUI are:
\begin{enumerate}
\item First, load the ParaView plugin for creating the scripts. Do this by selecting ``Manage Plugins\ldots''
under the Tools menu (\menu{Tools > Manage Plugins\ldots}). In the window that pops up, select
CatalystScriptGeneratorPlugin and press the ``Load Selected'' button. After this, press the Close
button to close the window. This will create two new top-level menu items, \menu{Writers} and
\menu{CoProcessing}. Note that you can have the plugin loaded automatically when ParaView
starts by expanding the CatalystScriptGeneratorPlugin entry (click on the + sign in
the box to its left) and checking the box to the right of Auto Load.
\item Next, load in a representative dataset and create a pipeline. In this case though, instead
of actually writing the desired output to a file, we need to specify when and where the
files will be created when running the simulation. For data extracts, we specify that
information at this point by choosing an appropriate writer under the \menu{Writers} menu. The
user should specify a descriptive file name as well as a write frequency in the Properties
panel, as shown in Figure~\ref{fig:pipeline}. The file name must contain a \%t in it, as this gets
replaced by the time step when creating the file. Note that screenshot
outputs for Catalyst are specified in a later step.
\caption{Example of a pipeline with one writer included. It writes output from
the Slice filter. Since the writer is selected, its file name and write frequency properties are also shown.\label{fig:pipeline}}
\item Once the full Catalyst pipeline has been created, the Python script must be exported
from ParaView. This is done by choosing the Export State wizard under the
CoProcessing menu (\menu{CoProcessing > Export State}). Click the Next button in the initial window that
pops up.
\item After that, select the sources (i.e. pipeline objects without any input
connections) that the adaptor will create, and add them to the output. Note that typically this
does not include sources from the \menu{Sources} menu, since the generated Python
script will instantiate those objects as needed (e.g. as seeds for a streamline).
In the case shown in Figure~\ref{fig:pipeline}, the source is the
filename\_10.pvti reader, which is analogous to the input that the
simulation code's adaptor will provide. Either double-click on the desired
sources in the left box to add them to the right box, or select the desired sources in the
left box and click Add. This is shown in Figure~\ref{fig:sourceselector}. After all of the proper sources
have been selected, click Next.
\caption{Selecting filename\_10.pvti as an input for the Catalyst pipeline.\label{fig:sourceselector}}
\item The next step is labeling the inputs. The most common case is a single input, in which
case we use the convention that it should be named ``input'', the default value. For
situations where the adaptor can provide multiple sources (e.g. fluid-structure interaction
codes where a separate input exists for the fluid domain and the solid domain), the user
will need to specify which input corresponds to which label. This is shown in Figure~\ref{fig:sourcelabelling}.
After this is done, click Next.
\caption{Providing identifier strings for Catalyst inputs.\label{fig:sourcelabelling}}
\item The next page in the wizard gives the user the option to allow Catalyst to check for a
Live Visualization connection and to output screenshots from different views. Check the
box next to Live Visualization to enable it. For screenshots, there are a variety of
options. The first is a global option that rescales the lookup table for pseudo-coloring
to the current data range for all views. The other options are per view:
\begin{itemize}
\item Image type -- the image format in which to output the screenshot.
\item File Name -- the name of the file to create. It must contain a \%t in it so that the
actual simulation time step value will replace it.
\item Write Frequency -- how often the screenshot should be created.
\item Magnification -- create an image with a higher resolution than the
resolution shown in the current ParaView GUI view.
\item Fit to Screen -- specify whether to fit the data in the screenshot. This gives the
same result in Catalyst as clicking the \icon{Images/fittoscreen.png}
button in the ParaView GUI.
\end{itemize}
If there are multiple views, toggle through each one with the Next View
and Previous View buttons in the window. After everything has been set, click the
Finish button to create the Python script.
\caption{Setting the parameters for outputting screenshots.}
\item The final step is specifying the name of the generated Python script. Specify a directory
and a file name for the script, and click OK when finished.
\end{enumerate}

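The bookkeeping behind a writer's file name and write frequency settings can be sketched in plain Python. The helper names below are hypothetical, not part of any generated script:

```python
# Hypothetical sketch of how a generated Catalyst script uses a writer's
# file name pattern and write frequency. Helper names are illustrative only.
def should_write(timestep, write_frequency):
    # Output is produced only every `write_frequency` time steps.
    return timestep % write_frequency == 0


def output_filename(pattern, timestep):
    # The %t placeholder is replaced by the current time step value.
    return pattern.replace("%t", str(timestep))


# For a 50-step run with a write frequency of 10, files are produced
# at steps 0, 10, 20, 30 and 40.
written = [output_filename("slice_%t.pvtp", t)
           for t in range(50) if should_write(t, 10)]
```

This is why the file name must contain \%t: without it, each write would overwrite the previous time step's file.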
\subsection{Creating a Representative Dataset}
A question that often arises is how to create a representative dataset. There are two ways to do
this. The first way is to run the
simulation with Catalyst with a script that outputs the full grid with all attribute information.
Appendix~\ref{appendix:gridwriterscript} has a script that can be used for this purpose.
The second way is by using the sources and filters in ParaView.
The easiest grids to create within the GUI are image data grids (i.e. uniform rectilinear grids),
polydata and unstructured grids. For those knowledgeable enough about VTK, the
programmable source can also be used to create all grid types. If a multi-block grid is needed,
the group datasets filter can be used to group together multiple datasets into a single output.
The next step is to create the attribute information (i.e. point and/or cell data). This can be easily
done with the calculator filter as it can create data with one or three components, name the
array to match the name of the array provided by the adaptor, and set an appropriate range of
values for the data. Once this is done, the user should save this out and then read the file back
in to have the reader act as the source for the pipeline.

\subsection{Manipulating Python Scripts}
For users who are comfortable programming in Python, we encourage modifying the generated
scripts as desired. The following resources can be helpful for doing this:
\begin{itemize}
\item The Sphinx-generated ParaView Python API documentation.
\item The ParaView GUI trace functionality, which shows how to create desired filters and
set their parameters. This is done with \menu{Start Trace} and \menu{Stop Trace} under the \menu{Tools}
menu.
\item The ParaView GUI Python shell with tab completion. This is available as
\menu{Python Shell} under the \menu{Tools} menu.
\end{itemize}

\section{ParaView Live}
In addition to setting up pipelines \textit{a priori}, the analyst can use
ParaView's Live capabilities to connect to the running simulation
through the ParaView GUI and modify the existing pipelines.
This is useful for improving
the quality of information coming out of a Catalyst-enabled simulation.
The live connection is established through \menu{Catalyst > Connect\ldots}. This connects the simulation
to the pvserver to which the data will be sent. After the connection is made, the GUI's pipeline
will look like Figure~\ref{fig:catalystlivepipeline}. The live connection follows ParaView's convention
of not performing anything computationally expensive without specific prompting by the user.
Thus, by default none of the Catalyst extracts are sent to the server. This is indicated
by the \icon{Images/pqLinkIn16d.png} icon to the left of the pipeline sources. To have the output
from a source sent to the ParaView server, click on the \icon{Images/pqLinkIn16d.png} icon.
It will then change to \icon{Images/pqLinkIn16.png} to indicate that it is available on the
ParaView server. This is shown for Contour0 in Figure~\ref{fig:catalystlivepipeline}.
To stop the extract from being sent to the server, just delete the object in the ParaView
server's pipeline (e.g. Extract: Contour0 in Figure~\ref{fig:catalystlivepipeline}).

\caption{ParaView GUI pipeline for live connection.\label{fig:catalystlivepipeline}}

Beginning in ParaView 4.2, the live functionality was improved to allow the user
to pause a Catalyst-enabled simulation run. This is useful for examining the simulation
state at a specific point in time. The design is based upon debugging tools: the
simulation can be paused at the next available call to the Catalyst libraries, at a
specific time step, or when the simulation time passes a specific value.
Additionally, a breakpoint that has not
yet been reached can be removed.
These controls are available under the \menu{Catalyst} menu. Note that the Pipeline
Browser shows the simulation run state to signify the status of the simulation. The icons for
this are:
\begin{itemize}
\item \icon{Images/pqInsituServerRunning16.png} indicates the simulation is running with no breakpoint set.
\item \icon{Images/pqInsituBreakpoint16.png} indicates that the simulation has reached a breakpoint.
\item \icon{Images/pqInsituServerPaused16.png} indicates that a breakpoint has been set but not yet reached.
\end{itemize}
A demonstration of this functionality is at \url{}.

\section{Avoiding Data Explosion}
A key point to keep in mind when creating Catalyst pipelines is that the choice and order of
filters can make a dramatic difference in the performance of Catalyst (this is true of
ParaView as well). Often, the source of
performance degradation is dealing with very large amounts of data. On memory-limited
machines like today's supercomputers, poor decisions when creating a pipeline can cause the
executable to crash due to insufficient memory. The worst-case scenario is creating an
unstructured grid from a topologically regular grid, because the filter changes from
a compact grid data structure to a more general grid data structure.
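A back-of-the-envelope calculation shows why this matters. The sketch below assumes double-precision points and 8-byte cell ids with linear hexahedral cells; the exact VTK overheads differ, but the orders of magnitude are the point:

```python
# Rough estimate (assumptions: 8-byte doubles for coordinates, 8-byte ids for
# connectivity, linear hexahedral cells) of why converting a topologically
# regular grid to an unstructured grid explodes memory use.
def image_data_bytes():
    # A uniform rectilinear grid is implicit: it stores only an origin,
    # a spacing, and the grid dimensions, regardless of grid size.
    return 3 * 8 + 3 * 8 + 3 * 4  # origin + spacing + integer dims


def unstructured_bytes(nx, ny, nz):
    # An unstructured grid stores every point and every cell explicitly.
    npoints = nx * ny * nz
    ncells = (nx - 1) * (ny - 1) * (nz - 1)
    points = npoints * 3 * 8        # x, y, z per point, 8 bytes each
    connectivity = ncells * 8 * 8   # 8 point ids per hexahedron
    return points + connectivity


print(image_data_bytes())                 # 60 bytes, independent of grid size
print(unstructured_bytes(100, 100, 100))  # ~86 MB for a modest 100^3 grid
```

Attribute arrays are the same size in both cases; the explosion comes entirely from storing the previously implicit geometry and topology.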

We classify the filters into several categories, ordered from most memory efficient to least
memory efficient, and list some commonly used filters for each category:
\begin{itemize}
\item Total shallow copy or output independent of input -- negligible memory is used in creating a
filter's output. The filters in this category are:
\begin{itemize}
\item Annotate Time
\item Append Attributes
\item Extract Block
\item Extract Datasets
\item Extract Level
\item Glyph
\item Group Datasets
\item Histogram
\item Integrate Variables
\item Normal Glyphs
\item Outline
\item Outline Corners
\item Plot Over Line
\item Probe Location
\end{itemize}
\item Add field data -- the same grid is used but an extra variable is stored. The filters in this category are:
\begin{itemize}
\item Block Scalars
\item Calculator
\item Cell Data to Point Data
\item Compute Derivatives
\item Curvature
\item Elevation
\item Generate Ids
\item Generate Surface Normals
\item Gradient
\item Level Scalars
\item Median
\item Mesh Quality
\item Octree Depth Limit
\item Octree Depth Scalars
\item Point Data to Cell Data
\item Process Id Scalars
\item Random Vectors
\item Resample with Dataset
\item Surface Flow
\item Surface Vectors
\item Transform
\item Warp (scalar)
\item Warp (vector)
\end{itemize}
\item Topology changing, dimension reduction -- the output is a polygonal dataset whose
cells are one or more dimensions lower than the input cells. The filters in this category are:
\begin{itemize}
\item Cell Centers
\item Contour
\item Extract CTH Fragments
\item Extract CTH Parts
\item Extract Surface
\item Feature Edges
\item Mask Points
\item Outline (curvilinear)
\item Slice
\item Stream Tracer
\end{itemize}
\item Topology changing, moderate reduction -- reduces the total number of cells in the dataset
but outputs in either a polygonal or unstructured grid format. The filters in this category are:
\begin{itemize}
\item Clip
\item Decimate
\item Extract Cells by Region
\item Extract Selection
\item Quadric Clustering
\item Threshold
\end{itemize}
\item Topology changing, no reduction -- does not reduce the number of cells in the dataset while
changing the topology of the dataset, and outputs in either a polygonal or unstructured grid
format. The filters in this category are:
\begin{itemize}
\item Append Datasets
\item Append Geometry
\item Clean
\item Clean to Grid
\item Connectivity
\item D3
\item Delaunay 2D/3D
\item Extract Edges
\item Linear Extrusion
\item Loop Subdivision
\item Reflect
\item Rotational Extrusion
\item Shrink
\item Smooth
\item Subdivide
\item Tessellate
\item Tetrahedralize
\item Triangle Strips
\item Triangulate
\end{itemize}
\end{itemize}
When creating a pipeline, the filters should generally be ordered in this same fashion to limit
data explosion. For example, pipelines should be organized to reduce dimensionality early.
Additionally, reduction is preferred over extraction (e.g. the Slice filter is preferred over the Clip
filter). Extracting should only be done when reducing the data by an order of magnitude or more. When
outputting data extracts, subsampling (e.g. the Extract Subset filter or the Decimate filter) can
be used to reduce file size, but caution should be used to make sure that the data reduction
doesn't hide any fine features.
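The ordering guideline above can be expressed as a small sanity check. This is an illustrative sketch, not a ParaView API: the category ranks mirror the memory-efficiency classification given earlier, for a handful of representative filters.

```python
# Illustrative sketch (not a ParaView API): rank a few common filters by the
# memory categories described above, and check that a pipeline applies the
# more memory-efficient, data-reducing filters first.
MEMORY_CATEGORY = {
    "Extract Block": 0,   # total shallow copy
    "Calculator": 1,      # add field data
    "Slice": 2,           # topology changing, dimension reduction
    "Clip": 3,            # topology changing, moderate reduction
    "Tetrahedralize": 4,  # topology changing, no reduction
}


def well_ordered(pipeline):
    # A pipeline limits data explosion when its filters appear in
    # non-decreasing memory-category order.
    ranks = [MEMORY_CATEGORY[name] for name in pipeline]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))


print(well_ordered(["Slice", "Clip"]))  # dimensionality reduced early
print(well_ordered(["Clip", "Slice"]))  # expensive filter runs on full data
```

In practice the choice is rarely this mechanical, but checking a planned pipeline against the categories before a large run is cheap insurance against out-of-memory crashes.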