Commit 13af75e9 authored by Andrew Bauer's avatar Andrew Bauer

Decent first cut at update to 5.1.2

parent 0b518d92
Pipeline #28690 failed with stage
......@@ -21,7 +21,7 @@ developers included Nathan Fabian, Jeffrey Mauldin and Ken Moreland.\end{minipag
\includegraphics[width=1in]{Images/lanl.jpg} & \begin{minipage}[b]{5in}Jim Ahrens is the project lead at Los Alamos National Laboratory.
The LANL team has been integrating Catalyst with various LANL
simulation codes and has contributed to the development of the
library and Cinema (\url{http://cinemascience.org/}).\end{minipage} \\[.3cm]
\includegraphics[width=1in]{Images/armysbir2.jpg} & \begin{minipage}[b]{5in}Mark Potsdam, from Aeroflightdynamics Directorate,
was the main technical point of contact for Army SBIRs and alongside Andrew Wissink has
contributed significantly to the vision of Catalyst.\end{minipage} \\
......
......@@ -12,7 +12,7 @@ running a simulation at scale. In one simple example, the executable size was re
MB when linking with ParaView to less than 20 MB when linking with Catalyst.
The main steps for configuring Catalyst are:
\begin{enumerate}
\item Set up an “edition”
\item Extract the desired code from ParaView source tree into a separate Catalyst source tree
\item Build Catalyst
\end{enumerate}
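These steps can be sketched on the command line roughly as follows (the catalyze.py location, its options, and all paths shown are illustrative assumptions; consult the script's built-in help in your ParaView checkout):
\begin{minted}{bash}
# Step 1: choose an edition, e.g. the Base edition shipped with ParaView.
# Step 2: extract that edition from the ParaView source tree into a
#         separate Catalyst source tree (paths below are placeholders).
python ParaView/Catalyst/catalyze.py \
  -r ParaView \
  -i ParaView/Catalyst/Editions/Base \
  -o CatalystSrc

# Step 3: configure and build the extracted Catalyst source tree.
mkdir -p CatalystBuild && cd CatalystBuild
cmake ../CatalystSrc
make
\end{minted}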
......@@ -138,24 +138,24 @@ Catalyst source tree and so the CMakeLists.txt file in that directory needs to b
order to include that class to be added to the build.
\begin{minted}{json}
"modules":[
  {
    "name":"vtkFiltersCore",
    "path":"VTK/Filters/Core",
    "include":[
      {
        "path":"vtkArrayCalculator.cxx"
      },
      {
        "path":"vtkArrayCalculator.h"
      }
    ],
    "replace":[
      {
        "path":"VTK/Filters/Core/CMakeLists.txt"
      }
    ],
    "cswrap":true
  }
]
\end{minted}
In this case, the CMakeLists.txt file that needs to be copied to the Catalyst source tree exists in
......
\subsection{Reusing Simulation Memory for Non-VTK Compliant Memory Layouts}\label{appendix:alternatememorylayout}
Recent work in VTK has added the ability to reuse the simulation's memory and data structures
in the co-processing pipeline.
We start with how to create a class, derived from vtkDataArray, that uses pre-allocated memory
whose layout does not match VTK's expected layout. The abstract class to derive from for this
purpose is vtkMappedDataArray. We first go through an example using the vtkCPExodusIIResultsArrayTemplate class,
which is part of VTK.
The vtkCPExodusIIResultsArrayTemplate class is a
templated class that is a concrete implementation of vtkMappedDataArray. This class
should only be used if the data array has more than one component.
It can be used
as is if the simulation memory layout has the following constraints:
\begin{itemize}
\item The components of the data are each stored in contiguous arrays.
\item The component array data is stored in the same order as the points or cells in
the VTK dataset for point data or cell data, respectively.
\end{itemize}
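As a plain \Cplusplus sketch (no VTK; the names here are hypothetical), the layout these two constraints describe stores each component in its own contiguous array, ordered the same way as the points, and reconstructs a tuple on demand:
\begin{minted}{c++}
#include <cassert>
#include <vector>

// Hypothetical helper: rebuild tuple i from per-component arrays that
// are each contiguous and ordered the same way as the VTK points.
void GetMappedTuple(double* const components[], int numComponents,
                    long i, double* tuple)
{
  for (int c = 0; c < numComponents; ++c)
  {
    tuple[c] = components[c][i];
  }
}

int main()
{
  const long numPoints = 4;
  std::vector<double> xvel(numPoints), yvel(numPoints), zvel(numPoints);
  for (long i = 0; i < numPoints; ++i)
  {
    xvel[i] = 1.0 * i; // component 0 of point i
    yvel[i] = 2.0 * i; // component 1 of point i
    zvel[i] = 3.0 * i; // component 2 of point i
  }
  double* components[3] = { xvel.data(), yvel.data(), zvel.data() };
  double tuple[3];
  GetMappedTuple(components, 3, 2, tuple); // reconstruct tuple for point 2
  assert(tuple[0] == 2.0 && tuple[1] == 4.0 && tuple[2] == 6.0);
  return 0;
}
\end{minted}
A mapped data array hands VTK exactly this kind of view: the simulation keeps ownership of the component arrays and no values are copied.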
If these two conditions are met then the main function of interest in this class
is:
\begin{itemize}
\item void SetExodusScalarArrays(std::vector\textless Scalar*\textgreater arrays, vtkIdType numTuples, bool save)
\end{itemize}
Here, arrays is used to pass the pointers to the beginning of each component array. The size of arrays
sets the number of components in the vtkCPExodusIIResultsArrayTemplate object. The number of tuples
is set by numTuples. Finally, if save is set to false then the object will delete each
component array when it is done with the memory. Otherwise it
assumes that the memory will be de-allocated elsewhere. The following code snippet demonstrates
its use.
\begin{minted}{c++}
vtkCPExodusIIResultsArrayTemplate<double>* vtkarray =
vtkCPExodusIIResultsArrayTemplate<double>::New();
vtkarray->SetName("velocity");
std::vector<double*> simulationarrays;
simulationarrays.push_back(xvelocity);
simulationarrays.push_back(yvelocity);
simulationarrays.push_back(zvelocity);
vtkarray->SetExodusScalarArrays(simulationarrays, grid->GetNumberOfPoints(), true);
grid->GetPointData()->AddArray(vtkarray);
vtkarray->Delete();
\end{minted}
If the vtkCPExodusIIResultsArrayTemplate class is not appropriate for mapping
simulation memory to VTK memory, a class that derives from vtkMappedDataArray
will need to be written. The virtual methods that need to be reimplemented are (note that
Scalar is the templated data type):
\begin{itemize}
\item void Initialize()
\item void GetTuples(vtkIdList *ptIds, vtkAbstractArray *output)
\item void GetTuples(vtkIdType p1, vtkIdType p2, vtkAbstractArray *output)
\item void Squeeze()
\item vtkArrayIterator *NewIterator()
\item vtkIdType LookupValue(vtkVariant value)
\item void LookupValue(vtkVariant value, vtkIdList *ids)
\item vtkVariant GetVariantValue(vtkIdType idx)
\item void ClearLookup()
\item double* GetTuple(vtkIdType i)
\item void GetTuple(vtkIdType i, double *tuple)
\item vtkIdType LookupTypedValue(Scalar value)
\item void LookupTypedValue(Scalar value, vtkIdList *ids)
\item Scalar GetValue(vtkIdType idx)
\item Scalar\& GetValueReference(vtkIdType idx)
\item void GetTupleValue(vtkIdType idx, Scalar *t)
\end{itemize}
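To make the division of labor concrete, here is a heavily simplified, VTK-free analogue of such a mapped array (all names are hypothetical, not part of VTK): it exposes read access over externally owned component arrays and rejects any attempt to mutate them.
\begin{minted}{c++}
#include <stdexcept>
#include <utility>
#include <vector>

// Hypothetical, simplified analogue of a vtkMappedDataArray subclass:
// it reads tuples out of simulation-owned component arrays, owns no
// memory of its own, and refuses to be resized or written to.
class MappedSoAArray
{
public:
  MappedSoAArray(std::vector<double*> components, long numTuples)
    : Components(std::move(components)), NumTuples(numTuples) {}

  long GetNumberOfTuples() const { return this->NumTuples; }
  int GetNumberOfComponents() const
  { return static_cast<int>(this->Components.size()); }

  // Analogue of GetValue()/GetTuple(): reconstruct values on the fly.
  double GetComponent(long tuple, int comp) const
  { return this->Components[comp][tuple]; }

  void GetTuple(long tuple, double* out) const
  {
    for (int c = 0; c < this->GetNumberOfComponents(); ++c)
    {
      out[c] = this->Components[c][tuple];
    }
  }

  // Analogue of the "error only" mutators: the array is read-only,
  // so any attempt to modify it is reported as a usage error.
  void SetComponent(long, int, double)
  { throw std::logic_error("MappedSoAArray is read-only"); }

private:
  std::vector<double*> Components; // simulation-owned memory
  long NumTuples;
};
\end{minted}
In real VTK code the error-only methods use vtkErrorMacro rather than exceptions, but the intent is the same: reads are served from the mapped memory and writes are forbidden.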
Once the object is properly set up, it should be considered read-only (i.e.
nothing in VTK should modify any of its contents). The following methods
should therefore be implemented to only emit errors, ensuring they are not used:
\begin{itemize}
\item int Allocate(vtkIdType sz, vtkIdType ext)
\item int Resize(vtkIdType numTuples)
\item void SetNumberOfTuples(vtkIdType number)
\item void SetTuple(vtkIdType i, vtkIdType j, vtkAbstractArray *source)
\item void SetTuple(vtkIdType i, const float *source)
\item void SetTuple(vtkIdType i, const double *source)
\item void InsertTuple(vtkIdType i, vtkIdType j, vtkAbstractArray *source)
\item void InsertTuple(vtkIdType i, const float *source)
\item void InsertTuple(vtkIdType i, const double *source)
\item void InsertTuples(vtkIdList *dstIds, vtkIdList *srcIds, vtkAbstractArray *source)
\item vtkIdType InsertNextTuple(vtkIdType j, vtkAbstractArray *source)
\item vtkIdType InsertNextTuple(const float *source)
\item vtkIdType InsertNextTuple(const double *source)
\item void DeepCopy(vtkAbstractArray *aa)
\item void DeepCopy(vtkDataArray *da)
\item void InterpolateTuple(vtkIdType i, vtkIdList *ptIndices, vtkAbstractArray* source, double* weights)
\item void InterpolateTuple(vtkIdType i, vtkIdType id1, vtkAbstractArray *source1, vtkIdType id2, vtkAbstractArray *source2, double t)
\item void SetVariantValue(vtkIdType idx, vtkVariant value)
\item void RemoveTuple(vtkIdType id)
\item void RemoveFirstTuple()
\item void RemoveLastTuple()
\item void SetTupleValue(vtkIdType i, const Scalar *t)
\item void InsertTupleValue(vtkIdType i, const Scalar *t)
\item vtkIdType InsertNextTupleValue(const Scalar *t)
\item void SetValue(vtkIdType idx, Scalar value)
\item vtkIdType InsertNextValue(Scalar v)
\item void InsertValue(vtkIdType idx, Scalar v)
\end{itemize}
By using classes derived from vtkMappedDataArray along with any of the topologically
structured grids, the adaptor will use a negligible amount of additional memory in creating the VTK data
structures that represent the simulation's grids and fields. For vtkPolyData and vtkUnstructuredGrid objects,
however, the memory needed to store the cells can still be substantial. VTK has recently added
the vtkMappedUnstructuredGrid class to reduce this cell-storage overhead. Since a vtkUnstructuredGrid can
store all of the cell types that a vtkPolyData can, an equivalent class has not been created for vtkPolyData.
Using the vtkMappedUnstructuredGrid class to represent the simulation code's grid
inside of VTK is quite complex and beyond the scope of this document. Developers interested
in using it are referred to \url{www.vtk.org/Wiki/VTK/InSituDataStructures}.
\section{Examples}
There is a wide variety of VTK examples at \url{www.vtk.org/Wiki/VTK/Examples}. This site
includes C, \Cplusplus, Fortran and Python examples but is targeted at general VTK development.
Examples specific to ParaView Catalyst can be found directly in the ParaView source code
under the Examples/Catalyst subdirectories. Descriptions of these examples are listed below.
\begin{description}
\item[FortranPoissonSolver] \hfill \\
An example of a parallel, finite difference discretization of the Poisson equation
......@@ -32,16 +32,16 @@ under the Examples/Catalyst subdirectories. Descriptionts of the examples are li
Catalyst. The grid is a vtkImageData.
\item[CxxMultiPieceExample] \hfill \\
A \Cplusplus example of a simulation code interfacing with
Catalyst. The grid is a vtkMultiPieceDataSet with
a single vtkImageData for each process.
\item[CxxNonOverlappingAMRExample] \hfill \\
A \Cplusplus example of a simulation code interfacing with
Catalyst. The grid is a vtkNonOverlappingAMR
dataset.
\item[CxxOverlappingAMRExample] \hfill \\
A \Cplusplus example of a simulation code interfacing with
Catalyst. The grid is a vtkOverlappingAMR
dataset.
\item[CxxPVSMPipelineExample] \hfill \\
An example where we manually create a Catalyst
pipeline in \Cplusplus code using ParaView's server-manager.
......@@ -68,6 +68,5 @@ under the Examples/Catalyst subdirectories. Descriptionts of the examples are li
An example of an adaptor where we use VTK's newer structure-of-arrays (SOA)
classes to map simulation data structures to
VTK data arrays to save on memory use by Catalyst. Note that this example
has been added to the ParaView source code after the ParaView 5.1 release
but works properly with the ParaView 5.1.
has been added to the ParaView source code in the ParaView 5.1.2 release.
\end{description}
......@@ -71,7 +71,7 @@ code to save a full dataset.
\begin{center}
\includegraphics[width=4in]{Images/filtercomputetimes.png}
\caption{Comparison of compute time in seconds for certain analysis operations vs. saving
the full dataset for a 6 process run on a desktop machine.}
\label{fig:filtercomputetimes}
\end{center}
\end{figure}
......@@ -97,9 +97,9 @@ simulation workflow of pre-processing, simulation, and post-processing (shown in
to one that integrates post-processing directly into the simulation process as shown in
Figure~\ref{fig:catalystworkflow}. This integration of simulation with post-processing provides several key advantages. First, it
avoids the need to save out intermediate results for the purpose of post-processing; instead
the post-processing work can be performed \textit{in situ}
as the simulation is running. This saves considerable
time, as illustrated in Figure~\ref{fig:scalingplots}.
\begin{figure}[htb]
\centering
......@@ -126,7 +126,7 @@ Thus writing out extracts significantly reduces the total IO cost.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=4in]{Images/outputsizes.png}
\caption{Comparison of file size in bytes for saving full dataset vs. saving specific analysis outputs.}
\label{fig:outputsizes}
\end{center}
\end{figure}
......
......@@ -58,7 +58,7 @@ simulation inputs.
The downside to using pre-configured scripts is that they are only as useful as the simulation
developer makes them. These scripts can cover a large amount of use cases of interest to the
user but inevitably the user will want more functionality or better control. This is where it is
useful for the simulation user to create their own Catalyst Python script pipelines using the
ParaView GUI.
There are two main prerequisites for creating Catalyst Python scripts in the ParaView GUI. The
......@@ -305,7 +305,7 @@ filter's output. The filters in this category are:
\begin{itemize}
\item Block Scalars
\item Calculator\item Cell Data to Point Data\item Compute Derivatives\item Curvature\item Elevation
\item Generate Ids\item Generate Surface Normals\item Gradient\item Gradient of Unstructured DataSet\item Level Scalars\item Median\item Mesh Quality
\item Octree Depth Limit\item Octree Depth Scalars\item Point Data to Cell Data\item Process Id Scalars
\item Random Vectors\item Resample with Dataset\item Surface Flow\item Surface Vectors\item Transform
\item Warp (scalar)\item Warp (vector)
......