Commit 70701745 authored by Andrew Bauer

Improvements for Catalyst UG.

Using minted for code sections, improved a couple of images and
other general fixes.
parent 3f153170
......@@ -7,11 +7,10 @@ references. The latter two classes keep track of other vtkObjects by managing the
reference count. When these objects are created, they increment the reference count of the
object they are referring to and when they go out of scope, they decrement the reference count
of the object they are referring to. The following example demonstrates this.
\begin{minted}{c++}
{
vtkNew<vtkDoubleArray> a; // a's ref count = 1
a->SetName("an array");
vtkSmartPointer<vtkPointData> pd =
vtkSmartPointer<vtkPointData>::New(); // pd's ref count = 1
pd->AddArray(a.GetPointer()); // a's ref count = 2
......@@ -25,18 +24,17 @@ of the object they are referring to. The following example demonstrates this.
pd3->Delete(); // pd3 is deleted
pd2->GetClassName(); // bug!
} // don't need to call Delete on any object
\end{minted}
Note that when passing a pointer returned from vtkNew as a parameter to a method, the
GetPointer() method must be used. Other than this caveat, vtkSmartPointer and vtkNew objects
can be treated as pointers.
\subsection{ParaView Catalyst Python Script for Outputting the Full Dataset}\label{appendix:gridwriterscript}
The following script will write out the full dataset every time step for the ``input'' grid provided by
the adaptor to Catalyst. Change ``input'' on line 7 to the appropriate identifier
for adaptors that provide multiple grids. Note that this file is available at \url{https://github.com/Kitware/ParaViewCatalystExampleCode/blob/master/SampleScripts/gridwriter.py}.
\begin{minted}{python}
from paraview.simple import *
from paraview import coprocessing
......@@ -45,20 +43,27 @@ def CreateCoProcessor():
class Pipeline:
adaptorinput = coprocessor.CreateProducer( datadescription, "input" )
grid = adaptorinput.GetClientSideObject().GetOutputDataObject(0)
if grid.IsA("vtkImageData") or grid.IsA("vtkUniformGrid"):
writer = coprocessor.CreateWriter( XMLPImageDataWriter, \
"filename_%t.pvti", 1 )
elif grid.IsA("vtkRectilinearGrid"):
writer = coprocessor.CreateWriter( XMLPRectilinearGridWriter, \
"filename_%t.pvtr", 1 )
elif grid.IsA("vtkStructuredGrid"):
writer = coprocessor.CreateWriter( XMLPStructuredGridWriter, \
"filename_%t.pvts", 1 )
elif grid.IsA("vtkPolyData"):
writer = coprocessor.CreateWriter( XMLPPolyDataWriter, \
"filename_%t.pvtp", 1 )
elif grid.IsA("vtkUnstructuredGrid"):
writer = coprocessor.CreateWriter( XMLPUnstructuredGridWriter, \
"filename_%t.pvtu", 1 )
elif grid.IsA("vtkUniformGridAMR"):
writer = coprocessor.CreateWriter( XMLHierarchicalBoxDataWriter, \
"filename_%t.vthb", 1 )
elif grid.IsA("vtkMultiBlockDataSet"):
writer = coprocessor.CreateWriter( XMLMultiBlockDataWriter, \
"filename_%t.vtm", 1 )
else:
print "Don't know how to create a writer for a ", grid.GetClassName()
......@@ -69,7 +74,7 @@ def CreateCoProcessor():
self.Pipeline = _CreatePipeline(self, datadescription)
coprocessor = CoProcessor()
freqs = {"input": [1]}
coprocessor.SetUpdateFrequencies(freqs)
return coprocessor
......@@ -91,8 +96,8 @@ def DoCoProcessing(datadescription):
"Callback to do co-processing for current timestep"
global coprocessor
coprocessor.UpdateProducers(datadescription)
coprocessor.WriteData(datadescription)
\end{minted}
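The essence of the script above is a dispatch from the grid's VTK class to the matching parallel XML writer. As a rough illustration in plain Python (the helper name is hypothetical; the real script calls coprocessor.CreateWriter), the mapping from dataset type to output extension can be sketched as:
\begin{minted}{python}
# Illustrative sketch of the dispatch in gridwriter.py: each VTK dataset
# type maps to the file extension of the matching parallel XML writer.
WRITER_EXTENSIONS = {
    "vtkImageData": "pvti",
    "vtkUniformGrid": "pvti",
    "vtkRectilinearGrid": "pvtr",
    "vtkStructuredGrid": "pvts",
    "vtkPolyData": "pvtp",
    "vtkUnstructuredGrid": "pvtu",
    "vtkUniformGridAMR": "vthb",
    "vtkMultiBlockDataSet": "vtm",
}

def output_filename(grid_class_name, timestep):
    """Mimic the "filename_%t.<ext>" pattern, with %t the time step."""
    ext = WRITER_EXTENSIONS.get(grid_class_name)
    if ext is None:
        raise ValueError("no writer known for " + grid_class_name)
    return "filename_%d.%s" % (timestep, ext)
\end{minted}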
\subsection{Reusing Simulation Memory for Non-VTK Compliant Memory Layouts}\label{appendix:alternatememorylayout}
Recent work in VTK has added the ability to reuse the simulation's memory and data structures
......@@ -122,11 +127,10 @@ is set by numTuples. Finally, if save is set to false then the object will delete the memory
using the delete [] method on each component array when it is done with the memory. Otherwise it
assumes that some other code will de-allocate that memory. The following code snippet demonstrates
its use.
\begin{minted}{c++}
vtkCPExodusIIResultsArrayTemplate<double>* vtkarray =
vtkCPExodusIIResultsArrayTemplate<double>::New();
vtkarray->SetName("velocity");
std::vector<double*> simulationarrays;
simulationarrays.push_back(xvelocity);
simulationarrays.push_back(yvelocity);
......@@ -134,8 +138,7 @@ simulationarrays.push_back(zvelocity);
vtkarray->SetExodusScalarArrays(simulationarrays, grid->GetNumberOfPoints(), true);
grid->GetPointData()->AddArray(vtkarray);
vtkarray->Delete();
\end{minted}
If the vtkCPExodusIIResultsArrayTemplate class is not appropriate for mapping
simulation memory to VTK memory, a class that derives from vtkMappedDataArray
......@@ -156,7 +159,7 @@ Scalar is the templated data type):
\item vtkIdType LookupTypedValue(Scalar value)
\item void LookupTypedValue(Scalar value, vtkIdList *ids)
\item Scalar GetValue(vtkIdType idx)
\item Scalar\& GetValueReference(vtkIdType idx)
\item void GetTupleValue(vtkIdType idx, Scalar *t)
\end{itemize}
......
......@@ -22,17 +22,17 @@ There can be many editions of Catalyst and these editions can be combined to cre
customized Catalyst builds. Assuming that the desired editions have already been created, the
second step is automated and is done by invoking the following command from the
\textless ParaView\_source\_dir\textgreater /Catalyst directory:
\begin{minted}{bash}
python catalyze.py -i <edition_dir> -o <Catalyst_source_dir>
\end{minted}
Note that more editions can be added with additional -i \textless edition\_dir\textgreater{} arguments and that these are processed in
the order they are given, first to last. For the minimal base edition included with ParaView, this
would be -i Editions/Base. The generated Catalyst source tree will be put in
\textless Catalyst\_source\_dir\textgreater. For configuring Catalyst from the desired build directory, do the
following:
\begin{minted}{bash}
<Catalyst_source_dir>/cmake.sh <Catalyst_source_dir>
\end{minted}
The next step is to build Catalyst (e.g. using make on Linux systems).
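Since each -i flag adds another edition and the editions are processed in the order given, the shape of the catalyze.py command line can be sketched as follows (a plain-Python sketch with a hypothetical helper; the paths are illustrative):
\begin{minted}{python}
# Illustrative sketch: assembling the catalyze.py command line.
# The -i flags are processed first to last, so Base typically comes first.
def catalyze_command(editions, output_dir, catalyze="python catalyze.py"):
    parts = catalyze.split()
    for edition in editions:
        parts += ["-i", edition]
    parts += ["-o", output_dir]
    return parts

cmd = catalyze_command(["Editions/Base", "Editions/Custom"], "CatalystSrc")
\end{minted}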
\subsection{Creating a ParaView Catalyst Edition}
The main operations for creating an edition of Catalyst are:
......@@ -50,18 +50,22 @@ information with a Python script called catalyze.py that is located in the
By default, Catalyst will be built with the default ParaView build parameters (e.g. build with
shared libraries) unless one of the Catalyst editions changes that in its manifest.json file. An
example of this is shown below:
\begin{minted}{json}
{
  "edition": "Custom",
  "cmake":{
    "cache":[
      {
        "name":"BUILD_SHARED_LIBS",
        "type":"BOOL",
        "value":"OFF"
      }
    ]
  }
}
\end{minted}
Here, ParaView's CMake option of building shared libraries will be set to OFF for this edition
named Custom. It should be
noted that users can still change the build configuration from these settings but it should be
done after Catalyst is configured with the cmake.sh script.
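Each such cache entry corresponds to a CMake cache setting of the form -D\textless name\textgreater:\textless type\textgreater=\textless value\textgreater. The following hedged sketch (hypothetical helper; not part of catalyze.py) shows that correspondence:
\begin{minted}{python}
import json

# Illustrative sketch: turning a manifest.json "cmake"/"cache" section
# into the equivalent cmake -D flags.
manifest_text = """
{
  "edition": "Custom",
  "cmake": {
    "cache": [
      {"name": "BUILD_SHARED_LIBS", "type": "BOOL", "value": "OFF"}
    ]
  }
}
"""

def cache_flags(manifest):
    entries = manifest.get("cmake", {}).get("cache", [])
    return ["-D%s:%s=%s" % (e["name"], e["type"], e["value"])
            for e in entries]

flags = cache_flags(json.loads(manifest_text))
\end{minted}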
\subsection{Copying Files from the ParaView Source Tree into the Created Catalyst Source Tree}
......@@ -71,36 +75,41 @@ Catalyst source tree. Most of these files will be filters but there may also be
classes that need to be copied over as well. In the following JSON snippet we
demonstrate how to copy the vtkPVArrayCalculator class into the generated Catalyst source
tree.
\begin{minted}{json}
{
  "edition": "Custom",
  "modules":[
    {
      "name":"vtkPVVTKExtensionsDefault",
      "path":"ParaViewCore/VTKExtensions/Default",
      "include":[
        {
          "path":"vtkPVArrayCalculator.cxx"
        },
        {
          "path":"vtkPVArrayCalculator.h"
        }
      ],
      "cswrap":true
    }
  ]
}
\end{minted}
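To make the effect of the modules section concrete, here is a small hedged sketch in plain Python (the function name and manifest dictionary are illustrative; the real processing is done by catalyze.py): each include entry names one file to copy from the ParaView source tree into the generated Catalyst source tree at the same relative path.
\begin{minted}{python}
# Illustrative sketch only: what the "modules"/"include" entries drive.
manifest = {
    "modules": [
        {
            "name": "vtkPVVTKExtensionsDefault",
            "path": "ParaViewCore/VTKExtensions/Default",
            "include": [
                {"path": "vtkPVArrayCalculator.cxx"},
                {"path": "vtkPVArrayCalculator.h"},
            ],
            "cswrap": True,
        }
    ]
}

def files_to_copy(manifest):
    """List (source, destination) relative paths for every included file.

    The file keeps the same relative path in the ParaView source tree and
    in the generated Catalyst source tree."""
    pairs = []
    for module in manifest["modules"]:
        for entry in module["include"]:
            rel = module["path"] + "/" + entry["path"]
            pairs.append((rel, rel))
    return pairs
\end{minted}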
A description of the pertinent information follows:
\begin{itemize}
\item {\ttfamily "}name{\ttfamily "}:{\ttfamily "}vtkPVVTKExtensionsDefault{\ttfamily "} -- the name of the VTK or ParaView module.
In this case it is vtkPVVTKExtensionsDefault. The name of the module can be found
in the modules.cmake file in the corresponding directory. It is the first argument to the
vtk\_module() function.
\item {\ttfamily "}path{\ttfamily "}:{\ttfamily "}ParaViewCore/VTKExtensions/Default{\ttfamily "} --
the subdirectory location of the
module relative to the main source tree directory (e.g.
\textless ParaView\_source\_dir\textgreater/ParaViewCore/VTKExtensions/Default
in this case)
\item {\ttfamily "}path{\ttfamily "}:{\ttfamily "}vtkPVArrayCalculator.cxx{\ttfamily "} --
the name of the file to copy from the ParaView
source tree to the generated Catalyst source tree.
\item {\ttfamily "}cswrap{\ttfamily "}:true -- if the source code needs to be client-server wrapped such that it is
available through ParaView's server-manager. For filters that are used through
ParaView's Python interface or through a server-manager hard-coded \Cplusplus pipeline
this should be true. For helper classes this should be false.
......@@ -127,75 +136,75 @@ file in the same directory. This is done with the “replace” keyword. An example is shown
below for the vtkFiltersCore module. Here, the vtkArrayCalculator source code is added to the
Catalyst source tree and so the CMakeLists.txt file in that directory needs to be modified in
order to include that class to be added to the build.
\begin{minted}{json}
"modules":[
  {
    "name":"vtkFiltersCore",
    "path":"VTK/Filters/Core",
    "include":[
      {
        "path":"vtkArrayCalculator.cxx"
      },
      {
        "path":"vtkArrayCalculator.h"
      }
    ],
    "replace":[
      {
        "path":"VTK/Filters/Core/CMakeLists.txt"
      }
    ],
    "cswrap":true
  }
]
\end{minted}
In this case, the CMakeLists.txt file that needs to be copied to the Catalyst source tree exists in
the \textless edition\_dir\textgreater/VTK/Filters/Core directory, where edition\_dir is the location of this custom
edition of Catalyst. Since the Base edition already includes some files from this directory, we
want to make sure that the CMakeLists.txt file from this edition also includes those from the
Base edition. This CMakeLists.txt file is shown below:
\begin{minted}{cmake}
set(Module_SRCS
vtkArrayCalculator.cxx
vtkCellDataToPointData.cxx
vtkContourFilter.cxx
vtkContourGrid.cxx
vtkContourHelper.cxx
vtkCutter.cxx
vtkExecutionTimer.cxx
vtkFeatureEdges.cxx
vtkGridSynchronizedTemplates3D.cxx
vtkMarchingCubes.cxx
vtkMarchingSquares.cxx
vtkPointDataToCellData.cxx
vtkPolyDataNormals.cxx
vtkProbeFilter.cxx
vtkQuadricClustering.cxx
vtkRectilinearSynchronizedTemplates.cxx
vtkSynchronizedTemplates2D.cxx
vtkSynchronizedTemplates3D.cxx
vtkSynchronizedTemplatesCutter3D.cxx
vtkThreshold.cxx
vtkAppendCompositeDataLeaves.cxx
vtkAppendFilter.cxx
vtkAppendPolyData.cxx
vtkImageAppend.cxx
)
set_source_files_properties(
vtkContourHelper
WRAP_EXCLUDE
)
vtk_module_library(vtkFiltersCore ${Module_SRCS})
\end{minted}
Note that this CMakeLists.txt file does two things. Firstly it specifies which files to be compiled in
the source directory. Next, it specifies properties of the source files. In the above example,
vtkContourHelper is given a property specifying that it should not be wrapped. Another property
which is commonly set indicates that a class is an abstract class (i.e. it has pure virtual
functions). An example of how to do this is shown below.
\begin{minted}{cmake}
set_source_files_properties(
vtkXMLPStructuredDataWriter
vtkXMLStructuredDataWriter
ABSTRACT
)
\end{minted}
include(UseLATEX)
file(COPY ${CMAKE_CURRENT_SOURCE_DIR}/../ParaView/menukeys.sty DESTINATION ${CMAKE_CURRENT_BINARY_DIR})
add_latex_document(
ParaViewCatalystUsersGuide.tex
INPUTS
......
ParaViewCatalyst/Images/fullcatalystworkflow.png updated (209 KB to 213 KB).
......@@ -12,7 +12,7 @@ The ParaView Catalyst library is a system that addresses such challenges. It is
integrated directly into large-scale numerical codes. Built on and designed to interoperate with
the standard visualization toolkit VTK and the scalable ParaView application, it enables
simulations to intelligently perform analysis, generate relevant output data, and visualize results
concurrent with a running simulation. This ability to concurrently visualize and analyze data from simulations
is referred to synonymously as \textit{in situ} processing, co-processing, co-analysis,
and co-visualization. Thus ParaView Catalyst is often referred to as a co-processing, or
\textit{in situ}, library for high-performance computing (HPC).
......@@ -23,7 +23,7 @@ an example workflow to demonstrate just how easy Catalyst is to use in practice.
Computing systems have been increasing in speed and capacity for many years now. Yet not all
of the various subsystems which make up a computing environment have been advancing
equally as fast. This has led to many changes in the way large-scale computing is performed.
For example, simulations have long been scaling towards millions of parallel
computing cores in recognition that serial processing is inherently limited by the bottleneck of a
single processor. As a result, parallel computing methods and systems are now central to
modern computer simulation. Similarly, with the number of computing cores
......@@ -144,7 +144,7 @@ longer in duration) to be tossed out because initial conditions, boundary conditions, etc.
were specified incorrectly. By
checking intermediate results it's possible to catch mistakes like
these and terminate such runs before they incur excessive costs. Similarly, co-processing
enables improved debugging of simulation codes. Visualization can be used to great effect to identify
regions of instability or numerical breakdown.
ParaView Catalyst was created as a library to achieve the integration of simulation and post-processing. It
......@@ -167,27 +167,26 @@ visualization output is generated in synchronous fashion (i.e., while the simulation is running).
Catalyst can produce images/screenshots, compute statistical quantities, generate plots, and
extract derived information such as polygonal data or iso-surfaces to visualize geometry and/or
data.
\begin{figure}[h]
\begin{center}
\includegraphics[width=5in]{Images/fullcatalystworkflow.png}
\caption{\textit{In situ} workflow with various Catalyst outputs.}
\label{fig:examplecatalystworkflow}
\end{center}
\end{figure}
Catalyst has been used by a variety of simulation codes, including
PHASTA from UC Boulder; Hydra-TH; MPAS-O,
XRAGE, NPIC and VPIC from LANL; Helios from the Army's Aeroflightdynamics Directorate; CTH,
Albany and the Sierra simulation framework from Sandia; H3D from UCSD; and Code Saturne from EDF,
all of which have been instrumented to use Catalyst. Some example outputs are shown in
Figure~\ref{fig:catalystexampleoutput}.
\begin{figure}[h]
\begin{center}
\subfloat[PHASTA]{\includegraphics[width=3in]{Images/phasta.png}}\,
\subfloat[Helios]{\includegraphics[width=3in]{Images/helios.png}\label{fig:helios}} \\
\subfloat[Code Saturne]{\includegraphics[width=3in]{Images/codesaturne.png}}\,
\subfloat[CTH]{\includegraphics[width=3in]{Images/cth.png}\label{fig:cth}}
\caption{Various results from simulation codes linked with ParaView Catalyst. Note that post-processing with different packages was performed with \protect\subref{fig:helios} and \protect\subref{fig:cth}.}
\label{fig:catalystexampleoutput}
......@@ -227,7 +226,7 @@ etc.
\item \url{www.paraview.org/paraview/resources/software.php} The main ParaView download
page. Useful for installing ParaView on local machines for creating Catalyst scripts and
viewing Catalyst output.
\item \url{www.paraview.org/in-situ} The main page for ParaView Catalyst.
\item \url{paraview@paraview.org} The mailing list for general ParaView and Catalyst support.
\item \url{www.github.com/Kitware/ParaViewCatalystExampleCode} Example code for integrating a
simulation code with Catalyst as well as creating a variety of VTK data structures.
......
......@@ -31,9 +31,26 @@
\usepackage{subfig}
\usepackage{caption}
%\usepackage{subcaption}
%\usepackage[pdftex]{color}
\usepackage{setspace}
% to use \textquotedbl for "
%\usepackage[T1]{fontenc}
\usepackage[framemethod=tikz]{mdframed} % tikz makes the border displayed last -- see http://tex.stackexchange.com/questions/124539/mdframed-missing-half-the-frame
\usepackage[]{minted}
\definecolor{lbcolor}{rgb}{0.9,0.9,0.9}
\BeforeBeginEnvironment{minted}{\begin{mdframed}[linecolor=black, topline=true, bottomline=true, backgroundcolor=lbcolor, leftline=true, rightline=true, userdefinedwidth=\textwidth]}
\AfterEndEnvironment{minted}{\end{mdframed}}
% for the itemized ``bullets''
\usepackage{relsize}
\renewcommand{\labelitemi}{$\bullet$}
\renewcommand{\labelitemii}{$\mathsmaller{\blacklozenge}$}
\renewcommand{\labelitemiii}{$\circ$}
\renewcommand{\labelitemiv}{$\mathsmaller{\lozenge}$}
% package for subfigures
%\usepackage{subfigure}
......@@ -48,61 +65,6 @@
% Program listings need to be typeset
\usepackage[final]{listings}
\lstloadlanguages{C++,XML}
\lstset{language=C++, captionpos=b, basicstyle=\footnotesize, commentstyle=\scriptsize, breaklines=true, showstringspaces=false, tabsize=2, xleftmargin=5mm, xrightmargin=5mm}
\lstnewenvironment{cpplst}[1][]
{\lstset{language=C++, captionpos=b, basicstyle=\footnotesize, commentstyle=\scriptsize, breaklines=true,xleftmargin=5mm, xrightmargin=5mm, #1}
\singlespacing}
{\doublespacing}
\definecolor{Code}{rgb}{0,0,0}
\definecolor{Decorators}{rgb}{0.5,0.5,0.5}
\definecolor{Numbers}{rgb}{0.5,0,0}
\definecolor{MatchingBrackets}{rgb}{0.25,0.5,0.5}
\definecolor{Keywords}{rgb}{0,0,1}
\definecolor{self}{rgb}{0,0,0}
\definecolor{Strings}{rgb}{0,0.63,0}
\definecolor{Comments}{rgb}{0,0.63,1}
\definecolor{Backquotes}{rgb}{0,0,0}
\definecolor{Classname}{rgb}{0,0,0}
\definecolor{FunctionName}{rgb}{0,0,0}
\definecolor{Operators}{rgb}{0,0,0}
\definecolor{Background}{rgb}{0.98,0.98,0.98}
\lstnewenvironment{python}[1][]{
\lstset{
numbers=left,
numberstyle=\footnotesize,
numbersep=1em,
xleftmargin=1em,
framextopmargin=2em,
framexbottommargin=2em,
showspaces=false,
showtabs=false,
showstringspaces=false,
frame=l,
tabsize=4,
% Basic
basicstyle=\ttfamily\small\setstretch{1},
backgroundcolor=\color{Background},
language=Python,
% Comments
commentstyle=\color{Comments}\slshape,
% Strings
stringstyle=\color{Strings},
morecomment=[s][\color{Strings}]{"""}{"""},
morecomment=[s][\color{Strings}]{'''}{'''},
% keywords
morekeywords={import,from,class,def,for,while,if,is,in,elif,else,not,and,or,print,break,continue,return,True,False,None,access,as,,del,except,exec,finally,global,import,lambda,pass,print,raise,try,assert},
keywordstyle={\color{Keywords}\bfseries},
% additional keywords
morekeywords={[2]@invariant},
keywordstyle={[2]\color{Decorators}\slshape},
emph={self},
emphstyle={\color{self}\slshape},
%
}}{}
\newcommand{\mytitle}{\textbf{ParaView Catalyst User's Guide}}
......@@ -128,13 +90,14 @@ emphstyle={\color{self}\slshape},
}
% Now to define several macros for commonly used abbreviations and such
%\usepackage{xspace}
%\newcommand{\fixme}[1]{\footnote{{\color{red}#1}}}
%\newcommand{\note}[1]{{\small\color{blue}[#1]}}
%\newcommand{\todo}[1]{{\large\color{red}[TODO: #1]}}
%\newcommand{\todo}[1]{}
%\newcommand{\note}[1]{}
% formatting for making ``C++'' look nicely in the text sections
\def\Cplusplus{C\raisebox{0.5ex}{\tiny\textbf{++}} }
% Use the cite package to better organise citations
......
......@@ -182,7 +182,7 @@ in to have the reader act as the source for the pipeline.
For users that are comfortable programming in Python, we encourage them to modify the given
scripts as desired. The following information can be helpful for doing this:
\begin{itemize}
\item Sphinx generated ParaView Python API documentation at\\
\url{www.paraview.org/ParaView3/Doc/Nightly/www/py-doc/index.html}.
\item Using the ParaView GUI trace functionality to determine how to create desired filters and
set their parameters. This is done with \menu{Start Trace} and \menu{Stop Trace} under the \menu{Tools}
......@@ -246,18 +246,69 @@ unstructured grid from a topologically regular grid. This is because the filter converts the dataset
from a compact grid data structure to a more general grid data structure.
We classify the filters into several categories, ordered from most memory efficient to least
memory efficient and list some commonly used filters for each category:
\begin{enumerate}
\item Total shallow copy or output independent of input -- negligible memory used in creating a
filter's output. The filters in this category are:
\begin{multicols}{3}
\begin{itemize}
\item Annotate Time
\item Append Attributes
\item Extract Block
\item Extract Datasets
\item Extract Level
\item Glyph
\item Group Datasets
\item Histogram
\item Integrate Variables
\item Normal Glyphs
\item Outline
\item Outline Corners
\item Plot Over Line
\item Probe Location
\end{itemize}
\end{multicols}
\item Add field data -- the same grid is used but an extra variable is stored. The filters in this category are:
\begin{multicols}{3}
\begin{itemize}
\item Block Scalars
\item Calculator
\item Cell Data to Point Data
\item Compute Derivatives
\item Curvature
\item Elevation
\item Generate Ids
\item Generate Surface Normals
\item Gradient
\item Level Scalars
\item Median
\item Mesh Quality
\item Octree Depth Limit
\item Octree Depth Scalars
\item Point Data to Cell Data
\item Process Id Scalars
\item Random Vectors
\item Resample with Dataset
\item Surface Flow
\item Surface Vectors
\item Transform
\item Warp (scalar)
\item Warp (vector)
\end{itemize}
\end{multicols}
\item Topology changing, dimension reduction -- the output is a polygonal dataset but the output
cells are one or more dimensions less than the input cell dimensions. The filters in this category are:
\begin{multicols}{3}
\begin{itemize}
\item Cell Centers
\item Contour
\item Extract CTH Fragments
\item Extract CTH Parts
\item Extract Surface
\item Feature Edges
\item Mask Points
\item Outline (curvilinear)
\item Slice
\item Stream Tracer
\end{itemize}
\end{multicols}
\item Topology changing, moderate reduction -- reduces the total number of cells in the dataset
but outputs in either a polygonal or unstructured grid format. The filters in this category are:
\begin{multicols}{3}
\begin{itemize}
\item Clip
\item Decimate
\item Extract Cells by Region
\item Extract Selection
\item Quadric Clustering
\item Threshold
\end{itemize}
\end{multicols}
\item Topology changing, no reduction -- does not reduce the number of cells in the dataset while
changing the topology of the dataset and outputs in either a polygonal or unstructured grid
format. The filters in this category are:
\begin{multicols}{3}
\begin{itemize}
\item Append Datasets
\item Append Geometry
\item Clean
\item Clean to Grid
\item Connectivity
\item D3
\item Delaunay 2D/3D
\item Extract Edges
\item Linear Extrusion
\item Loop Subdivision
\item Reflect
\item Rotational Extrusion
\item Shrink
\item Smooth
\item Subdivide
\item Tessellate
\item Tetrahedralize
\item Triangle Strips
\item Triangulate
\end{itemize}
\end{multicols}
\end{enumerate}
When creating a pipeline, the filters should generally be ordered in this same fashion to limit
data explosion. For example, pipelines should be organized to reduce dimensionality early.
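The category ordering can also be checked mechanically. The sketch below (hypothetical helper; the category numbers follow the enumeration above, with 1 being the most memory efficient, and only a few representative filters are tabulated) flags pipeline steps that run a costlier filter before a cheaper, reducing one:
\begin{minted}{python}
# Illustrative sketch: rank a few ParaView filters by the memory-cost
# categories above (1 = most memory efficient) and flag pipelines where
# a cheaper category follows a costlier one.
FILTER_CATEGORY = {
    "Extract Block": 1, "Glyph": 1, "Histogram": 1,
    "Calculator": 2, "Gradient": 2, "Transform": 2,
    "Slice": 3, "Contour": 3, "Stream Tracer": 3,
    "Clip": 4, "Threshold": 4, "Decimate": 4,
    "Clean to Grid": 5, "Tetrahedralize": 5, "Triangulate": 5,
}

def ordering_warnings(pipeline):
    """Return (earlier, later) pairs where reordering could limit
    data explosion, i.e. a cheaper filter runs after a costlier one."""
    warnings = []
    for i, name in enumerate(pipeline[:-1]):
        for later in pipeline[i + 1:]:
            if FILTER_CATEGORY[later] < FILTER_CATEGORY[name]:
                warnings.append((name, later))
    return warnings
\end{minted}
For example, slicing before running a Calculator would be flagged, while computing the derived field first and then slicing would not.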
......@@ -265,29 +316,5 @@ Additionally, reduction is preferred over extraction (e.g. the Slice filter is preferred over the Extract Subset
filter). Extracting should only be done when reducing by an order of magnitude or more. When
outputting data extracts, subsampling (e.g. the Extract Subset filter or the Decimate filter) can
be used to reduce file size but caution should be used to make sure that the data reduction
doesn't hide any fine features.