Commit 162f83be authored by Cory Quammen's avatar Cory Quammen

Many small formatting and grammar changes

parent 5b30e3e0
......@@ -38,18 +38,18 @@ the total span of time that the animation can cover. The current displayed time
is indicated both in the Time field at the top and with a thick, vertical,
draggable line within the table.
Along the left side of the \ui{Animation View} is an expandable list of the
names of the animation tracks (i.e., a particular object and property to
animate). You choose a data source and then a particular property of the data
source in the bottom row. To create an animation track with keyframes for that
property, click the \ui{+} on the left-hand side; this will create a new track.
In the figure, tracks already exist for \textit{SphereSource1}'s \ui{Phi
Resolution} property and for the camera's position. To delete a track, press the
\ui{X} button. You can temporarily disable a track by unchecking the check box
on the right of the track. To enter values for the property, double-click within
the white area to the right of the track name. This will bring up the
\ui{Animation Keyframes} dialog. Double-clicking in the camera entry brings up a
dialog like the one in Figure~\ref{fig:EditingCameraTrack}.
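In \pvpython, a comparable track can be created programmatically. The following
is a minimal sketch (the property and keyframe values are only illustrative):
\begin{python}
from paraview.simple import *

sphere = Sphere()
# Get (creating it if needed) the animation track for the
# sphere's Phi Resolution property.
track = GetAnimationTrack('PhiResolution', proxy=sphere)
# Two keyframes: ramp the resolution from 8 to 64 over the
# normalized animation time [0, 1].
keyframe0 = CompositeKeyFrame(KeyTime=0.0, KeyValues=[8])
keyframe1 = CompositeKeyFrame(KeyTime=1.0, KeyValues=[64])
track.KeyFrames = [keyframe0, keyframe1]
\end{python}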
\begin{figure}[htb]
\begin{center}
......@@ -105,13 +105,13 @@ start time. The animation runs for nearly the number of seconds specified by the
\ui{Duration (secs)} spinbox. In turn, the number of frames actually generated (or
rendered) depends on the time to generate (or render) each frame.
In \texttt{Snap To TimeSteps} mode, the number of frames in the animation is
determined by the number of time values in the data set being animated. This is
the animation mode used for \ParaView's default animations: playing through the
time values in a data set one after the other. Default animations are created by
\ParaView when a data set with time values is loaded; no action is required to
create the animation. Note that using this mode when no time-varying data is
loaded will result in no animation at all.
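In \pvpython, the animation mode is a property of the animation scene; a short
sketch (assuming a time-varying data set is already loaded):
\begin{python}
from paraview.simple import *

scene = GetAnimationScene()
# Play through the time values in the loaded data set one after the other.
scene.PlayMode = 'Snap To TimeSteps'
scene.Play()
\end{python}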
In \texttt{Sequence} mode, the final item in the header is the \ui{No. Frames} spinbox. This
spinbox lets you pick the total number of frames for the animation. Similarly, in
......
......@@ -62,7 +62,7 @@ the labels and ticks will be updated based on visual cues.
Now to show a grid along the axes planes, aligned with the ticks and labels, turn
on the \ui{Show Grid} checkbox, resulting in the visualization on the right.
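In \pvpython, the equivalent properties live on the render view's axes grid; a
minimal sketch (property names assume a recent \ParaView version):
\begin{python}
from paraview.simple import *

view = GetActiveView()
# Show the axes grid for the active render view ...
view.AxesGrid.Visibility = 1
# ... and draw grid lines aligned with the ticks and labels.
view.AxesGrid.ShowGrid = 1
Render()
\end{python}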
By default, the gridded faces are always the farthest faces, i.e., they stay behind
the rendered geometry and keep updating as you rotate the scene. To fix which
faces of the bounding-box are to be rendered, use the \ui{Faces To Render}
button (it's an advanced property, so you may have to search for it using the
......
One of the first things that any visualization tool user does when
opening a new dataset and looking at the mesh is to color the mesh with some
scalar data. Color mapping is a common visualization technique that maps data to
color, and displays the colors in the rendered image. Of course, to map the
data array to colors, we use a transfer function. A transfer function can also
be used to map the data array to opacity for rendering translucent surfaces or
for volume rendering. This chapter describes the basics of mapping data arrays to
color and opacity.
\section{The basics}
Color mapping (which often also includes opacity mapping) goes by various names
including scalar mapping and pseudo-coloring. The basic principle entails
mapping data arrays to colors when rendering surface meshes or volumes. Since
data arrays can have arbitrary values and types, you may want to define to which
color a particular data value maps. This mapping is defined using what are
called \emph{color maps} or \emph{transfer functions}. Since such mapping from
data values to rendering primitives can be defined for not just colors, but
opacity values as well, we will use the more generic term \emph{transfer
functions}.
Of course, there are cases when your data arrays indeed specify the
red-green-blue color values to use when rendering (i.e., not using a transfer
......@@ -100,7 +100,7 @@ display.RescaleTransferFunctionToDataRange(True)
\end{python}
The \py{ColorBy} function provided by the \py{simple} module ensures that the color
and opacity transfer functions are set up correctly for the selected array, which
means either using an existing one already associated with the array name or
creating a new one. Passing \texttt{None} as the second argument to
\py{ColorBy} turns off scalar coloring.
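For instance, a short sketch (the \texttt{'Temperature'} array name is
hypothetical):
\begin{python}
from paraview.simple import *

display = GetDisplayProperties()
# Color by a point data array named 'Temperature'.
ColorBy(display, ('POINTS', 'Temperature'))
# Revert to a solid color by turning scalar coloring off.
ColorBy(display, None)
\end{python}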
......@@ -142,7 +142,7 @@ default settings for the current color map.
The \icon{Images/pqSaveArray16.png} and \icon{Images/pqSave32.png}
buttons save the current color and opacity transfer function, with all
its properties, as the default transfer function. \ParaView will use
it next time it needs to set up a transfer function to color a new data
array. The \icon{Images/pqSaveArray16.png} button saves the transfer
function as default for an array of the same name while the
\icon{Images/pqSave32.png} button saves the transfer function as
......@@ -175,7 +175,7 @@ several things:
To map the data to color using a log scale, rather than a linear scale, check
the \ui{Use log scale when mapping data to colors} checkbox. It is assumed that the data
is in the non-zero, positive range. \ParaView will report errors and try to
automatically fix the range if it is ever invalid for log mapping.
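In \pvpython, the equivalent is a property on the color transfer function; a
sketch (the array name is hypothetical):
\begin{python}
from paraview.simple import *

ctf = GetColorTransferFunction('Temperature')
# Redistribute the control points logarithmically, then
# enable log-scale mapping.
ctf.MapControlPointsToLogSpace()
ctf.UseLogScale = 1
\end{python}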
Based on the user preference set in the \ui{Settings} dialog, \ParaView can
automatically rescale the transfer function every time the user hits the
......@@ -186,7 +186,7 @@ you specify for the transfer function will remain unchanged.
\subsection{Transfer function editor}
Using the transfer function editors is pretty straightforward. Control points
in the opacity editor widget and the color editor widget are independent of each
other. To select a control point, click on it. When selected, the control point
is highlighted with a red circle and the data value associated with the control
......@@ -335,14 +335,14 @@ current data ranges. You can do this as follows:
\begin{python}
>>> source = GetActiveSource()
# Update the pipeline, if it hasn't been updated already.
>>> source.UpdatePipeline()
# First, locate the display properties for the source of interest.
>>> display = GetDisplayProperties()
# Reset the color and opacity maps currently used by 'display' to
# use the range of the array that 'display' is using for color mapping.
# This requires that the 'display' has been set to use scalar coloring
# using an array that is available in the data generated. If not, you will
# get errors.
......@@ -495,9 +495,9 @@ The \icon{Images/pqFilter32.png} and
annotations widget with unique discrete values from a data array, if
possible. Based on the number of distinct values present in the data
array, this may not yield any result (instead, a warning message will
be shown). The data array values come either from the selected source object
if you use the \icon{Images/pqFilter32.png} button or from the visible
pipeline objects if you use the
\icon{Images/pqFilterEyeball16.png} button.
\subsection{Annotations in \texttt{pvpython}}
......@@ -540,7 +540,7 @@ it is tedious and cumbersome.
Categorical color maps provide an elegant solution for
such cases. Instead of a continuous color transfer function, the user specifies a
set of discrete values and colors to use for those values. For any element where
the data value matches the values in the lookup table exactly, \paraview renders
the specified color; otherwise, the NaN color is used.
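In \pvpython, categorical mode corresponds to properties on the lookup table; a
minimal sketch (array name, values, and colors are only illustrative):
\begin{python}
from paraview.simple import *

lut = GetColorTransferFunction('Modes')
lut.InterpretValuesAsCategories = 1
# Flat list of (value, annotation text) pairs.
lut.Annotations = ['1', 'Alpha', '2', 'Beta']
# One RGB triplet per annotated value.
lut.IndexedColors = [1.0, 0.0, 0.0,
                     0.0, 0.0, 1.0]
\end{python}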
The color legend or scalar bar also switches to a new mode where it renders
......@@ -557,7 +557,7 @@ annotations for each value in the lookup table.
\end{center}
\end{figure}
To tell \paraview that the data array is to be treated as categories for coloring,
check the \ui{Interpret Values As Categories} checkbox in the \ui{Color Map
Editor} panel. As soon as that's checked, the panel switches to categorical mode:
The \ui{Mapping Data} group is hidden, and the \ui{Annotations} group becomes a
......@@ -588,11 +588,6 @@ color map to set colors for the values added. If the preset has fewer colors tha
the annotated values, then the user may have to manually set the colors for those
extra annotations.
\begin{commonerrors}
Categorical color maps are designed for data arrays with enumerations, which are
typically integer arrays. However, they can be used for arrays with floating
......
......@@ -168,9 +168,9 @@ change.
\section{Adding annotations}
You add annotations like color legends, text, and cube-axes
exactly as you would with a regular \ui{Render View}. As with other
properties, annotations will show up in all of the internal views.
\begin{didyouknow}
You can use the \ui{Annotate Time} source or filter to show the data time or
......
......@@ -9,8 +9,8 @@ relevant information can be represented in these views.
Referring back to the visualization pipeline from
Section~\ref{sec:BasicsOfVisualization}, views are sinks that take in input data
but do not produce any data output (i.e., one cannot connect other pipeline
modules such as filters to process the results in a view). However,
views often provide mechanisms to save the results as images or in other formats
including PDF, VRML, and X3D.
......@@ -177,12 +177,11 @@ In Python, each tab that corresponds is referred to as a layout.
>>> layout.AssignView(locationId, view2)
\end{python}
\section{View properties}
Just like parameters on pipeline modules, such as readers and filters, views
provide parameters that can be used for customizing the visualization, such as
changing the background color for rendering views and adding title texts for
chart views. These parameters are referred to as \ui{View Properties} and are
accessible from the \ui{Properties} panel in \paraview.
\subsection{View properties in \texttt{paraview}}
Similar to properties on pipeline modules like sources and readers, view
......@@ -209,14 +208,14 @@ the \ui{Apply} action. This way, you can set up the pipeline properties to your
liking and then trigger the potentially time consuming execution.
Since the visualization process in general focuses on \emph{reducing} data to
generate visual representations, the rendering (broadly speaking) is less
time-intensive than the actual data processing. Thus, changing properties that
affect rendering is not as compute-intensive as transforming the data itself.
For example, changing the color on a surface mesh is not as expensive as
generating the mesh in the first place. Hence, the need to \ui{Apply} such
properties becomes less relevant. At the same time, when changing display
properties such as opacity, you may want to see the result as you change the
property to decide on the final value. Hence, it is desirable to see the
updates immediately.
Of course, you can always enable \ui{Auto Apply} to have the same immediate
update behavior for all properties on the \ui{Properties} panel.
......@@ -224,9 +223,9 @@ update behavior for all properties on the \ui{Properties} panel.
\subsection{View properties in \texttt{pvpython}}
In \pvpython, once you have access to the view, you can directly change view
properties on the view object. There are several ways to get access to the view
object.
\begin{python}
# 1. Save reference when a view is created
......@@ -303,7 +302,6 @@ To access display properties in \pvpython, you can use
\py{SetDisplayProperties} and \\* \py{GetDisplayProperty} methods.
\begin{python}
# Using SetDisplayProperties/GetDisplayProperties to access the display
# properties for the active source in the active view.
......@@ -838,7 +836,7 @@ however, programmatically change the camera as follows:
# that the view up vector is whatever was set via SetViewUp, and is not
# necessarily perpendicular to the direction of projection. The result is a
# horizontal rotation of the camera.
>>> camera.Azimuth(30)
# Rotate the focal point about the view up vector, using the camera's position
# as the center of rotation. Note that the view up vector is whatever was set
......@@ -871,7 +869,7 @@ etc,. to explicitly place the camera in the scene.
# If ParallelProjection is set to True, then you'll need
# to specify the parallel scale as well, i.e., the height of the viewport in
# world-coordinate distances. The default is 1. Note that the `scale'
# parameter works as an `inverse scale' where larger numbers produce smaller
# images. This method has no effect in perspective projection mode.
>>> camera.SetParallelScale(1)
\end{python}
......@@ -908,7 +906,6 @@ Similar to view properties, display properties are accessible from the display
properties object or using the \py{SetDisplayProperties} function.
\begin{python}
>>> displayProperties = GetDisplayProperties(source, view)
# Both source and view are optional. If not specified, the active source
# and active view will be used.
......@@ -949,11 +946,11 @@ sample point, etc.
One of the most common ways of showing a line plot is to apply the \ui{Plot Over
Line} filter to any dataset. This will probe the dataset along the probe line
specified. You then plot the sampled values in the \ui{Line Chart View}.
Alternatively, if you have a tabular dataset (i.e., \ui{vtkTable}), then you can
directly show the data in this view.
\begin{didyouknow}
You can plot any arbitrary dataset, even those not producing \ui{vtkTable} outputs,
by using the \ui{Plot Data} filter. Remember, however, that for
extremely large datasets, while \ui{Render View} may use parallel rendering
strategies to improve performance and reduce memory requirements, chart views
......@@ -1058,11 +1055,11 @@ number less than or equal to 0 is undefined.
Display properties allow you to set up which series or data arrays are plotted
in this view. You start by picking the \ui{Attribute Type}. Select the attribute
type that has the arrays of interest. For example, if you are plotting arrays
associated with points, then you should pick \ui{Point Data}. Arrays with
different associations cannot be plotted together. You may need to apply filters
such as \ui{Cell Data to Point Data} or \ui{Point Data to Cell Data} to convert
arrays between different associations.
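In \pvpython, these display properties can be set directly on the chart
representation; a sketch (the series name is hypothetical):
\begin{python}
from paraview.simple import *

display = GetDisplayProperties()
# Plot point data arrays.
display.AttributeType = 'Point Data'
# Series visibility is a flat list of (series name, '0'/'1') pairs.
display.SeriesVisibility = ['Pressure', '1']
\end{python}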
\begin{center}
\includegraphics[width=0.5\linewidth]{Images/LineChartViewDisplayPropertiesXAxisParametersGroup.png}
......@@ -1115,8 +1112,8 @@ The following script demonstrates the typical usage:
<paraview.servermanager.Wavelet object at 0x1156fd810>
# We update the source so that when we create PlotOverLine filter
# it has input data available to determine good defaults. Otherwise,
# we will have to manually set up the defaults.
>>> UpdatePipeline()
# Now, create the PlotOverLine filter. It will be initialized using
......@@ -1195,7 +1192,7 @@ under the display properties, the \ui{Series Parameters} like \ui{Line Style} an
\begin{center}
\includegraphics[width=0.7\linewidth]{Images/PlotMatrixViewInParaView.png}
\caption{\paraview using \ui{Plot Matrix View} to generate a
scatter plot matrix to understand correlations between pairs of variables.}
\label{fig:PlotMatrixViewInParaView}
\end{center}
\end{figure}
......
......@@ -55,9 +55,9 @@ caution on unstructured data.
\end{compactitem}
\end{multicols}
Technically, the \ui{Ribbon} and \ui{Tube} filters should fall into this list.
However, as they only work on 1D cells in poly data, the input data is usually
small and of little concern.
This similar set of filters also outputs unstructured grids, but also tends to
reduce some of this data. Be aware though that this data reduction is often
......@@ -77,7 +77,7 @@ caution on unstructured data and extreme caution on structured data.
\end{compactitem}
\end{multicols}
Similar to the items in the preceding list, \ui{Extract Subset} performs data
reduction on a structured dataset, but also outputs a structured dataset. So the
warning about creating new data still applies, but you do not have to worry
about converting to an unstructured grid.
......@@ -164,18 +164,19 @@ circumstances (although they may take a lot of time).
\end{multicols}
There are a few special case filters that do not fit well into any of the
previous classes. Some of the filters, currently \ui{Temporal Interpolator} and
\ui{Particle Tracer}, perform calculations based on how data changes over time.
Thus, these filters may need to load data for two or more instances of time,
which can double or more the amount of data needed in memory. The \ui{Temporal
Cache} filter will also hold data for multiple instances of time. Keep in mind
that some of the temporal filters such as \ui{Temporal Statistics} and the
filters that plot over time may need to iteratively load all data from disk.
Thus, it may take an impractically long amount of time even if it does not require
any extra memory.
The \ui{Programmable Filter} is also a special case that is impossible to
classify. Since this filter does whatever it is programmed to do, it can fall
into any one of these categories.
\subsection{Culling data}
......@@ -187,41 +188,44 @@ convert to a surface early on. Once you do that, you can apply other filters in
relative safety.
A very common visualization operation is to extract isosurfaces from a volume
using the \ui{Contour} filter. The \ui{Contour} filter usually outputs geometry
much smaller than its input. Thus, the \ui{Contour} filter should be applied
early if it is to be used at all. Be careful when setting up the parameters to
the \ui{Contour} filter because it still is possible for it to generate a lot of
data, which can happen if you specify many isosurface values. High frequencies
such as noise around an isosurface value can also cause a large, irregular
surface to form.
Another way to peer inside of a volume is to perform a \ui{Slice} on it. The
\ui{Slice} filter will intersect a volume with a plane and allow you to see the
data in the volume where the plane intersects. If you know the relative location
of an interesting feature in your large dataset, slicing is a good way to view
it.
If you have little \emph{a priori} knowledge of your data and would like to
explore the data without the large memory footprint and long processing time of
the full dataset, you can use the \ui{Extract Subset} filter to subsample the
data. The subsampled data can be dramatically smaller than the original data and
should still be well load balanced. Of course, be aware that you may miss small
features if the subsampling steps over them and that once you find a feature you
should go back and visualize it with the full data set.
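A sketch of such subsampling in \pvpython (sample rates are only illustrative):
\begin{python}
from paraview.simple import *

# Keep every fourth sample in each logical direction of a
# structured dataset (here, the active source).
subset = ExtractSubset(Input=GetActiveSource())
subset.SampleRateI = 4
subset.SampleRateJ = 4
subset.SampleRateK = 4
Show(subset)
\end{python}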
There are also several features that can pull out a subset of a volume:
\ui{Clip}, \ui{Threshold}, \ui{Extract Selection}, and \ui{Extract Subset} can
all extract cells based on some criterion. Be aware, however, that the extracted
cells are almost never well balanced; expect some processes to have no cells
removed. All of these filters, with the exception of \ui{Extract Subset}, will
convert structured data types to unstructured grids. Therefore, they should not
be used unless the extracted cells are at least an order of magnitude smaller
than the source data.
When possible, replace the use of a filter that extracts 3D data with one that
will extract 2D surfaces. For example, if you are interested in a plane through
the data, use the \ui{Slice} filter rather than the \ui{Clip} filter. If you are
interested in knowing the location of a region of cells containing a particular
range of values, consider using the \ui{Contour} filter to generate surfaces at
the ends of the range rather than extract all of the cells with the
\ui{Threshold} filter. Be aware that substituting filters can have an effect on
downstream filters. For example, running the \ui{Histogram} filter after
\ui{Threshold} will have an entirely different effect than running it after the
roughly equivalent \ui{Contour} filter.
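The substitution can be sketched in \pvpython as follows (the array name and
isosurface values are hypothetical):
\begin{python}
from paraview.simple import *

data = GetActiveSource()
# A slice extracts a 2D plane through the data ...
slice1 = Slice(Input=data)
slice1.SliceType.Origin = [0.0, 0.0, 0.0]
slice1.SliceType.Normal = [1.0, 0.0, 0.0]
# ... and contouring both ends of a value range produces surfaces
# bounding the region that a Threshold would extract as 3D cells.
contour1 = Contour(Input=data,
                   ContourBy=['POINTS', 'Temperature'],
                   Isosurfaces=[300.0, 400.0])
\end{python}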
......@@ -112,12 +112,12 @@ process id, individual processes may be targeted. For example, this allows you t
quickly attach a debugger to a server process running on a remote cluster. If
the target rank is not on the same host as the client, then the command is
considered remote. Otherwise, it is considered local. Therefore, remote commands are
executed via \executable{ssh}, while local commands are not. A list of command templates is
maintained. In addition to a number of pre-defined command templates, you may
add templates or edit existing ones. The default templates allow you to:
\begin{compactenum}
\item Attach \executable{gdb} to the selected process
\item Run \executable{top} on the host of the selected process
\item Send a signal to the selected process
\end{compactenum}
......@@ -132,20 +132,22 @@ The following tokens are available and may be used in command templates as neede
\begin{enumerate}
\item \emph{\$TERM\_EXEC\$} : The terminal program that will be used to execute
commands. On Unix systems, \executable{xterm} is typically used. On Windows systems,
\executable{cmd.exe} is typically used. If the program is not in the default
path, then the full path must be specified.
\item \emph{\$TERM\_OPTS\$} : Command line arguments for the terminal program.
On Unix, these may be used to set the terminal's window title, size, colors, and
so on.
\item \emph{\$SSH\_EXEC\$} : The program to use to execute remote commands. On
Unix, this is typically \executable{ssh}. On Windows, one option is
\executable{plink.exe}. If the program is not in the default path, then the full
path must be specified.
\item \emph{\$FE\_URL\$} : SSH URL to use when the remote processes are on
compute nodes that are not visible to the outside world. This token is used to
construct command templates where two \executable{ssh} hops are made to execute
the command.
\item \emph{\$PV\_HOST\$} : The hostname where the selected process is running.
\item \emph{\$PV\_PID\$} : The process-id of the selected process.
......@@ -154,23 +156,24 @@ construct command templates where two ssh hops are made to execute the command.
Note: On Windows, the debugging tools found in Microsoft's SDK need to be
installed in addition to Visual Studio (e.g., \executable{windbg.exe}). The
\executable{ssh} program \executable{plink.exe} for Windows doesn't parse ANSI
escape codes that are used by Unix shell programs. In general, the
Windows-specific templates need some polishing.
\subsection{Stack trace signal handler}
The Process Group's context menu
provides a back trace signal handler option. When enabled, a signal handler is
installed that will catch signals such as SEGV, TERM, INT, and ABORT and that
will print a stack trace before the process exits. Once the signal handler is
enabled, you may trigger a stack trace by explicitly sending a signal. The stack
trace signal handler can be used to collect information about crashes or to
trigger a stack trace during deadlocks when it's not possible to
\executable{ssh} into compute nodes. Sites that restrict users' \executable{ssh}
access to compute nodes often provide a way to signal running processes from the
login node. Note that this feature is only available on systems that provide
support for POSIX signals, and we have currently implemented stack traces only
for GNU-compatible compilers.
\section{Compilation and installation considerations}
......
......@@ -95,7 +95,7 @@ to learn more about customizing default property values.
The \ui{Properties} panel, by default, is set up to show the source, display, and
view properties on the same panel. You may, however, prefer to have each of these
sections in a separate dockable panel. You can indeed do so using the
\ui{Settings} dialog accessible from the \menu{Edit > Settings} menu.
On the \ui{General} tab, search for \texttt{properties panel} using the
......
......@@ -10,7 +10,7 @@ One of the primary focuses of \ParaView since its early development is
customizability. We wanted to enable developers and users to plug in their own
code to read custom file formats or to execute special algorithms for data
processing. Thus, an extensive plugin infrastructure was born, enabling
developers to write C++ code that can be compiled into plugins that can be imported
into the \ParaView executables at runtime. While such plugins are immensely
flexible, the process can be tedious and overwhelming, since developers would
need to familiarize themselves with the large C++ APIs provided by VTK and
......@@ -22,7 +22,7 @@ were born.
With these programmable modules, you can write Python scripts that get executed
by \ParaView to generate and process data, just like the other C++ modules. Since
the programming environment is Python, it means that you have access to a plethora
of Python packages including NumPy and SciPy. %\fixme{references}.
By using such
packages, along with the data processing API provided by \ParaView, you can
quickly put together readers and filters to extend and customize \ParaView.
......@@ -237,7 +237,7 @@ output.GetInformation().Set(output.DATA_TIME_STEP(), req_time)
This is similar to Section~\ref{sec:ReadingACSVFile}. Now, however, let's say the CSV has three
columns named ``X'', ``Y'', and ``Z'' that we want to treat as point coordinates and
produce a \ui{vtkPolyData} with points instead of a \ui{vtkTable}. For that, we first
ensure that \ui{Output DataSet Type} is set to \ui{vtkPolyData}. Next, we use
the following \ui{Script}:
......@@ -362,14 +362,14 @@ numPts = 80 # Points along Helix
length = 8.0 # Length of Helix
rounds = 3.0 # Number of times around
# Compute the point coordinates for the helix.
index = np.arange(0, numPts, dtype=np.int32)
scalars = index * rounds * 2 * math.pi / numPts
x = index * length / numPts
y = np.sin(scalars)
z = np.cos(scalars)
# Create a (x,y,z) coordinates array and associate that with
# points to pass to the output dataset.
coordinates = algs.make_vector(x, y, z)
pts = vtk.vtkPoints()
......
......@@ -36,7 +36,7 @@
@misc{ColorInterpolationBlogPost,
title={{What is InterpolateScalarsBeforeMapping in VTK?}},
author={{Pat Marion}},
howpublished={\url{https://blog.kitware.com/what-is-interpolatescalarsbeforemapping-in-vtk/}}
}
@misc{ParaViewDoxygen,
title={{ParaView API documentation}},
......@@ -99,7 +99,7 @@
@misc{numpy,
title={{NumPy}},
author={{NumPy developers}},
howpublished={\url{http://www.numpy.org/}}
}
@misc{dv3d,
......
......@@ -72,9 +72,9 @@ you want to use in your Python script, when tracing, to avoid runtime issues.
Views that render results (this includes almost all of the views, except
\ui{SpreadSheet View}) support saving images (or screenshots) in one of the
standard image formats (PNG, JPEG, TIFF, BMP, PPM).
Certain views also support exporting the results in several formats such as
PDF, X3D, and VRML.
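In \pvpython, both operations are single calls; a sketch (option names assume a
recent \ParaView version):
\begin{python}
from paraview.simple import *

view = GetActiveView()
# Save the active view as a PNG image at a given resolution.
SaveScreenshot('screenshot.png', view, ImageResolution=[1920, 1080])
# Export the scene, e.g., as PDF.
ExportView('scene.pdf', view)
\end{python}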
\subsection{Saving screenshots}
......@@ -112,9 +112,9 @@ are as follows.
The default, \ui{Scale fonts proportionally}, tries to achieve WYSIWYG as long
as the aspect ratio is maintained. This is suitable for saving images targeted
for a higher DPI (or PPI) display than your screen. \ui{Do not scale fonts}
may be used to avoid font scaling and keep the font size in pixels the same as
what is currently on the screen. This is suitable for saving images targeted
for a larger display with the same pixel resolution.
\item \ui{Override Color Palette}: You can change the color palette just for
saving the screenshot using this drop-down.
\item \ui{Stereo Mode}: This option lets you save the image using one of the
......@@ -232,9 +232,9 @@ with additional of a few animation-specific parameters. These are as follows:
timestep number.
\end{itemize}
On accepting this dialog, you will be able to choose the output file location
and format. The available file formats include AVI and OGG (when available)
video formats, as well as image formats such as PNG, JPEG, and TIFF. If saving as
images, \ParaView will generate a series of image files sequentially numbered
using the frame number as a suffix to the specified filename.
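The \pvpython equivalent is the \py{SaveAnimation} function; a sketch (option
names assume a recent \ParaView version):
\begin{python}
from paraview.simple import *

view = GetActiveView()
# Save the animation as a video file, where supported ...
SaveAnimation('movie.avi', view, FrameRate=15)
# ... or as a numbered series of PNG images.
SaveAnimation('frame.png', view)
\end{python}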
......@@ -257,7 +257,7 @@ same as the \py{SaveScreenshot} with additional parameters for the animation spe
Besides saving the results produced by your visualization setup, you can save
the state of the visualization pipeline itself, including all the pipeline
modules, views, their layout, and their properties. This is referred to as the
\keyterm{application state}, or just state. In \paraview, you can save the
state using the \menu{File > Save State\ldots} menu option. Conversely, to load a saved
state file, you can use \menu{File > Load State\ldots}.
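In \pvpython, saving and restoring state is a pair of calls; a minimal sketch
(the file name is only illustrative):
\begin{python}
from paraview.simple import *

# Save the current application state ...
SaveState('myvis.pvsm')
# ... and restore it later, e.g., in a fresh session.
LoadState('myvis.pvsm')
\end{python}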
......
......@@ -302,8 +302,8 @@ operator. Options include the following:
\item \ui{is $<=$} matches all values less than or equal to the specified value
\item \ui{is min} matches the minimum value for the array for the current time step
\item \ui{is max} matches the maximum value for the array for the current time step
\item \ui{is less than mean} matches values less than or equal to the mean
\item \ui{is greater than mean} matches values greater than or equal to the mean
\item \ui{is equal to mean with tolerance} matches values equal to the mean
within the specified tolerance
\end{compactitem}
......@@ -447,13 +447,13 @@ visualization similar to the one shown here.
Instead of using the view for defining the selection, you could have used the
\ui{Find Data} dialog. In that case, instead of being able to plot each element
over time, you will be plotting summaries for the selected subset over time.
This is essential since the selected subset can have a varying number of
elements over time. The summaries include quantities like minimum, maximum, and
median of available variables. You can make the filter always produce these
statistics alone (even when the selection is created by selecting specific
elements in a view) by checking the \ui{Only Report Selection Statistics}
property on the \ui{Properties} panel for the \ui{Plot Selection Over Time}
filter.
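A sketch of the same setup in \pvpython (assuming a selection has already been
associated with the active source, e.g., via the GUI or a selection source):
\begin{python}
from paraview.simple import *

plot = PlotSelectionOverTime(Input=GetActiveSource())
# Report only summary statistics for the selected subset.
plot.OnlyReportSelectionStatistics = 1
Show(plot)
\end{python}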
\section{Freezing selections}
......
As with any large application, \paraview provides mechanisms to customize some
of its application behavior. These are referred to as \keyterm{application
settings}, or just settings. Such settings can be changed using the \ui{Settings} dialog,
which is accessed from the \menu{Edit > Settings} (\menu{ParaView >
Preferences} on the Mac) menu. We have seen parts of this dialog earlier, e.g., in
Sections~\ref{sec:PropertiesPanelLayoutSettings},
......@@ -229,7 +229,7 @@ result after loading the \ui{Print} palette.}
\end{center}
\end{figure}
Now let's say you want to generate an image for printing. Typically, for
printing, you'd want the background color to be white and the wireframes and
annotations to be colored black. To do that, one way is to change each of the
colors for each of the views, displays, and cube-axes. You can imagine how
......
......@@ -257,7 +257,6 @@ that provides information for each of the arrays. This object gives us
methods to get data ranges, component counts, tuple counts, etc.
\begin{python}
# Let's get information about 'ACCL' array.
>>> arrayInfo = reader.PointData["ACCL"]
>>> arrayInfo.GetName()
......@@ -302,7 +301,6 @@ Here's a sample script to iterate over all point data arrays and print their
magnitude ranges:
\begin{python}
>>> def print_point_data_ranges(source):
... """Prints array ranges for all point arrays"""
... for arrayInfo in source.PointData:
......
......@@ -231,7 +231,7 @@ topology, an unstructured grid uses significantly more memory to represent its
mesh. Therefore, use an unstructured grid only when you cannot represent your
dataset as one of the above datasets. VTK supports a large number of cell types,
all of which can exist (heterogeneously) within one unstructured grid. The full
list of all cell types supported by VTK can be found in the file \ui{vtkCellType.h}
in the VTK source code. Here is the list of cell types and their numeric values
as of when this document was written:
......