Commit 9cb05149 authored by Cory Quammen, committed by Kitware Robot

Merge topic 'updates_for_5_4'

592400a7 Change formatting when refering to VTK objects
e8c30d67 Add missing space
e5765c93 Update figure images
39854491 Bump version to 5.4
241ff120 Add descriptions of color legend parameters
3020f611 Fixup: Escape underscores in Python code sections
7dc5d399 Update chart options
33d469ae Explain additional miscellaneous display properties
...
Acked-by: Kitware Robot <kwrobot@kitware.com>
Reviewed-by: Utkarsh Ayachit <utkarsh.ayachit@kitware.com>
Merge-request: !64
parents 0a64f883 592400a7
Pipeline #61089 passed
animation, playback will be much faster because very little computation must be
done to generate the images. Also, the results of the animation can be saved to
image files (one image per animation frame) or to a movie file. The geometry
rendered at each frame can also be saved in \ParaView's PVD file format, which
can be loaded back into \ParaView as a time varying dataset.
\section{Animation View}
the total span of time that the animation can cover. The current displayed time
is indicated both in the Time field at the top and with a thick, vertical,
draggable line within the table.
Along the left side of the \ui{Animation View} is an expandable list of the
names of the animation tracks (i.e., a particular object and property to
animate). You choose a data source and then a particular property of the data
source in the bottom row. To create an animation track with keyframes for that
property, click the \ui{+} on the left-hand side; this will create a new track.
In the figure, tracks already exist for \textit{SphereSource1}'s \ui{Phi
Resolution} property and for the camera's position. To delete a track, press the
\ui{X} button. You can temporarily disable a track by unchecking the check box
on the right of the track. To enter values for the property, double-click within
the white area to the right of the track name. This will bring up the
\ui{Animation Keyframes} dialog. Double-clicking in the camera entry brings up a
dialog like the one in Figure~\ref{fig:EditingCameraTrack}.
\begin{figure}[htb]
\begin{center}
start time. The animation runs for nearly the number of seconds specified by the
\ui{Duration (secs)} spinbox. In turn, the number of frames actually generated (or
rendered) depends on the time to generate (or render) each frame.
In \texttt{Snap To TimeSteps} mode, the number of frames in the animation is
determined by the number of time values in the dataset being animated. This is
the animation mode used for \ParaView's default animations: playing through the
time values in a dataset one after the other. Default animations are created by
\ParaView when a dataset with time values is loaded; no action is required to
create the animation. Note that using this mode when no time-varying data is
loaded will result in no animation at all.
In \texttt{Sequence} mode, the final item in the header is the \ui{No. Frames} spinbox. This
spinbox lets you pick the total number of frames for the animation. Similarly, in
\texttt{Real Time} mode, the final line lets you choose the duration of the animation. In
\texttt{Snap To TimeSteps} mode, the total number of frames is dictated by the dataset
and, therefore, the spinbox is disabled.
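The frame timing in \texttt{Sequence} mode can be sketched as evenly spaced samples between the start and end times (the interpolation below is an illustrative assumption, not \ParaView's actual implementation):

\begin{python}
# Hypothetical sketch: evenly spaced animation times in Sequence mode,
# given a start time, an end time, and a total number of frames.
start, end, n_frames = 0.0, 10.0, 11
times = [start + i * (end - start) / (n_frames - 1) for i in range(n_frames)]
print(times[0], times[1], times[-1])  # 0.0 1.0 10.0
\end{python}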
The \ui{Time} entry-box shows the current animation time, which is the same as shown by a
vertical marker in this view. You can change the current animation time by
either entering a value in this box, if available, or by dragging the vertical
marker. The \ui{Start Time} and \ui{End Time} entry-boxes display the start and end times
for the animation. By default, when you load time varying datasets, the start
and end times are automatically adjusted to cover the entire time range present
in the data. The lock check-buttons to the right of the \ui{Start Time} and \ui{End Time}
widgets will prevent this from happening, so that you can ensure that your
the labels and ticks will be updated based on visual cues.
Now to show a grid along the axes planes, aligned with the ticks and labels, turn
on the \ui{Show Grid} checkbox, resulting in a visualization on the right.
By default, the gridded faces are always the farthest faces, i.e., they stay behind
the rendered geometry and keep on updating as you rotate the scene. To fix which
faces of the bounding-box are to be rendered, use the \ui{Faces To Render}
button (it's an advanced property, so you may have to search for it using the
change.
\section{Adding annotations}
You add annotations like color legends, text, and cube-axes
exactly as you would with a regular \ui{Render View}. As with other
properties, annotations will show up in all of the internal views.
\begin{didyouknow}
You can use the \ui{Annotate Time} source or filter to show the data time or
\begin{centering}
\includegraphics[width=8cm]{Images/Kitwarelogo.png}
Published by Kitware Inc. \copyright 2017\\
All product names mentioned herein are the trademarks of their respective owners. \\
caution on unstructured data.
\end{compactitem}
\end{multicols}
Technically, the \ui{Ribbon} and \ui{Tube} filters should fall into this list.
However, as they only work on 1D cells in poly data, the input data is usually
small and of little concern.
This similar set of filters also outputs unstructured grids, but also tends to
reduce some of this data. Be aware though that this data reduction is often
caution on unstructured data and extreme caution on structured data.
\end{compactitem}
\end{multicols}
Similar to the items in the preceding list, \ui{Extract Subset} performs data
reduction on a structured dataset, but also outputs a structured dataset. So the
warning about creating new data still applies, but you do not have to worry
about converting to an unstructured grid.
circumstances (although they may take a lot of time).
\end{multicols}
There are a few special case filters that do not fit well into any of the
previous classes. Some of the filters, currently \ui{Temporal Interpolator} and
\ui{Particle Tracer}, perform calculations based on how data changes over time.
Thus, these filters may need to load data for two or more instances of time,
which can double or more the amount of data needed in memory. The \ui{Temporal
Cache} filter will also hold data for multiple instances of time. Keep in mind
that some of the temporal filters, such as \ui{Temporal Statistics} and the
filters that plot over time, may need to iteratively load all data from disk.
Thus, it may take an impractically long amount of time even if it does not
require any extra memory.
The \ui{Programmable Filter} is also a special case that is impossible to
classify. Since this filter does whatever it is programmed to do, it can fall
into any one of these categories.
\subsection{Culling data}
convert to a surface early on. Once you do that, you can apply other filters in
relative safety.
A very common visualization operation is to extract isosurfaces from a volume
using the \ui{Contour} filter. The \ui{Contour} filter usually outputs geometry much
smaller than its input. Thus, the \ui{Contour} filter should be applied early if
it is to be used at all. Be careful when setting up the parameters to the
\ui{Contour} filter because it still is possible for it to generate a lot of
data, which can happen if you specify many isosurface values. High frequencies
such as noise around an isosurface value can also cause a large, irregular
surface to form.
Another way to peer inside of a volume is to perform a \ui{Slice} on it. The
\ui{Slice} filter will intersect a volume with a plane and allow you to see the
data in the volume where the plane intersects. If you know the relative location
of an interesting feature in your large dataset, slicing is a good way to view
it.
If you have little \emph{a priori} knowledge of your data and would like to
explore the data without the long memory and processing time for the full
dataset, you can use the \ui{Extract Subset} filter to subsample the data. The
subsampled data can be dramatically smaller than the original data and should
still be well load balanced. Of course, be aware that you may miss small
features if the subsampling steps over them and that once you find a feature you
should go back and visualize it with the full dataset.
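The memory savings from subsampling can be illustrated with a simple striding example (this NumPy sketch is only loosely analogous to what \ui{Extract Subset} does; it is not \ParaView code):

\begin{python}
import numpy as np

# Illustrative sketch: striding over a structured volume reduces the data
# dramatically, but features narrower than the stride can be skipped.
volume = np.random.rand(64, 64, 64)
subset = volume[::4, ::4, ::4]
print(subset.shape)  # (16, 16, 16): 64x fewer samples
\end{python}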
There are also several features that can pull out a subset of a volume:
\ui{Clip}, \ui{Threshold}, \ui{Extract Selection}, and \ui{Extract Subset} can
all extract cells based on some criterion. Be aware, however, that the extracted
cells are almost never well balanced; expect some processes to have no cells
removed. All of these filters, with the exception of \ui{Extract Subset}, will
convert structured data types to unstructured grids. Therefore, they should not
be used unless the extracted cells are at least an order of magnitude smaller
than the source data.
When possible, replace the use of a filter that extracts 3D data with one that
will extract 2D surfaces. For example, if you are interested in a plane through
the data, use the \ui{Slice} filter rather than the \ui{Clip} filter. If you are
interested in knowing the location of a region of cells containing a particular
range of values, consider using the \ui{Contour} filter to generate surfaces at
the ends of the range rather than extract all of the cells with the
\ui{Threshold} filter. Be aware that substituting filters can have an effect on
downstream filters. For example, running the \ui{Histogram} filter after
\ui{Threshold} will have an entirely different effect than running it after the
roughly equivalent \ui{Contour} filter.
ParaView/Images/VCRAndTimeControls.png (image updated: 88.7 KB to 73.3 KB)
Remember to hit the \ui{Enter} or \ui{Return} key after every command to execute
it. The Python interpreter will not execute the command until \ui{Enter} is hit.
\end{commonerrors}
If the module is loaded correctly, \pvpython will present a prompt for the next
command.
\begin{python}
>>> from paraview.simple import *
>>>
\end{python}
You can consider this as in the same state as when \paraview was
process id, individual processes may be targeted. For example, this allows you to
quickly attach a debugger to a server process running on a remote cluster. If
the target rank is not on the same host as the client, then the command is
considered remote. Otherwise, it is considered local. Therefore, remote commands are
executed via \executable{ssh}, while local commands are not. A list of command templates is
maintained. In addition to a number of pre-defined command templates, you may
add templates or edit existing ones. The default templates allow you to:
\begin{compactenum}
\item Attach \executable{gdb} to the selected process
\item Run \executable{top} on the host of the selected process
\item Send a signal to the selected process
\end{compactenum}
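As a rough sketch, expanding one of these templates amounts to simple token substitution (the template string and example values below are hypothetical; the actual expansion is done internally by \ParaView):

\begin{python}
# Hypothetical sketch of command-template token substitution; the token
# names match those documented in this section, but the template string
# and example values are made up for illustration.
template = "$SSH_EXEC$ $PV_HOST$ $TERM_EXEC$ -e gdb --pid=$PV_PID$"
tokens = {
    "$SSH_EXEC$": "ssh",
    "$PV_HOST$": "node042",   # assumed hostname
    "$TERM_EXEC$": "xterm",
    "$PV_PID$": "12345",      # assumed process id
}
command = template
for token, value in tokens.items():
    command = command.replace(token, value)
print(command)  # ssh node042 xterm -e gdb --pid=12345
\end{python}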
The following tokens are available and may be used in command templates as needed:
\begin{enumerate}
\item \emph{\$TERM\_EXEC\$} : The terminal program that will be used to execute
commands. On Unix systems, xterm is typically used. On Windows systems,
\executable{cmd.exe} is typically used. If the program is not in the default
path, then the full path must be specified.
\item \emph{\$TERM\_OPTS\$} : Command line arguments for the terminal program.
On Unix, these may be used to set the terminal's window title, size, colors, and
so on.
\item \emph{\$SSH\_EXEC\$} : The program to use to execute remote commands. On
Unix, this is typically \executable{ssh}. On Windows, one option is
\executable{plink.exe}. If the program is not in the default path, then the full
path must be specified.
\item \emph{\$FE\_URL\$} : SSH URL to use when the remote processes are on
compute nodes that are not visible to the outside world. This token is used to
construct command templates where two \executable{ssh} hops are made to execute
the command.
\item \emph{\$PV\_HOST\$} : The hostname where the selected process is running.
\item \emph{\$PV\_PID\$} : The process-id of the selected process.
Note: On Windows, the debugging tools found in Microsoft's SDK need to be
installed in addition to Visual Studio (e.g., \executable{windbg.exe}). The
\executable{ssh} program \executable{plink.exe} for Windows doesn't parse ANSI
escape codes that are used by Unix shell programs. In general, the
Windows-specific templates need some polishing.

\subsection{Stack trace signal handler}

The Process Group's context menu
provides a back trace signal handler option. When enabled, a signal handler is
installed that will catch signals such as SEGV, TERM, INT, and ABORT and that
will print a stack trace before the process exits. Once the signal handler is
enabled, you may trigger a stack trace by explicitly sending a signal. The stack
trace signal handler can be used to collect information about crashes or to
trigger a stack trace during deadlocks when it's not possible to
\executable{ssh} into compute nodes. Sites that restrict users' \executable{ssh}
access to compute nodes often provide a way to signal running processes from the
login node. Note that this feature is only available on systems that provide
support for POSIX signals, and we currently only have implemented stack trace
for GNU-compatible compilers.
\section{Compilation and installation considerations}
%
%Abstracting away the location where rendering takes place opens up many
%possibilities. First, it opens up the possibility to parallelize the job of
%rendering to make it possible to render huge datasets at interactive rates.
%Rendering is done in the parallel Render Server component, which may be part of,
%or separate from, the parallel Data Server component. In the next section, we
%describe how parallel rendering works and explain the controls you have over it.
visualization pipeline is updated does the server need to deliver
updated geometries to the client.
%One of the main purposes of \ParaView is to allow you to create
%visualizations of large datasets that reside on remote systems
%without first bringing the data to a local machine. Transferring the
%data is often slow and wasteful of disk and network resources, and the
%visualization of large datasets can easily overwhelm the processing
%and especially memory resources of even high-performance
%workstations. To overcome these challenges, \ParaView enables processing
%data on remote resources closer to where the data resides.
to ghost cells.
\end{inlinefig}
In this example, we see that a single level of ghost cells nearly
replicates the entire dataset on all processes. We have thus removed any
advantage we had with parallel processing. Because ghost cells are used so
frequently, random partitioning is not used in ParaView.
If a data reader or source is not ``parallel aware'', you can still get
the benefits of spreading the data among processing cores by using the
\ui{D3} filter. This filter partitions a dataset into convex regions
and transfers each region to a different processing core. To see an
example of how D3 partitions a dataset, create a \menu{Source > Wavelet}
while \paraview is still connected to the \pvserver. Next, select
\menu{Filters > Alphabetical > D3} and click \ui{Apply}. The output of \ui{D3}
will not initially appear different from the original wavelet source.
considered advanced.
\end{itemize}
\item[\ui{Image Compression}]~
\begin{itemize}
\item Before images are shipped from server to client, they can optionally
be compressed using one of three available compression algorithms:
LZ4\keyword{LZ4}, Squirt\keyword{Squirt}, or Zlib\keyword{Zlib}. To make the
compression more effective, any of these algorithms can reduce the color
resolution of the image before compression. The sliders determine the number
of color bits saved. Full color resolution is always used during a still
render. \icon{Images/pqAdvanced26.png}
\item Suggested image compression presets are provided for several common
should follow.
for feeding the GPUs fast enough. However, if you do not have GPUs,
these rendering structures do not help much.
\item If there is a long pause before the first interactive render of a
particular dataset, it might be the creation of the decimated
geometry. Try using an outline instead of decimated geometry for
interaction. You could also try lowering the factor of the decimation to
0 to create smaller geometry.
to learn more about customizing default property values.
The \ui{Properties} panel, by default, is set up to show the source, display, and
view properties on the same panel. You may, however, prefer to have each of these
sections in a separate dockable panel. You can indeed do so using the
\ui{Settings} dialog accessible from the \menu{Edit > Settings} menu.
On the \ui{General} tab, search for \texttt{properties panel} using the
One of the primary focuses of \ParaView since its early development is
customizability. We wanted to enable developers and users to plug in their own
code to read custom file formats or to execute special algorithms for data
processing. Thus, an extensive plugin infrastructure was born, enabling
developers to write C++ code that can be compiled into plugins that can be imported
into the \ParaView executables at runtime. While such plugins are immensely
flexible, the process can be tedious and overwhelming, since developers would
need to familiarize themselves with the large C++ APIs provided by VTK and
were born.
With these programmable modules, you can write Python scripts that get executed
by \ParaView to generate and process data, just like the other C++ modules. Since
the programming environment is Python, it means that you have access to a plethora
of Python packages including NumPy and SciPy. %\fixme{references}.
By using such
packages, along with the data processing API provided by \ParaView, you can
quickly put together readers and filters to extend and customize \ParaView.
output.GetInformation().Set(output.DATA_TIME_STEP(), req_time)
This is similar to Section~\ref{sec:ReadingACSVFile}. Now, however, let's say the CSV has three
columns named ``X'', ``Y'' and ``Z'' that we want to treat as point coordinates and
produce a \ui{vtkPolyData} with points instead of a \ui{vtkTable}. For that, we first
ensure that \ui{Output DataSet Type} is set to \ui{vtkPolyData}. Next, we use
the following \ui{Script}:
numPts = 80 # Points along Helix
length = 8.0 # Length of Helix
rounds = 3.0 # Number of times around
# Compute the point coordinates for the helix.
index = np.arange(0, numPts, dtype=np.int32)
scalars = index * rounds * 2 * math.pi / numPts
x = index * length / numPts
y = np.sin(scalars)
z = np.cos(scalars)
# Create a (x,y,z) coordinates array and associate that with
# points to pass to the output dataset.
coordinates = algs.make_vector(x, y, z)
pts = vtk.vtkPoints()
output.ShallowCopy(B.VTKObject)
output.PointData.append(mask, "labels")
\end{python}
% The following subsection is no longer needed because you can indeed color
% points by floats and double arrays.
% \subsection{Coloring points}
%
% Let's consider a case where you have three scalar colors (``R'', ``G'', and ``B'')
% available on the point data. Let's say each of these has values in the range
% $[0,1]$, and we want to use these for coloring the points directly without using
% a color map.
%
% Remember, to not use a color map, you uncheck \ui{Map Scalars} in the \ui{Display} properties
% section on the \ui{Properties} panel. However, the array being color with needs to be
% an unsigned-char array with values in the range $[0, 255]$ for each component. So,
% we'll need to convert the three scalar arrays into a single vector and then scale it
% too. This can be done as follows:
% \begin{python}
% # Code for 'Script'
% from vtk.numpy_interface import algorithms as algs
% import numpy as np
% r = inputs[0].PointData["R"]
% g = inputs[0].PointData["G"]
% b = inputs[0].PointData["B"]
% # combine components into a single array.
% rgb = algs.make_vector(r, g, b)
% # now scale and convert the type to uint8 ==> unsigned char
% rgb = np.asarray(rgb * 255.0, dtype=np.uint8)
% # Add the array
% output.PointData.append(rgb, "RGBColors")
% \end{python}
% As before, this will work just fine for composite datasets too, without having to
% iterate over the blocks in the composite dataset explicitly.
@misc{ColorInterpolationBlogPost,
title={{What is InterpolateScalarsBeforeMapping in VTK?}},
author={{Pat Marion}},
howpublished={\url{https://blog.kitware.com/what-is-interpolatescalarsbeforemapping-in-vtk/}}
}
@misc{ParaViewDoxygen,
title={{ParaView API documentation}},
@misc{numpy,
title={{NumPy}},
author={{NumPy developers}},
howpublished={\url{http://www.numpy.org/}}
}
@misc{dv3d,
you want to use in your Python script, when tracing, to avoid runtime issues.
Views that render results (this includes almost all of the views, except
\ui{SpreadSheet View}) support saving images (or screenshots) in one of the
standard image formats (PNG, JPEG, TIFF, BMP, PPM).
Certain views also support exporting the results in several formats such as
PDF, X3D, and VRML.
\subsection{Saving screenshots}
are as follows.
Default \ui{Scale fonts proportionally} tries to achieve WYSIWYG as long as
the aspect ratio is maintained. This is suitable for saving images targeted
for higher DPI (or PPI) display than your screen. \ui{Do not scale fonts}
may be used to avoid font scaling and keep their size in pixels the same as
what is currently on the screen. This is suitable for saving images targeted
for a larger display with the same pixel resolution.
\item \ui{Override Color Palette}: You can change the color palette just for
saving the screenshot using this drop-down.
\item \ui{Stereo Mode}: This option lets you save the image using one of the
......@@ -232,9 +232,9 @@ with additional of a few animation-specific parameters. These are as follows:
timestep number.
\end{itemize}
On accepting this dialog, you will be able to choose the output file location
and format. The available file formats include AVI and OGG (when available)
video formats, as well as image formats such as PNG, JPEG, and TIFF. If saving as
images, \ParaView will generate a series of image files sequentially numbered
using the frame number as a suffix to the specified filename.
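If you need to collect the generated files afterwards, the sequential numbering can be reproduced in plain Python; the base name and the four-digit zero padding here are assumptions for illustration, not something \ParaView guarantees for every format:

```python
# Build file names ParaView-style: base name plus a frame-number suffix.
base, ext = "animation", "png"
num_frames = 4

frames = ["{}.{:04d}.{}".format(base, i, ext) for i in range(num_frames)]
print(frames)
# ['animation.0000.png', 'animation.0001.png', 'animation.0002.png', 'animation.0003.png']
```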
......@@ -257,7 +257,7 @@ same as the \py{SaveScreenshot} with additional parameters for the animation spe
Besides saving the results produced by your visualization setup, you can save
the state of the visualization pipeline itself, including all the pipeline
modules, views, their layout, and their properties. This is referred to as the
\keyterm{application state}, or just {state}. In \paraview, you can save the
state using the \menu{File > Save State\ldots} menu option. Conversely, to load a saved
state file, you can use \menu{File > Load State\ldots}.
......
......@@ -302,8 +302,8 @@ operator. Options include the following:
\item \ui{is $<=$} matches all values less than or equal to the specified value
\item \ui{is min} matches the minimum value for the array for the current time step
\item \ui{is max} matches the maximum value for the array for the current time step
\item \ui{is less than mean} matches values less than or equal to the mean
\item \ui{is greater than mean} matches values greater than or equal to the mean
\item \ui{is equal to mean with tolerance} matches values equal to the mean
within the specified tolerance
\end{compactitem}
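The mean-based operators amount to simple array comparisons. A NumPy sketch of the matching logic, using made-up values (this is an illustration of the semantics, not \ParaView's implementation):

```python
import numpy as np

values = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
mean = values.mean()  # 4.0 for this data
tolerance = 0.5

le_mean = values <= mean                        # "is less than mean"
ge_mean = values >= mean                        # "is greater than mean"
near_mean = np.abs(values - mean) <= tolerance  # "is equal to mean with tolerance"

# Boolean masks select the matching elements.
print(values[le_mean])    # the values <= 4.0
print(values[near_mean])  # the values within 0.5 of 4.0
```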
......@@ -447,13 +447,13 @@ visualization similar to the one shown here.
Instead of using the view for defining the selection, you could have used the
\ui{Find Data} dialog. In that case, instead of being able to plot each element
over time, you will be plotting summaries for the selected subset over time.
This is essential since the selected subset can have a varying number of
elements over time. The summaries include quantities like minimum, maximum, and
median of available variables. You can make the filter always produce these
statistics alone (even when the selection is created by selecting specific
elements in a view) by checking the \ui{Only Report Selection Statistics}
property on the \ui{Properties} panel for the \ui{Plot Selection Over Time}
filter.
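The per-timestep summaries the filter reports can be sketched with NumPy. Assume each row of \py{selection} holds the values of one variable over the selected elements at one timestep (illustrative data; in \ParaView the number of selected elements may vary per timestep, but a fixed-size array keeps the sketch simple):

```python
import numpy as np

# One row per timestep, one column per selected element (illustrative data).
selection = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.0]])

# Summary statistics computed independently for each timestep (row).
stats = {
    "min": selection.min(axis=1),
    "max": selection.max(axis=1),
    "median": np.median(selection, axis=1),
}
print(stats["median"])  # one median per timestep
```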
\section{Freezing selections}
......
As with any large application, \paraview provides mechanisms to customize some
of its application behavior. These are referred to as \keyterm{application
settings}, or just {settings}. Such settings can be changed using the \ui{Settings} dialog,
which is accessed from the \menu{Edit > Settings} (\menu{ParaView >
Preferences} on the Mac) menu. We have seen parts of this dialog earlier, e.g., in
Sections~\ref{sec:PropertiesPanelLayoutSettings},
......@@ -229,7 +229,7 @@ result after loading the \ui{Print} palette.}
\end{center}
\end{figure}
Now let's say you want to generate an image for printing. Typically, for
printing, you'd want the background color to be white and the wireframes and
annotations to be colored black. To do that, one way is to change each of the
colors for each of the views, displays, and cube-axes. You can imagine how
......
......@@ -38,7 +38,7 @@
}{%
{\noindent\Huge\bfseries The ParaView Guide}\\[2\baselineskip] % Title
}
{\large \textit{Updated for ParaView version 5.4}}\\[2\baselineskip] % Tagline or further description
{\Large Utkarsh Ayachit}\\[8\baselineskip] % Author name
% {\Large Who else?}\\[\baselineskip] % Author name
% {\Large Who else?}\\[\baselineskip] % Author name
......
......@@ -257,7 +257,6 @@ that provides information for each of the arrays. This object gives us
methods to get data ranges, component counts, tuple counts, etc.
\begin{python}