Internal filter used by the (filters, ProbeLine) proxy. The Plot
Over Line filter samples the data set attributes of the current data set
at points along a line. The values of the point-centered variables
along that line will be displayed in an XY Plot. This filter uses
interpolation to determine the values at the selected points, whether or
not they lie at input points. The Probe filter operates on any type of
data and produces polygonal output (a line).
This property specifies the dataset from which to obtain
probe values.
This property specifies the dataset whose geometry will
be used in determining positions to probe.
When dealing with composite datasets, partial arrays are
common, i.e., data arrays that are not available in all of the blocks. By
default, this filter only passes those point and cell data arrays that
are available in all the blocks, i.e., partial arrays are removed. When
PassPartialArrays is turned on, this behavior changes to take the
union of all arrays present, so partial arrays are passed as well.
However, for composite dataset input, this filter still produces a
non-composite output. For all locations in a block where a
particular data array is missing, this filter uses vtkMath::Nan() for
double and float arrays, and 0 for all other array types (e.g., int,
char).
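The union-with-fill behavior described above can be sketched in plain Python (a simplified illustration with hypothetical data structures, not ParaView's implementation):

```python
import math

# Hypothetical sketch of the PassPartialArrays union behavior: each
# block maps array name -> (type, values). Arrays missing from a block
# are filled with NaN for float types and 0 for all other types.
def union_arrays(blocks, n_points):
    all_names = {}
    for block in blocks:
        for name, (kind, _) in block.items():
            all_names[name] = kind
    out = {}
    for name, kind in all_names.items():
        fill = math.nan if kind == "float" else 0
        values = []
        for block in blocks:
            if name in block:
                values.extend(block[name][1])
            else:
                # This block lacks the array: pad with the fill value.
                values.extend([fill] * n_points)
        out[name] = values
    return out
```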
When set, the input's cell data arrays are shallow copied to the output.
When set, the input's point data arrays are shallow copied to the output.
Set whether to pass the field-data arrays from the Input (i.e., the input
providing the geometry) to the output. On by default.
Set whether to compute the tolerance or to use a user provided
value. On by default.
Set the tolerance to use for
vtkDataSet::FindCell.
This proxy provides UI for selecting an existing pipeline connection.
Allow the filter to execute successfully, producing an empty polydata,
when the input is not specified.
vtkAppendArcLength is used by filters such as
plot-over-line. In such cases, we need to add an attribute
array giving the arc length along the probed
line. That's when vtkAppendArcLength can be used. It adds
a new point-data array named "arc_length" with the
computed arc length for each of the polylines in the
input. For all other cell types, the arc length is set to
0.
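The "arc_length" array described above amounts to a cumulative distance along each polyline's points; a minimal Python sketch (assumed behavior, not the vtkAppendArcLength source):

```python
import math

# Cumulative arc length at each point of a polyline, starting at 0.0
# for the first point. Points are (x, y, z) tuples.
def arc_length(points):
    lengths = [0.0]
    for p0, p1 in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.dist(p0, p1))
    return lengths
```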
The input.
A simple pass-through filter that doesn't transform data in any way.
This property specifies the input to the filter.
Takes in an input data object and a filename. Opens the file
and adds any arrays it sees there to the input data.
The input.
This property specifies the file to read to get arrays.
A convenient way to reload the data file.
Adds a new cell data array containing the number of faces per cell.
The input.
This is the name of the array in the output containing the face counts.
Adds a new cell data array containing the number of vertices per cell.
The input.
This is the name of the array in the output containing the vertex counts.
Applies to any source. The GUI allows manual selection of the desired annotation options.
If the source is a file, the filename can be displayed.
Set the input of the filter.
Toggle User Name Visibility.
Toggle System Name Visibility.
Toggle Date/Time Visibility.
Toggle File Name Visibility.
Toggle Show Full File Path.
Annotation of file name.
This property specifies the input for this
filter.
This filter
creates a cut-plane of an AMR dataset.
This property specifies the input for this
filter.
This property specifies whether ParaView's generic
dataset cutter is used instead of the specialized AMR
cutter.
Set maximum slice resolution.
This
filter allows the user to specify a Region of Interest (ROI)
within the AMR dataset and extract it as a uniform
grid.
This property specifies the input for this
filter.
This property specifies whether the resampling filter
will operate in demand-driven mode or not.
This property specifies whether the solution will be
transferred to the nodes of the extracted region or the
cells.
Set the number of subdivisions for recursive coordinate
bisection.
Sets the number of samples in each
dimension
This property sets the minimum 3-D coordinate location
by which the particles will be filtered out.
This property sets the maximum 3-D coordinate location
by which the particles will be filtered out.
This filter slices AMR
data.
This property specifies the input for this
filter.
Set maximum slice resolution.
Sets the offset from the origin of the
dataset.
This property sets the normal of the
slice.
The Image Shrink
filter reduces the size of an image/volume dataset by
subsampling it (i.e., extracting every nth pixel/voxel in
integer multiples). The subsampling rate can be set
separately for each dimension of the
image/volume.
This property specifies the input to the Image Shrink
filter.
The value of this property indicates the amount by which
to shrink along each axis.
If the value of this property is 1, an average of
neighborhood scalar values will be used as the output scalar value for
each output point. If its value is 0, only subsampling will be
performed, and the original scalar values at the points will be
retained.
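The subsampling and optional neighborhood averaging described above can be illustrated on a plain 2-D image (a hypothetical, simplified sketch, not the Image Shrink implementation):

```python
# Shrink a 2-D image by integer factors fx (rows) and fy (columns).
# With average=False, every (fx, fy)-th sample is kept as-is; with
# average=True, each output sample is the mean of its fx-by-fy block.
def shrink(image, fx, fy, average=False):
    rows, cols = len(image), len(image[0])
    out = []
    for i in range(0, rows, fx):
        row = []
        for j in range(0, cols, fy):
            if average:
                block = [image[a][b]
                         for a in range(i, min(i + fx, rows))
                         for b in range(j, min(j + fy, cols))]
                row.append(sum(block) / len(block))
            else:
                row.append(image[i][j])
        out.append(row)
    return out
```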
The Surface Vectors filter is used for 2D data sets. It
constrains vectors to lie in a surface by removing
components of the vectors normal to the local
surface.
This property specifies the input to the Surface Vectors
filter.
This property specifies the name of the input vector
array to process.
This property specifies whether the vectors will be
parallel or perpendicular to the surface. If the value is set to
PerpendicularScale (2), then the output will contain a scalar array
with the dot product of the surface normal and the vector at each
point.
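The tangential constraint described above amounts to subtracting the normal component of each vector; a sketch under the assumption that the surface normal is unit length (not the VTK source):

```python
# Remove the component of v along the unit surface normal n, leaving
# only the part of the vector tangent to the surface. The dot product
# n . v is also what PerpendicularScale mode would output as a scalar.
def constrain_to_surface(v, n):
    d = sum(vi * ni for vi, ni in zip(v, n))  # n . v (n assumed unit)
    return tuple(vi - d * ni for vi, ni in zip(v, n))
```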
The Integrate Attributes filter integrates point and cell
data over lines and surfaces. It also computes the length of
lines, the area of surfaces, or the volume of solids.
This property specifies the input to the Integrate
Attributes filter.
This property specifies if the output data will be divided by the
volume/area computed for the integrated cells. If it is on, then each
value in the output cell data will be divided by the area/volume.
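The integration and the divide-by-measure option described above can be sketched as a measure-weighted sum (an assumed discrete form, not the VTK implementation):

```python
# Integrate one cell-data array over a set of cells: sum each value
# weighted by the cell's measure (length, area, or volume), optionally
# dividing by the total measure, as the property above describes.
def integrate(cell_values, cell_measures, divide_by_measure=False):
    total_measure = sum(cell_measures)
    total = sum(v * m for v, m in zip(cell_values, cell_measures))
    return total / total_measure if divide_by_measure else total
```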
The flow integration filter integrates the dot product of
a point flow vector field and surface normal. It computes
the net flow across the 2D surface. It operates on any
type of dataset and produces an unstructured grid
output.
This property specifies the input to the Surface Flow
filter.
The value of this property specifies the name of the
input vector array containing the flow vector field.
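The net flow described above is, in discrete form, a sum over surface cells of the flux through each cell; a hedged sketch with hypothetical per-cell inputs:

```python
# Net flow across a discretized surface: for each cell, the dot product
# of the flow vector with the unit normal, weighted by the cell area.
def net_flow(cells):
    """cells: iterable of (flow_vector, unit_normal, area) per cell."""
    total = 0.0
    for v, n, area in cells:
        total += sum(vi * ni for vi, ni in zip(v, n)) * area
    return total
```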
The Append Attributes filter takes multiple input data
sets with the same geometry and merges their point and
cell attributes to produce a single output containing all
the point and cell attributes of the inputs. Any inputs
without the same number of points and cells as the first
input are ignored. The input data sets must already be
collected together, either as a result of a reader that
loads multiple parts (e.g., EnSight reader) or because the
Group Parts filter has been run to form a collection of
data sets.
This property specifies the input to the Append
Attributes filter.
The Append
Geometry filter operates on multiple polygonal data sets.
It merges their geometry into a single data set. Only the
point and cell attributes that all of the input data sets
have in common will appear in the output.
Set the input to the Append Geometry
filter.
The Append
Datasets filter operates on multiple data sets of any type
(polygonal, structured, etc.). It merges their geometry
into a single data set. Only the point and cell attributes
that all of the input data sets have in common will appear
in the output. The input data sets must already be
collected together, either as a result of a reader that
loads multiple parts (e.g., EnSight reader) or because the
Group Parts filter has been run to form a collection of
data sets.
This property specifies the datasets to be merged into a
single dataset by the Append Datasets filter.
The Cell Centers
filter places a point at the center of each cell in the
input data set. The center computed is the parametric
center of the cell, not necessarily the geometric or
bounding box center. The cell attributes of the input will
be associated with these newly created points of the
output. You have the option of creating a vertex cell per
point in the output. This is useful because vertex cells
are rendered, but points are not. The points themselves
could be used for placing glyphs (using the Glyph filter).
The Cell Centers filter takes any type of data set as
input and produces a polygonal data set as
output.
This property specifies the input to the Cell Centers
filter.
If set to 1, a vertex cell will be generated per point
in the output. Otherwise only points will be generated.
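For linear simplices such as triangles, the parametric center coincides with the mean of the cell's points; a simplified sketch under that assumption (it does not hold for all cell types):

```python
# Center of a linear simplex cell as the average of its point
# coordinates. For general cells VTK evaluates the cell at its
# parametric center instead, which need not equal this average.
def cell_center(points):
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))
```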
The Cell
Data to Point Data filter averages the values of the cell
attributes of the cells surrounding a point to compute
point attributes. The Cell Data to Point Data filter
operates on any type of data set, and the output data set
is of the same type as the input.
This property specifies the input to the Cell Data to
Point Data filter.
If this property is set to 1, then the input cell data
is passed through to the output; otherwise, only the generated point
data will be available in the output.
If the value of this property is set to 1, this filter
will request ghost levels so that the values at boundary points match
across processes. NOTE: Enabling this option might cause multiple
executions of the data source because more information is needed to
remove internal surfaces.
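The cell-to-point averaging described above can be sketched as follows (assumed behavior; the real filter also handles ghost cells and multiple arrays):

```python
# For each point, average the values of all cells that reference it.
# cells is a list of point-id tuples; cell_values has one value per cell.
def cell_to_point(cells, cell_values, n_points):
    sums = [0.0] * n_points
    counts = [0] * n_points
    for cell, value in zip(cells, cell_values):
        for pid in cell:
            sums[pid] += value
            counts[pid] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```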
This filter generates scalars using cell and point ids.
That is, the point attribute data scalars are generated
from the point ids, and the cell attribute data scalars or
field data are generated from the cell
ids.
This property specifies the input to the Generate Ids
filter.
The name of the array that will contain
ids.
The Clean filter
takes polygonal data as input and generates polygonal data
as output. This filter can merge duplicate points, remove
unused points, and transform degenerate cells into their
appropriate forms (e.g., a triangle is converted into a
line if two of its points are merged).
Set the input to the Clean filter.
If this property is set to 1, the whole data set will be
processed at once so that cleaning the data set always produces the
same results. If it is set to 0, the data set can be processed one
piece at a time, so it is not necessary for the entire data set to fit
into memory; however, the results are not guaranteed to be the same as
they would be if the piece-invariant option were on. Setting this option
to 0 may produce seams in the output dataset when ParaView is run in
parallel.
If merging nearby points (see PointMerging property) and
not using absolute tolerance (see ToleranceIsAbsolute property), this
property specifies the tolerance for performing merging as a fraction
of the length of the diagonal of the bounding box of the input data
set.
If merging nearby points (see PointMerging property) and
using absolute tolerance (see ToleranceIsAbsolute property), this
property specifies the tolerance for performing merging in the spatial
units of the input data set.
This property determines whether to use absolute or
relative (a percentage of the bounding box) tolerance when performing
point merging.
If this property is set to 1, degenerate lines (a "line"
whose endpoints are at the same spatial location) will be converted to
points.
If this property is set to 1, degenerate polygons (a
"polygon" with only two distinct point coordinates) will be converted
to lines.
If this property is set to 1, degenerate triangle strips
(a triangle "strip" containing only one triangle) will be converted to
triangles.
If this property is set to 1, then points will be merged
if they are within the specified Tolerance or AbsoluteTolerance (see
the Tolerance and AbsoluteTolerance properties), depending on the value
of the ToleranceIsAbsolute property. (See the ToleranceIsAbsolute
property.) If this property is set to 0, points will not be
merged.
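Relative-tolerance merging can be illustrated with a naive O(n²) sketch (the real filter uses a spatial locator; this only shows the tolerance arithmetic described above):

```python
import math

# Merge points that lie within tolerance * (bounding-box diagonal) of an
# already-kept point. Points are (x, y, z) tuples.
def merge_points(points, tolerance):
    lo = [min(p[k] for p in points) for k in range(3)]
    hi = [max(p[k] for p in points) for k in range(3)]
    threshold = tolerance * math.dist(lo, hi)
    merged = []
    for p in points:
        if all(math.dist(p, q) >= threshold for q in merged):
            merged.append(p)
    return merged
```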
The Clean to Grid filter merges
points that are exactly coincident. It also converts the
data set to an unstructured grid. You may wish to do this
if you want to apply a filter to your data set that is
available for unstructured grids but not for the initial
type of your data set (e.g., applying warp vector to
volumetric data). The Clean to Grid filter operates on any
type of data set.
This property specifies the input to the Clean to Grid
filter.
Merges degenerate cells. Assumes
the input grid does not contain duplicate points. You may
want to run vtkCleanUnstructuredGrid first to ensure this.
If duplicate cells are found, they are removed in the
output. The filter also handles the case where a cell may
contain degenerate nodes (i.e., the same node is
referenced by a cell more than once).
This property specifies the input to the Clean Cells to
Grid filter.
Delaunay2D is a filter that constructs a 2D Delaunay
triangulation from a list of input points. These points
may be represented by any dataset of type vtkPointSet and
subclasses. The output of the filter is a polygonal
dataset containing a triangle mesh. The 2D Delaunay
triangulation is defined as the triangulation that
satisfies the Delaunay criterion for n-dimensional
simplexes (in this case n=2 and the simplexes are
triangles). This criterion states that a circumsphere of
each simplex in a triangulation contains only the n+1
defining points of the simplex. In two dimensions, this
translates into an optimal triangulation. That is, the
maximum interior angle of any triangle is less than or
equal to that of any possible triangulation. Delaunay
triangulations are used to build topological structures
from unorganized (or unstructured) points. The input to
this filter is a list of points specified in 3D, even
though the triangulation is 2D. Thus the triangulation is
constructed in the x-y plane, and the z coordinate is
ignored (although carried through to the output). You can
use the ProjectionPlaneMode option to compute the
best-fitting plane to the set of points, project the
points onto that plane, and then perform the triangulation
using their projected positions. The
Delaunay triangulation can be numerically sensitive in
some cases. To prevent problems, try to avoid injecting
points that will result in triangles with bad aspect
ratios (1000:1 or greater). In practice this means
inserting points that are "widely dispersed", and enables
smooth transition of triangle sizes throughout the mesh.
(You may even want to add extra points to create a better
point distribution.) If numerical problems are present,
you will see a warning message to this effect at the end
of the triangulation process. Warning: Points arranged on
a regular lattice (termed degenerate cases) can be
triangulated in more than one way (at least according to
the Delaunay criterion). The choice of triangulation (as
implemented by this algorithm) depends on the order of the
input points. The first three points will form a triangle;
other degenerate points will not break this triangle.
Points that are coincident (or nearly so) may be discarded
by the algorithm. This is because the Delaunay
triangulation requires unique input points. The output of
the Delaunay triangulation is supposedly a convex hull. In
certain cases this implementation may not generate the
convex hull.
This property specifies the input dataset to the
Delaunay 2D filter.
This property determines type of projection plane to use
in performing the triangulation.
The value of this property controls the output of this
filter. For a non-zero alpha value, only edges or triangles contained
within a sphere centered at mesh vertices will be output. Otherwise,
only triangles will be output.
This property specifies a tolerance to control
discarding of closely spaced points. This tolerance is specified as a
fraction of the diagonal length of the bounding box of the
points.
This property is a multiplier to control the size of the
initial, bounding Delaunay triangulation.
If this property is set to 1, bounding triangulation
points (and associated triangles) are included in the output. These are
introduced as an initial triangulation to begin the triangulation
process. This feature is nice for debugging output.
Delaunay3D is a filter that constructs
a 3D Delaunay triangulation from a list of input points. These points may be
represented by any dataset of type vtkPointSet and subclasses. The output of
the filter is an unstructured grid dataset. Usually the output is a tetrahedral
mesh, but if a non-zero alpha distance value is specified (called the "alpha"
value), then only tetrahedra, triangles, edges, and vertices lying within the
alpha radius are output. In other words, non-zero alpha values may result in
arbitrary combinations of tetrahedra, triangles, lines, and vertices. (The
notion of alpha value is derived from Edelsbrunner's work on "alpha shapes".)
The 3D Delaunay triangulation is defined as the triangulation that satisfies
the Delaunay criterion for n-dimensional simplexes (in this case n=3 and the
simplexes are tetrahedra). This criterion states that a circumsphere of each
simplex in a triangulation contains only the n+1 defining points of the
simplex. (See text for more information.) While in two dimensions this
translates into an "optimal" triangulation, this is not true in 3D, since a
measurement for optimality in 3D is not agreed on. Delaunay triangulations are
used to build topological structures from unorganized (or unstructured) points.
The input to this filter is a list of points specified in 3D. (If you wish to
create 2D triangulations see Delaunay2D.) The output is an unstructured grid.
The Delaunay triangulation can be numerically sensitive. To prevent problems,
try to avoid injecting points that will result in triangles with bad aspect
ratios (1000:1 or greater). In practice this means inserting points that are
"widely dispersed", and enables smooth transition of triangle sizes throughout
the mesh. (You may even want to add extra points to create a better point
distribution.) If numerical problems are present, you will see a warning
message to this effect at the end of the triangulation process. Warning: Points
arranged on a regular lattice (termed degenerate cases) can be triangulated in
more than one way (at least according to the Delaunay criterion). The choice of
triangulation (as implemented by this algorithm) depends on the order of the
input points. The first four points will form a tetrahedron; other degenerate
points (relative to this initial tetrahedron) will not break it. Points that
are coincident (or nearly so) may be discarded by the algorithm. This is
because the Delaunay triangulation requires unique input points. You can
control the definition of coincidence with the "Tolerance" instance variable.
The output of the Delaunay triangulation is supposedly a convex hull. In
certain cases this implementation may not generate the convex hull. This
behavior can be controlled by the Offset instance variable. Offset is a
multiplier used to control the size of the initial triangulation. The larger
the offset value, the more likely you will generate a convex hull; and the more
likely you are to see numerical problems. The implementation of this algorithm
varies from the 2D Delaunay algorithm (i.e., Delaunay2D) in an important way.
When points are injected into the triangulation, the search for the enclosing
tetrahedron is quite different. In the 3D case, the closest previously inserted
point is found, and then the connected tetrahedra are searched to find
the containing one. (In 2D, a "walk" towards the enclosing triangle is
performed.) If the triangulation is Delaunay, then an enclosing tetrahedron
will be found. However, in degenerate cases an enclosing tetrahedron may not be
found and the point will be rejected.
This property specifies the input dataset to the
Delaunay 3D filter.
This property specifies the alpha (or distance) value to
control the output of this filter. For a non-zero alpha value, only
edges, faces, or tetra contained within the circumsphere (of radius
alpha) will be output. Otherwise, only tetrahedra will be
output.
This property specifies a tolerance to control
discarding of closely spaced points. This tolerance is specified as a
fraction of the diagonal length of the bounding box of the
points.
This property specifies a multiplier to control the size
of the initial, bounding Delaunay triangulation.
This boolean controls whether bounding triangulation
points (and associated triangles) are included in the output. (These
are introduced as an initial triangulation to begin the triangulation
process. This feature is nice for debugging output.)
This boolean controls whether tetrahedra which satisfy
the alpha criterion are output when alpha is non-zero.
This boolean controls whether triangles which satisfy
the alpha criterion are output when alpha is non-zero.
This boolean controls whether lines which satisfy the
alpha criterion are output when alpha is non-zero.
This boolean controls whether vertices which satisfy the
alpha criterion are output when alpha is non-zero.
The Connectivity
filter assigns a region id to connected components of the
input data set. (The region id is assigned as a point
scalar value.) This filter takes any data set type as
input and produces unstructured grid
output.
This property specifies the input to the Connectivity
filter.
Controls the extraction of connected
surfaces.
Controls the coloring of the connected
regions.
Specifies the point to use in closest point mode.
The Crop filter
extracts an area/volume of interest from a 2D image or a
3D volume by allowing the user to specify the minimum and
maximum extents of each dimension of the data. Both the
input and output of this filter are uniform rectilinear
data.
This property specifies the input to the Crop
filter.
This property gives the minimum and maximum point index
(extent) in each dimension for the output dataset.
The
Curvature filter computes the curvature at each point in a
polygonal data set. This filter supports both Gaussian and
mean curvatures; the type can be selected from the
Curvature type menu button.
This property specifies the input to the Curvature
filter.
If this property is set to 1, the mean curvature
calculation will be inverted. This is useful for meshes with
inward-pointing normals.
This property specifies which type of curvature to
compute.
The Decimate filter reduces the number of triangles in a
polygonal data set. Because this filter only operates on
triangles, first run the Triangulate filter on a dataset
that contains polygons other than
triangles.
This property specifies the input to the Decimate
filter.
This property specifies the desired reduction in the
total number of polygons in the output dataset. For example, if the
TargetReduction value is 0.9, the Decimate filter will attempt to
produce an output dataset that is 10% the size of the
input.
If this property is set to 1, decimation will not split
the dataset or produce holes, but it may keep the filter from reaching
the reduction target. If it is set to 0, better reduction can occur
(reaching the reduction target), but holes in the model may be
produced.
The value of this property is used in determining where
the data set may be split. If the angle between two adjacent triangles
is greater than or equal to the FeatureAngle value, then their boundary
is considered a feature edge where the dataset can be
split.
If this property is set to 1, then vertices on the
boundary of the dataset can be removed. Setting the value of this
property to 0 preserves the boundary of the dataset, but it may cause
the filter not to reach its reduction target.
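The TargetReduction arithmetic described above is simple; for instance:

```python
# TargetReduction is the fraction of triangles to remove, so a value of
# 0.9 asks the filter to keep roughly 10% of the input triangles.
def target_triangle_count(input_triangles, target_reduction):
    return round(input_triangles * (1.0 - target_reduction))
```

Note that decimation constraints (boundary preservation, no splitting) may keep the filter from actually reaching this count.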
Decimate Polyline is a filter to reduce the number of lines in a
polyline. The algorithm functions by evaluating an error metric for each
vertex (i.e., the distance of the vertex to a line defined from the two
vertices on either side of the vertex). Then, these vertices are placed
into a priority queue, and those with smaller errors are deleted first.
The decimation continues until the target reduction is reached. While the
filter will not delete end points, it will decimate closed loops down to a
single line, thereby changing topology.
As this filter operates on polylines, you may need to apply the Triangle Strips
filter before using this one.
This property specifies the input to the Decimate Polyline
filter.
This property specifies the desired reduction in the
total number of lines in the output dataset. For example, if the
TargetReduction value is 0.9, the Decimate Polyline filter will attempt to
produce an output dataset that is 10% the size of the
input.
Set the largest decimation error that is allowed during the decimation
process. This may limit the maximum reduction that may be achieved. The
maximum error is specified as a fraction of the maximum length of
the input data bounding box.
The Elevation filter generates point scalar values for an
input dataset along a specified direction vector. The
Input menu allows the user to select the data set to which
this filter will be applied. Use the Scalar range entry
boxes to specify the minimum and maximum scalar value to
be generated. The Low Point and High Point define a line
onto which each point of the data set is projected. The
minimum scalar value is associated with the Low Point, and
the maximum scalar value is associated with the High
Point. The scalar value for each point in the data set is
determined by the location along the line to which that
point projects.
This property specifies the input dataset to the
Elevation filter.
This property determines the range into which scalars
will be mapped.
This property defines one end of the direction vector
(small scalar values).
This property defines the other end of the direction
vector (large scalar values).
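The projection-and-mapping described above can be sketched as follows (an assumed formula: project each point onto the low-to-high line, clamp the parametric position to [0, 1], and map it into the scalar range):

```python
# Elevation scalar for one point: parametric position t along the line
# from `low` to `high`, clamped to [0, 1], mapped into scalar_range.
def elevation_scalar(point, low, high, scalar_range=(0.0, 1.0)):
    d = [h - l for l, h in zip(low, high)]
    length2 = sum(c * c for c in d)
    t = sum((p - l) * c for p, l, c in zip(point, low, d)) / length2
    t = min(1.0, max(0.0, t))
    lo_s, hi_s = scalar_range
    return lo_s + t * (hi_s - lo_s)
```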
The Extract
Surface filter extracts the polygons forming the outer
surface of the input dataset. This filter operates on any
type of data and produces polygonal data as
output.
This property specifies the input to the Extract Surface
filter.
If the value of this property is set to 1, internal
surfaces along process boundaries will be removed. NOTE: Enabling this
option might cause multiple executions of the data source because more
information is needed to remove internal surfaces.
If the input is an unstructured grid with nonlinear
faces, this parameter determines how many times the face is subdivided
into linear faces. If 0, the output is the equivalent of its linear
counterpart (and the midpoints determining the nonlinear interpolation
are discarded). If 1, the nonlinear face is triangulated based on the
midpoints. If greater than 1, the triangulated pieces are recursively
subdivided to reach the desired subdivision. Setting the value to
greater than 1 may cause some point data to not be passed even if no
quadratic faces exist. This option has no effect if the input is not an
unstructured grid.
The Extract
Surface filter extracts the polygons forming the outer
surface of the input dataset. This filter operates on any
type of data and produces polygonal data as
output.
This property specifies the input to the Extract Surface
filter.
If the value of this property is set to 1, internal
surfaces along process boundaries will be removed. NOTE: Enabling this
option might cause multiple executions of the data source because more
information is needed to remove internal surfaces.
If the input is an unstructured grid with nonlinear
faces, this parameter determines how many times the face is subdivided
into linear faces. If 0, the output is the equivalent of its linear
counterpart (and the midpoints determining the nonlinear interpolation
are discarded). If 1, the nonlinear face is triangulated based on the
midpoints. If greater than 1, the triangulated pieces are recursively
subdivided to reach the desired subdivision. Setting the value to
greater than 1 may cause some point data to not be passed even if no
quadratic faces exist. This option has no effect if the input is not an
unstructured grid.
This property specifies the name of the material
array for generating parts.
If the value of this property is set to 1 (the default),
surfaces along the boundary are 1 layer thick. Otherwise there is
a surface for the material on each side.
This is the name of the input material property field data array.
This is the name of the input and output material id field data array.
This is the name of the output material ancestry id field data array.
This is the name of the input and output interface id field data array.
The Calculator filter computes a new data array or new point
coordinates as a function of existing scalar or vector arrays. If
point-centered arrays are used in the computation of a new data array,
the resulting array will also be point-centered. Similarly,
computations using cell-centered arrays will produce a new
cell-centered array. If the function is computing point coordinates,
the result of the function must be a three-component vector.
The Calculator interface operates similarly to a scientific
calculator. In creating the function to evaluate, the standard order
of operations applies. Each of the calculator functions is described
below. Unless otherwise noted, enclose the operand in parentheses
using the ( and ) buttons.
- Clear: Erase the current function (displayed in the read-only text
box above the calculator buttons).
- /: Divide one scalar by another. The operands for this function are
not required to be enclosed in parentheses.
- *: Multiply two scalars, or multiply a vector by a scalar (scalar multiple).
The operands for this function are not required to be enclosed in parentheses.
- -: Negate a scalar or vector (unary minus), or subtract one scalar or vector
from another. The operands for this function are not required to be enclosed
in parentheses.
- +: Add two scalars or two vectors. The operands for this function are not
required to be enclosed in parentheses.
- sin: Compute the sine of a scalar.
- cos: Compute the cosine of a scalar.
- tan: Compute the tangent of a scalar.
- asin: Compute the arcsine of a scalar.
- acos: Compute the arccosine of a scalar.
- atan: Compute the arctangent of a scalar.
- sinh: Compute the hyperbolic sine of a scalar.
- cosh: Compute the hyperbolic cosine of a scalar.
- tanh: Compute the hyperbolic tangent of a scalar.
- min: Compute minimum of two scalars.
- max: Compute maximum of two scalars.
- x^y: Raise one scalar to the power of another scalar. The operands for
this function are not required to be enclosed in parentheses.
- sqrt: Compute the square root of a scalar.
- e^x: Raise e to the power of a scalar.
- log: Compute the logarithm of a scalar (deprecated; same as log10).
- log10: Compute the logarithm of a scalar to the base 10.
- ln: Compute the logarithm of a scalar to the base 'e'.
- ceil: Compute the ceiling of a scalar.
- floor: Compute the floor of a scalar.
- abs: Compute the absolute value of a scalar.
- v1.v2: Compute the dot product of two vectors. The operands for this
function are not required to be enclosed in parentheses.
- cross: Compute cross product of two vectors.
- mag: Compute the magnitude of a vector.
- norm: Normalize a vector.
The operands are described below. The digits 0 - 9 and the decimal
point are used to enter constant scalar values. **iHat**, **jHat**,
and **kHat** are vector constants representing unit vectors in the X,
Y, and Z directions, respectively. The scalars menu lists the names of
the scalar arrays and the components of the vector arrays of either
the point-centered or cell-centered data. The vectors menu lists the
names of the point-centered or cell-centered vector arrays. The
function will be computed for each point (or cell) using the scalar or
vector value of the array at that point (or cell). The filter operates
on any type of data set, but the input data set must have at least one
scalar or vector array. The arrays can be either point-centered or
cell-centered. The Calculator filter's output is of the same data set
type as the input.
The output array type can be specified as an advanced option; the
default is vtkDoubleArray.
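Conceptually, the Calculator evaluates its expression once per point (or cell). A minimal pure-Python sketch of that idea, not the VTK implementation, assuming a hypothetical point-centered vector array named velocity and the expression mag(velocity):

```python
import math

# Hypothetical point-centered vector array (3 points, 3 components each).
velocity = [(3.0, 4.0, 0.0), (0.0, 0.0, 1.0), (1.0, 2.0, 2.0)]

# mag: magnitude of a vector, one of the Calculator's built-in functions.
def mag(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

# Evaluate the expression at every point; the result becomes a new array.
result = [mag(v) for v in velocity]
```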
This property specifies the input dataset to the
Calculator filter. The scalar and vector variables may be chosen from
this dataset's arrays.
This property determines whether the computation is to
be performed on point-centered or cell-centered data.
The value of this property determines whether the
results of this computation should be used as point coordinates or as a
new array.
Set whether to output results as point/cell
normals. Outputting as normals is only valid with vector
results. Point or cell normals are selected using
AttributeMode.
Set whether to output results as point/cell
texture coordinates. Point or cell texture coordinates are
selected using AttributeMode. 2-component texture coordinates
cannot be generated at this time.
This property contains the name for the output array
containing the result of this computation.
This property contains the equation for computing the new
array.
This property determines whether invalid values in the
computation will be replaced with a specific value. (See the
ReplacementValue property.)
If invalid values in the computation are to be replaced
with another value, this property contains that value.
This property determines what array type to output.
The default is a vtkDoubleArray.
The Feature Edges filter extracts various subsets of edges
from the input data set. This filter operates on polygonal
data and produces polygonal output.
This property specifies the input to the Feature Edges
filter.
If the value of this property is set to 1, boundary
edges will be extracted. Boundary edges are defined as line cells or
edges that are used by only one polygon.
If the value of this property is set to 1, feature edges
will be extracted. Feature edges are defined as edges that are shared by
two polygons whose dihedral angle is greater than the feature angle.
(See the FeatureAngle property.)
If the value of this property is set to 1, non-manifold
edges will be extracted. Non-manifold edges are defined as edges that
are used by three or more polygons.
If the value of this property is set to 1, manifold
edges will be extracted. Manifold edges are defined as edges that are
used by exactly two polygons.
If the value of this property is set to 1, then the
extracted edges are assigned a scalar value based on the type of the
edge.
The value of this property is used to define a feature
edge. If the angle between the surface normals of two adjacent
triangles is at least as large as this Feature Angle, a feature edge
exists. (See the FeatureEdges property.)
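The boundary/manifold/non-manifold classification amounts to counting how many polygons use each edge. A pure-Python sketch on a toy triangle mesh (illustrative only, not the VTK implementation):

```python
from collections import defaultdict

# Toy triangle mesh: each cell is a tuple of point ids.
triangles = [(0, 1, 2), (1, 3, 2), (1, 4, 3)]

# Count how many polygons use each edge (edges keyed by sorted point ids).
edge_use = defaultdict(int)
for tri in triangles:
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        edge_use[tuple(sorted((a, b)))] += 1

# Classify edges the way the Feature Edges filter does.
boundary = [e for e, n in edge_use.items() if n == 1]      # used once
manifold = [e for e, n in edge_use.items() if n == 2]      # used twice
non_manifold = [e for e, n in edge_use.items() if n >= 3]  # three or more
```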
Filter used to explicitly request a specific time from the pipeline.
This property specifies the input to the ForceTime filter.
If set to 0, this filter does nothing and only shallow copies the
input to the output. If set to 1, this filter will always request the
ForcedTime from the pipeline, ignoring upstream time requests.
This property specifies the time to request.
If the IgnorePipelineTime property is set, then this value will override any time request in the VTK pipeline.
Filter used to perform an operation on a data array at 2 different timesteps.
This property specifies the input to the TemporalArrayOperator filter.
This property lists the name of the array to use.
This property specifies the timestep index to use in the first part of the comparison computation.
The property determines the operation to compute
between the data at the first and second timesteps.
This property specifies the timestep index to use in the second part of the comparison computation.
This property sets the suffix to be appended to the output array name.
If empty, the output array name will be suffixed with '_' and the operation type.
The Gradient filter
computes the gradient vector at each point in an image or
volume. This filter uses central differences to compute
the gradients. The Gradient filter operates on uniform
rectilinear (image) data and produces image data
output.
This property specifies the input to the Gradient
filter.
This property lists the name of the array from which to
compute the gradient.
This property indicates whether to compute the gradient
in two dimensions or in three. If the gradient is being computed in two
dimensions, the X and Y dimensions are used.
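The central-difference scheme can be sketched in one dimension (pure Python, illustrative only; boundaries fall back to one-sided differences):

```python
# 1-D scalar samples on a uniform grid with spacing h.
scalars = [0.0, 1.0, 4.0, 9.0, 16.0]   # f(x) = x^2 at x = 0..4
h = 1.0

# Central differences in the interior, one-sided at the boundaries,
# as an image gradient filter would apply along each axis.
grad = []
for i in range(len(scalars)):
    if i == 0:
        grad.append((scalars[1] - scalars[0]) / h)
    elif i == len(scalars) - 1:
        grad.append((scalars[-1] - scalars[-2]) / h)
    else:
        grad.append((scalars[i + 1] - scalars[i - 1]) / (2 * h))
```

In the interior the central difference of x^2 recovers the exact derivative 2x; only the one-sided boundary values deviate.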
The Gradient (Unstructured) filter estimates the gradient
vector at each point or cell. It operates on any type of
vtkDataSet, and the output is the same type as the input.
If the dataset is a vtkImageData, use the Gradient filter
instead; it will be more efficient for this type of
dataset.
This property specifies the input to the Gradient
(Unstructured) filter.
This property lists the name of the scalar array from
which to compute the gradient.
When this flag is on, the gradient filter will compute
the gradient of the input array.
This property provides a name for the output array
containing the gradient vectors.
When this flag is on, the gradient filter will use a
less accurate (but close) algorithm that performs fewer derivative
calculations (and is therefore faster). The resulting error involves some
smoothing of the output data and some possible inaccuracies at the
boundary. This parameter has no effect when computing the gradient of
cell data or when the input grid is not a vtkUnstructuredGrid.
When this flag is on, the gradient filter will compute
the divergence of a 3-component array.
This property provides a name for the output array
containing the divergence.
When this flag is on, the gradient filter will compute
the vorticity/curl of a 3-component array.
This property provides a name for the output array
containing the vorticity vector.
When this flag is on, the gradient filter will compute
the Q-criterion of a 3-component array.
This property provides a name for the output array
containing the Q-criterion.
Specify which dimensions of cells should be used
when computing gradient quantities. Default is to use
the dataset's maximum cell dimension.
Specify what value to use when the gradient quantities at a
point can't be computed with the selected **ContributingCellOption**.
The Gradient
Magnitude filter computes the magnitude of the gradient
vector at each point in an image or volume. This filter
operates on uniform rectilinear (image) data and produces
image data output.
This property specifies the input to the Gradient
Magnitude filter.
This property indicates whether to compute the gradient
magnitude in two or three dimensions. If computing the gradient
magnitude in 2D, the gradients in X and Y are used for computing the
gradient magnitude.
This
filter works on multiblock unstructured grid inputs and
also works in parallel. It ignores any cells with a cell
data Status value of 0. It performs connectivity analysis
to identify distinct fragments separately, and then
integrates the attributes of the fragments.
This property specifies the input of the
filter.
Extracts material fragments from multiblock vtkRectilinearGrid datasets
based on the selected volume fraction array(s) and a fraction isovalue
and integrates the associated attributes.
This property specifies the input of the
filter.
This property specifies the name(s) of the volume
fraction array(s) for generating parts.
This property specifies the name(s) of the volume
fraction array(s) for generating parts.
This property specifies the name(s) of the volume
fraction array(s) for generating parts.
The value of this property is the volume fraction value
for the surface.
This property specifies the input of the
filter.
This property specifies the cell arrays from which the
clip filter will compute clipped cells.
This property specifies the values at which to compute
the isosurface.
If this property is on, internal tetrahedra are
decimated.
If this property is off, each process executes
independently.
Use more memory to merge points on the boundaries of
blocks.
This property specifies the input of the
filter.
This property specifies the cell arrays from which the
contour filter will compute contour cells.
This property specifies the values at which to compute
the isosurface.
If this property is on, the boundary of the data set
is capped.
If this property is on, a transition mesh between levels
is created.
If this property is off, each process executes
independently.
A simple test to see if ghost values are already set
properly.
Use triangles instead of quads on capping
surfaces.
Use more memory to merge points on the boundaries of
blocks.
Combines the running of
AMRContour, AMRFragmentIntegration, AMRDualContour, and ExtractCTHParts.
This property specifies the volume input of the
filter.
This property specifies the cell arrays from which the
analysis will determine fragments.
This property specifies the cell arrays from which the
analysis will determine fragment mass.
This property specifies the cell arrays from which the
analysis will determine volume weighted average values.
This property specifies the cell arrays from which the
analysis will determine mass weighted average values.
This property specifies the values at which to compute
the isosurface.
Whether or not to extract a surface from this data.
Whether the extracted surface should be watertight or not.
Whether or not to integrate fragments in this data.
This property specifies the volume input of the
filter.
This property specifies the cell arrays from which the
analysis will determine fragments.
This property specifies the cell arrays from which the
analysis will determine fragment mass.
This property specifies the cell arrays from which the
analysis will determine volume weighted average values.
This property specifies the cell arrays from which the
analysis will determine mass weighted average values.
This property specifies the volume input of the
filter.
This property specifies the cell arrays from which the
analysis will determine fragments.
This property specifies the values at which to compute
the isosurface.
Resolve the fragments between blocks.
Propagate regionIds into the ghosts.
This property specifies the input of the
filter.
The Linear
Extrusion filter creates a swept surface by translating
the input dataset along a specified vector. This filter is
intended to operate on 2D polygonal data. This filter
operates on polygonal data and produces polygonal data
output.
This property specifies the input to the Linear
Extrusion filter.
The value of this property determines the distance along
the vector the dataset will be translated. (A scale factor of 0.5 will
move the dataset half the length of the vector, and a scale factor of 2
will move it twice the vector's length.)
The value of this property indicates the X, Y, and Z
components of the vector along which to sweep the input
dataset.
The value of this property indicates whether to cap the
ends of the swept surface. Capping works by placing a copy of the input
dataset on either end of the swept surface, so it behaves properly if
the input is a 2D surface composed of filled polygons. If the input
dataset is a closed solid (e.g., a sphere), then if capping is on
(i.e., this property is set to 1), two copies of the data set will be
displayed on output (the second translated from the first one along the
specified vector). If instead capping is off (i.e., this property is
set to 0), then an input closed solid will produce no
output.
The value of this property determines whether the output
will be the same regardless of the number of processors used to compute
the result. The difference is whether there are internal polygonal
faces on the processor boundaries. A value of 1 will keep the results
the same; a value of 0 will allow internal faces on processor
boundaries.
The Loop Subdivision filter increases the granularity of a
polygonal mesh. It works by dividing each triangle in the
input into four new triangles. It is named for Charles
Loop, the person who devised this subdivision scheme. This
filter only operates on triangles, so a data set that
contains other types of polygons should be passed through
the Triangulate filter before applying this filter to it.
This filter only operates on polygonal data (specifically
triangle meshes), and it produces polygonal
output.
This property specifies the input to the Loop
Subdivision filter.
Set the number of subdivision iterations to perform.
Each subdivision divides single triangles into four new
triangles.
The Mask Points
filter reduces the number of points in the dataset. It
operates on any type of dataset, but produces only points
/ vertices as output.
This property specifies the input to the Mask Points
filter.
The value of this property specifies that every
OnStride-th point will be retained in the output when not using Random
mode (it is the skip or stride size for point ids). For example, if
the stride is 3, the output will contain every 3rd point, up to the
maximum number of points.
The value of this property indicates the maximum number
of points in the output dataset.
When this is off, the maximum number of points is taken
per processor when running in parallel (total number of points = number
of processors * maximum number of points). When this is on, the maximum
number of points is distributed proportionally across processors
depending on the number of points on each processor, so that the total
number of points equals the maximum number of points:
maximum number of points per processor = number of points on a processor
* maximum number of points / total number of points across all
processors.
The value of this property indicates the starting point
id in the ordered list of input points from which to start
masking.
If the value of this property is set to true, then the
points in the output will be randomly selected from the input in
various ways set by Random Mode; otherwise this filter will subsample
point ids regularly.
Randomized Id Strides picks points with random id
increments starting at Offset (the output probably isn't a
statistically random sample). Random Sampling generates a statistically
random sample of the input, ignoring Offset (fast - O(sample size)).
Spatially Stratified Random Sampling is a variant of random sampling
that splits the points into equal sized spatial strata before randomly
sampling (slow - O(N log N)).
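The stride-based and random modes can be sketched as follows (pure Python, illustrative only; the variable names mirror the OnStride, Offset, and MaximumNumberOfPoints properties):

```python
import random

points = list(range(10))   # point ids 0..9
on_stride, offset, maximum = 3, 1, 4

# Regular masking: keep every OnStride-th point id starting at Offset,
# up to MaximumNumberOfPoints.
kept = points[offset::on_stride][:maximum]

# Random Sampling mode instead draws a statistically random subset
# of the requested size, ignoring Offset.
rng = random.Random(0)
sampled = sorted(rng.sample(points, maximum))
```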
This property specifies whether to generate vertex cells
as the topology of the output. If set to 1, the geometry (vertices)
will be displayed in the rendering window; otherwise no geometry will
be displayed.
Tell filter to only generate one vertex per cell instead
of multiple vertices in one cell.
The Median filter operates on uniform rectilinear (image
or volume) data and produces uniform rectilinear output.
It replaces the scalar value at each pixel / voxel with
the median scalar value in the specified surrounding
neighborhood. Since the median operation removes outliers,
this filter is useful for removing high-intensity,
low-probability noise (shot noise).
This property specifies the input to the Median
filter.
The value of this property lists the name of the scalar
array to use in computing the median.
The value of this property specifies the number of
pixels/voxels in each dimension to use in computing the median to
assign to each pixel/voxel. If the kernel size in a particular
dimension is 1, then the median will not be computed in that
direction.
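The per-pixel median replacement can be sketched in one dimension (pure Python, illustrative only; the neighborhood is clamped at the edges):

```python
import statistics

# Toy 1-D "image" with one shot-noise spike.
pixels = [10, 10, 200, 10, 10]
kernel = 3   # neighborhood size along this dimension

# Replace each value with the median of its neighborhood.
half = kernel // 2
filtered = []
for i in range(len(pixels)):
    lo, hi = max(0, i - half), min(len(pixels), i + half + 1)
    filtered.append(statistics.median(pixels[lo:hi]))
```

The isolated spike of 200 is removed entirely, which a simple averaging filter would only blur.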
This filter
creates a new cell array containing a geometric measure of
each cell's fitness. Different quality measures can be
chosen for different cell shapes. Supported shapes include linear
triangles, quadrilaterals, tetrahedra, and hexahedra. For
other shapes, a value of 0 is assigned.
This property specifies the input to the Mesh Quality
filter.
This property indicates which quality measure will be
used to evaluate triangle quality. The radius ratio is the radius of
the circle circumscribed through a triangle's 3 vertices divided by the
radius of the circle tangent to the triangle's 3 edges. The edge ratio
is the ratio of the longest edge length to the shortest edge length.
This property indicates which quality measure will be
used to evaluate quadrilateral quality.
This property indicates which quality measure will be
used to evaluate tetrahedral quality. The radius ratio is the radius of
the sphere circumscribed through a tetrahedron's 4 vertices divided by
the radius of the sphere tangent to the tetrahedron's 4 faces. The edge
ratio is the ratio of the longest edge length to the shortest edge
length. The collapse ratio is the minimum, over all vertex/triangle
pairs, of the height of a vertex above the opposite triangle divided by
the longest edge of that triangle.
This property indicates which quality measure will be
used to evaluate hexahedral quality.
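The edge ratio measure, for example, can be sketched for a single triangle (pure Python, illustrative only):

```python
import math

# Triangle vertices (a right triangle with legs 3 and 4).
pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Edge ratio: longest edge length over shortest edge length
# (1.0 for an equilateral triangle; larger means worse shape).
edges = [dist(pts[i], pts[(i + 1) % 3]) for i in range(3)]
edge_ratio = max(edges) / min(edges)
```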
This property specifies the input to the Cell Validation filter.
This filter computes sizes for 0D (1 for a vertex, the number of points for a polyvertex), 1D (length), 2D (area)
and 3D (volume) cells. The ComputeVertexCount, ComputeLength, ComputeArea and ComputeVolume options can be used to specify which cell
dimensions to compute for. The values are placed in a cell data array named ArrayName. The ComputeSum option will give a summation of the
computed cell sizes for a vtkDataSet; for composite datasets, the top-level block will contain a sum over the underlying blocks.
This property specifies the input to the Cell Size filter.
Specify whether or not to compute the number of points in 0D cells.
Specify the name of the array to store the 0D cell vertex count and optionally the field data vertex count sum.
Specify whether or not to compute the length of 1D cells.
Specify the name of the array to store the 1D cell length and optionally the field data length sum.
Specify whether or not to compute the area of 2D cells.
Specify the name of the array to store 2D cell area and optionally the field data area sum.
Specify whether or not to compute the volume of 3D cells.
Specify the name of the array to store 3D cell volume and optionally the field data volume sum.
Specify whether or not to sum the computed sizes of cells in datasets. The result is stored in field data.
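The per-cell size computations can be sketched for a line cell and a triangle cell (pure Python, illustrative only):

```python
# Toy cells: a 1-D line cell and a 2-D triangle cell.
line = [(0.0, 0.0), (3.0, 4.0)]
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]

# Length of the 1-D cell.
(x0, y0), (x1, y1) = line
length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

# Area of the 2-D cell via the shoelace formula.
(ax, ay), (bx, by), (cx, cy) = triangle
area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

# ComputeSum analogue: total of the computed sizes (stored in field data).
total = length + area
```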
This filter
generates surface normals at the points of the input
polygonal dataset to provide smooth shading of the
dataset. The resulting dataset is also polygonal. The
filter works by calculating a normal vector for each
polygon in the dataset and then averaging the normals at
the shared points.
This property specifies the input to the Normals
Generation filter.
The value of this property defines a feature edge. If
the surface normal between two adjacent triangles is at least as large
as this Feature Angle, a feature edge exists. If Splitting is on,
points are duplicated along these feature edges. (See the Splitting
property.)
This property controls the splitting of sharp edges. If
sharp edges are split (property value = 1), then points are duplicated
along these edges, and separate normals are computed for both sets of
points to give crisp (rendered) surface definition.
The value of this property controls whether consistent
polygon ordering is enforced. Generally the normals for a data set
should either all point inward or all point outward. If the value of
this property is 1, then this filter will reorder the points of cells
whose normal vectors are oriented opposite to the direction of the
rest of those in the data set.
If the value of this property is 1, this filter will
reverse the normal direction (and reorder the points accordingly) for
all polygons in the data set; this changes front-facing polygons to
back-facing ones, and vice versa. You might want to do this if your
viewing position will be inside the data set instead of outside of
it.
Turn on/off traversal across non-manifold edges. Not
traversing non-manifold edges will prevent problems where the
consistency of polygonal ordering is corrupted due to topological
loops.
This filter computes the normals at the points in the
data set. In the process of doing this it computes polygon normals too.
If you want these normals to be passed to the output of this filter,
set the value of this property to 1.
Turn this option on to produce the same results
regardless of the number of processors used (i.e., avoid seams along
processor boundaries). Turn it off if you want to process ghost
levels and do not mind seams.
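The core step of averaging polygon normals at shared points can be sketched as follows (pure Python, illustrative only; no edge splitting or consistency checks):

```python
import math

# Two triangles sharing an edge, lying in the z = 0 plane.
points = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
triangles = [(0, 1, 2), (1, 3, 2)]

def face_normal(tri):
    # Cross product of two triangle edge vectors.
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (points[i] for i in tri)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

# Sum the normals of all faces that touch each point, then normalize.
sums = {i: [0.0, 0.0, 0.0] for i in range(len(points))}
for tri in triangles:
    n = face_normal(tri)
    for i in tri:
        for k in range(3):
            sums[i][k] += n[k]

normals = []
for i in range(len(points)):
    m = math.sqrt(sum(c * c for c in sums[i]))
    normals.append(tuple(c / m for c in sums[i]))
```

For this flat mesh every point normal comes out as the +Z unit vector.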
The Outline filter
generates an axis-aligned bounding box for the input
dataset. This filter operates on any type of dataset and
produces polygonal output.
This property specifies the input to the Outline
filter.
The
Outline Corners filter generates the corners of a bounding
box for the input dataset. This filter operates on any
type of dataset and produces polygonal
output.
This property specifies the input to the Outline Corners
filter.
The value of this property sets the size of the corners
as a percentage of the length of the corresponding bounding box
edge.
The
Process Id Scalars filter assigns a unique scalar value to
each piece of the input according to which processor it
resides on. This filter operates on any type of data when
ParaView is run in parallel. It is useful for determining
whether your data is load-balanced across the processors
being used. The output data set type is the same as that
of the input.
This property specifies the input to the Process Id
Scalars filter.
The value of this property determines whether to use
random id values for the various pieces. If set to 1, the unique value
per piece will be chosen at random; otherwise the unique value will
match the id of the process.
The Point
Data to Cell Data filter averages the values of the point
attributes of the points of a cell to compute cell
attributes. This filter operates on any type of dataset,
and the output dataset is the same type as the
input.
This property specifies the input to the Point Data to
Cell Data filter.
The value of this property controls whether the input
point data will be passed to the output. If set to 1, then the input
point data is passed through to the output; otherwise, only generated
cell data is placed into the output.
Control whether the input point data is to be
treated as categorical. If the data is categorical, then the
resultant cell data will be determined by a majority rules
vote, with ties going to the smaller value.
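Both the averaging rule and the categorical majority-vote rule can be sketched for a single cell (pure Python, illustrative only):

```python
from collections import Counter

# A single quad cell referencing four points with point-centered data.
cell = (0, 1, 2, 3)
point_scalars = [1.0, 2.0, 3.0, 6.0]

# Default behavior: the cell value is the average of its point values.
cell_value = sum(point_scalars[i] for i in cell) / len(cell)

# Categorical data: majority vote, with ties going to the smaller value.
labels = [2, 2, 5, 5]
counts = Counter(labels[i] for i in cell)
best = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]
```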
"Create scalar/vector data arrays interpolated to quadrature
points."
This property specifies the input of the filter.
Specifies the offset array from which we interpolate
values to quadrature points.
"Create a point set with data at quadrature
points."
This property specifies the input of the filter.
Specifies the offset array from which we generate
quadrature points.
Generate quadrature scheme dictionaries in data sets that do not have
them.
This property specifies the input of the
filter.
The Quadric
Clustering filter produces a reduced-resolution polygonal
approximation of the input polygonal dataset. This filter
is the one used by ParaView for computing LODs. It uses
spatial binning to reduce the number of points in the data
set; points that lie within the same spatial bin are
collapsed into one representative point.
This property specifies the input to the Quadric
Clustering filter.
This property specifies the number of bins along the X,
Y, and Z axes of the data set.
If the value of this property is set to 1, the
representative point for each bin is selected from one of the input
points that lies in that bin; the input point that produces the least
error is chosen. If the value of this property is 0, the location of
the representative point is calculated to produce the least error
possible for that bin, but the point will most likely not be one of the
input points.
If this property is set to 1, feature edge quadrics will
be used to maintain the boundary edges along processor
divisions.
If this property is set to 1, feature point quadrics
will be used to maintain the boundary points along processor
divisions.
If this property is set to 1, the cell data from the
input will be copied to the output.
If this property is set to 1, triangles completely
contained in a spatial bin will be included in the computation of the
bin's quadrics. When this property is set to 0, the filter operates
faster, but the resulting surface may not be as
well-behaved.
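The spatial-binning idea can be sketched as follows (pure Python, illustrative only; here the representative is simply the bin average, whereas the real filter minimizes a quadric error):

```python
# 2-D points and a 2x2 binning of the unit square.
points = [(0.1, 0.1), (0.2, 0.3), (0.9, 0.8), (0.8, 0.9)]
divisions = 2   # bins per axis

# Group every point by the spatial bin it falls into.
bins = {}
for x, y in points:
    key = (min(int(x * divisions), divisions - 1),
           min(int(y * divisions), divisions - 1))
    bins.setdefault(key, []).append((x, y))

# Collapse each bin's points into one representative point.
representatives = {
    key: (sum(p[0] for p in pts) / len(pts),
          sum(p[1] for p in pts) / len(pts))
    for key, pts in bins.items()
}
```

Four input points collapse to two representatives, one per occupied bin.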
The Random Attributes filter creates random attributes
including scalars and vectors. These attributes can be
generated as point data or cell data. The generation of each
component is normalized between a user-specified minimum and
maximum value.
This filter provides the capability to specify the data type
of the attributes and the range for each of the components.
This property specifies the input to the Random Scalars
filter.
Specify the type of array to create (all components of this
array are of this type). This holds true for all arrays that
are created.
Set the range values (minimum and maximum) for
each component. This applies to all data that is
generated.
Indicate that the generated attributes are
constant within a block. This can be used to highlight
blocks in a composite dataset.
Indicate that point scalars are to be
generated.
Indicate that point vectors are to be
generated.
Indicate that cell scalars are to be
generated.
Indicate that cell vectors are to be
generated.
The Random
Vectors filter generates a point-centered array of random
vectors. It uses a random number generator to determine
the components of the vectors. This filter operates on any
type of data set, and the output data set will be of the
same type as the input.
This property specifies the input to the Random Vectors
filter.
This property specifies the minimum length of the random
point vectors generated.
This property specifies the maximum length of the random
point vectors generated.
The
Reflect filter reflects the input dataset across the
specified plane. This filter operates on any type of data
set and produces an unstructured grid
output.
This property specifies the input to the Reflect
filter.
The value of this property determines which plane to
reflect across. If the value is X, Y, or Z, the value of the Center
property determines where the plane is placed along the specified axis.
The other six options (X Min, X Max, etc.) place the reflection plane
at the specified face of the bounding box of the input
dataset.
If the value of the Plane property is X, Y, or Z, then
the value of this property specifies the center of the reflection
plane.
If this property is set to 1, the output will contain
the union of the input dataset and its reflection. Otherwise the output
will contain only the reflection of the input data.
If off, only Vectors, Normals, and Tensors will be flipped.
If on, all 3-component data arrays (considered as 3D vectors),
6-component data arrays (considered as symmetric tensors), and
9-component data arrays (considered as tensors) of signed type will be flipped.
All other arrays won't be flipped and will only be copied.
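Reflection across an axis-aligned plane, including the flipping of vector data, can be sketched as follows (pure Python, illustrative only):

```python
# Reflect points across the plane x = center, flipping the
# x component of any 3-component vector data as well.
center = 1.0
points = [(0.0, 2.0, 0.0), (3.0, 1.0, 0.0)]
vectors = [(1.0, 0.0, 0.0), (0.5, 0.5, 0.0)]

# A point at distance d from the plane maps to distance d on the
# other side: x -> 2 * center - x.
reflected_points = [(2 * center - x, y, z) for x, y, z in points]

# Vector data must flip its component normal to the plane.
reflected_vectors = [(-vx, vy, vz) for vx, vy, vz in vectors]
```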
The Ribbon
filter creates ribbons from the lines in the input data
set. This filter is useful for visualizing streamlines.
Both the input and output of this filter are polygonal
data. The input data set must also have at least one
point-centered vector array.
This property specifies the input to the Ribbon
filter.
The value of this property indicates the name of the
input scalar array used by this filter. The width of the ribbons will
be varied based on the values in the specified array if the value of
the VaryWidth property is 1.
The value of this property indicates the name of the
input vector array used by this filter. If the UseDefaultNormal
property is set to 0, the normal vectors for the ribbons come from the
specified vector array.
If the VaryWidth property is set to 1, the value of this
property is the minimum ribbon width. If the VaryWidth property is set
to 0, the value of this property is half the width of the
ribbon.
The value of this property specifies the offset angle
(in degrees) of the ribbon from the line normal.
If this property is set to 0, and the input contains no
vector array, then default ribbon normals will be generated
(DefaultNormal property); if a vector array has been set
(SelectInputVectors property), the ribbon normals will be set from the
specified array. If this property is set to 1, the default normal
(DefaultNormal property) will be used, regardless of whether the
SelectInputVectors property has been set.
The value of this property specifies the normal to use
when the UseDefaultNormal property is set to 1 or the input contains no
vector array (SelectInputVectors property).
If this property is set to 1, the ribbon width will be
scaled according to the scalar array specified in the
SelectInputScalars property. Toggle the variation of ribbon width with
scalar value.
The Rotational Extrusion filter forms a surface by
rotating the input about the Z axis. This filter is
intended to operate on 2D polygonal data. It produces
polygonal output.
This property specifies the input to the Rotational
Extrusion filter.
The value of this property controls the number of
intermediate node points used in performing the sweep (rotating from 0
degrees to the value specified by the Angle property).
If this property is set to 1, the open ends of the swept
surface will be capped with a copy of the input dataset. This works
properly if the input is a 2D surface composed of filled polygons. If
the input dataset is a closed solid (e.g., a sphere), then either two
copies of the dataset will be drawn or no surface will be drawn. No
surface is drawn if either this property is set to 0 or if the two
surfaces would occupy exactly the same 3D space (i.e., the Angle
property's value is a multiple of 360, and the values of the
Translation and DeltaRadius properties are 0).
This property specifies the angle of rotation in
degrees. The surface is swept from 0 to the value of this
property.
The value of this property specifies the total amount of
translation along the Z axis during the sweep process. Specifying a
non-zero value for this property allows you to create a corkscrew
(value of DeltaRadius > 0) or spring effect.
The value of this property specifies the change in
radius during the sweep process.
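The sweep can be sketched for a single point rotated about the Z axis (pure Python, illustrative only; no Translation or DeltaRadius):

```python
import math

# Sweep a single point around the Z axis in 'resolution' steps,
# from 0 degrees up to 'angle' degrees.
point = (1.0, 0.0, 0.0)
angle, resolution = 90.0, 3

swept = []
for i in range(resolution + 1):
    theta = math.radians(angle * i / resolution)
    x, y, z = point
    # Standard rotation about Z.
    swept.append((x * math.cos(theta) - y * math.sin(theta),
                  x * math.sin(theta) + y * math.cos(theta),
                  z))
```

Consecutive positions of each input point would be joined into polygons to form the swept surface.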
The Shrink filter
causes the individual cells of a dataset to break apart
from each other by moving each cell's points toward the
centroid of the cell. (The centroid of a cell is the
average position of its points.) This filter operates on
any type of dataset and produces unstructured grid
output.
This property specifies the input to the Shrink
filter.
The value of this property determines how far the points
will move. A value of 0 positions the points at the centroid of the
cell; a value of 1 leaves them at their original
positions.
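The shrink operation can be sketched for one cell (pure Python, illustrative only):

```python
# One triangle cell; shrink its points toward the cell centroid.
cell_points = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
shrink_factor = 0.5   # 0 = collapse to centroid, 1 = unchanged

# Centroid: average position of the cell's points.
cx = sum(p[0] for p in cell_points) / len(cell_points)
cy = sum(p[1] for p in cell_points) / len(cell_points)

# Move each point a fraction of the way from the centroid to itself.
shrunk = [(cx + shrink_factor * (x - cx), cy + shrink_factor * (y - cy))
          for x, y in cell_points]
```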
The Smooth filter operates on a polygonal data set by
iteratively adjusting the position of the points using
Laplacian smoothing. (Because this filter only adjusts
point positions, the output data set is also polygonal.)
This results in better-shaped cells and more evenly
distributed points. The Convergence slider limits the
maximum motion of any point. It is expressed as a fraction
of the length of the diagonal of the bounding box of the
data set. If the maximum point motion during a smoothing
iteration is less than the Convergence value, the
smoothing operation terminates.
This property specifies the input to the Smooth
filter.
This property sets the maximum number of smoothing
iterations to perform. More iterations produce better
smoothing.
The value of this property limits the maximum motion of
any point. It is expressed as a fraction of the length of the diagonal
of the bounding box of the input dataset. If the maximum point motion
during a smoothing iteration is less than the value of this property,
the smoothing operation terminates.
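Each Laplacian iteration moves every interior point toward the average of its neighbors. A sketch on a 1-D polyline (pure Python, illustrative only; endpoints are held fixed and a fixed iteration count stands in for the convergence test):

```python
# Smooth a zig-zag polyline; interior points relax toward the
# average of their two neighbors each iteration.
ys = [0.0, 1.0, -1.0, 1.0, 0.0]
iterations, relaxation = 20, 0.5

for _ in range(iterations):
    new = ys[:]
    for i in range(1, len(ys) - 1):
        target = (ys[i - 1] + ys[i + 1]) / 2.0
        new[i] = ys[i] + relaxation * (target - ys[i])
    ys = new
```

After enough iterations the zig-zag flattens out between the fixed endpoints.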
The
Triangle Strips filter converts triangles into triangle
strips and lines into polylines. This filter operates on
polygonal data sets and produces polygonal
output.
This property specifies the input to the Triangle Strips
filter.
This property specifies the maximum number of
triangles/lines to include in a triangle strip or
polyline.
The Plot on Sorted Lines filter sorts and orders
polylines for graph visualization. See http://www.paraview.org/ParaView3/index.php/Plotting_Over_Curves for more information.
This property specifies the input to the Plot Edges
filter.
Extracts the surface, intersects it with a 2D plane, and plots
the resulting polylines.

The Subdivide filter iteratively divides each triangle in
the input dataset into 4 new triangles. Three new points
are added per triangle -- one at the midpoint of each
edge. This filter operates only on polygonal data
containing triangles, so run your polygonal data through
the Triangulate filter first if it is not composed of
triangles. The output of this filter is also
polygonal.
This parameter specifies the input to the Subdivide
filter.
The value of this property specifies the number of
subdivision iterations to perform.
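One step of the subdivision described above can be sketched in plain Python (illustrative only; helper names are made up):

```python
def midpoint(a, b):
    """Midpoint of two 2D points."""
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(triangle):
    """Split one triangle into four by inserting the midpoint of
    each edge -- a single iteration of the Subdivide filter,
    sketched for one 2D triangle."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    # Three corner triangles plus the central triangle of midpoints.
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = subdivide(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
```

Each iteration multiplies the triangle count by four, which is why the iteration count should be increased with care.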
The Tensor Glyph filter generates an ellipsoid, cuboid, cylinder or superquadric glyph at every point in
the input data set. The glyphs are oriented and scaled according to eigenvalues and eigenvectors of tensor
point data of the input data set. The Tensor Glyph filter operates on any type of data set. Its output is
polygonal. This filter supports symmetric tensors. Symmetric tensor components are expected to have the
following order: XX, YY, ZZ, XY, YZ, XZ.
This property specifies the input to the Glyph filter.
This property indicates the name of the tensor array on which to operate. The indicated array's
eigenvalues and eigenvectors are used for scaling and orienting the glyphs.
This property determines which type of glyph will be placed at the points in the input dataset.
Toggle whether to extract eigenvalues from the tensor. If false, eigenvalues/eigenvectors are not extracted and
the columns of the tensor are taken as the eigenvectors (the norm of each column, always positive, is the eigenvalue).
If true, the glyph is scaled and oriented according to eigenvalues and eigenvectors; additionally, eigenvalues
are provided as new data array.
This property determines whether or not to color the glyphs.
This property indicates the name of the scalar array to use for coloring.
This property determines whether input scalars or computed eigenvalues at the point should be used
to color the glyphs. If ThreeGlyphs is set and the eigenvalues are chosen for coloring then each glyph
is colored by the corresponding eigenvalue and if not set the color corresponding to the largest
eigenvalue is chosen.
This property specifies the scale factor to scale every glyph by.
This property determines whether scaling of glyphs by ScaleFactor times eigenvalue should be limited.
This is useful to prevent uncontrolled scaling near singularities.
If scaling by eigenvalues should be limited, this value sets an upper limit for scale factor times
eigenvalue.
This property determines whether or not to draw a mirror of each glyph.
Toggle whether to produce three glyphs, each of which is oriented along an eigenvector and scaled according
to the corresponding eigenvalue.
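The eigen-decomposition that drives the glyph scaling and orientation can be sketched for a symmetric 2x2 tensor (the filter itself works on 3x3 symmetric tensors; the 2x2 case is shown because it has a closed form, and the helper name is made up):

```python
import math

def sym2x2_eigen(a, b, c):
    """Eigenvalues and major-axis angle of the symmetric 2x2 tensor
    [[a, b], [b, c]].  The eigenvalues give the glyph's axis scales
    and the angle gives its orientation, analogous to what Tensor
    Glyph computes in 3D."""
    mean = (a + c) / 2
    r = math.hypot((a - c) / 2, b)
    l1, l2 = mean + r, mean - r              # eigenvalues, l1 >= l2
    theta = 0.5 * math.atan2(2 * b, a - c)   # major-axis angle (radians)
    return (l1, l2), theta

# A diagonal tensor: eigenvalues are the diagonal entries, no rotation.
(lam1, lam2), angle = sym2x2_eigen(2.0, 0.0, 1.0)
```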
The Tessellate filter
tessellates cells with nonlinear geometry and/or scalar
fields into a simplicial complex with linearly
interpolated field values that more closely approximate
the original field. This is useful for datasets containing
quadratic cells.
This property specifies the input to the Tessellate
filter.
The value of this property sets the maximum
dimensionality of the output tessellation. When the value of this
property is 3, 3D cells produce tetrahedra, 2D cells produce triangles,
and 1D cells produce line segments. When the value is 2, 3D cells will
have their boundaries tessellated with triangles. When the value is 1,
all cells except points produce line segments.
This property controls the maximum chord error allowed
at any edge midpoint in the output tessellation. The chord error is
measured as the distance between the midpoint of any output edge and
the original nonlinear geometry.
This property controls the maximum field error allowed
at any edge midpoint in the output tessellation. The field error is
measured as the difference between a field value at the midpoint of an
output edge and the value of the corresponding field in the original
nonlinear geometry.
This property specifies the maximum number of times an
edge may be subdivided. Increasing this number allows further
refinement but can drastically increase the computational and storage
requirements, especially when the value of the OutputDimension property
is 3.
If the value of this property is set to 1, coincident
vertices will be merged after tessellation has occurred. Only geometry
is considered during the merge and the first vertex encountered is the
one whose point attributes will be used. Any discontinuities in point
fields will be lost. On the other hand, many operations, such as
streamline generation, require coincident vertices to be merged. Toggle
whether to merge coincident vertices.
The
Tetrahedralize filter converts the 3D cells of any type of
dataset to tetrahedrons and the 2D ones to triangles. This
filter always produces unstructured grid
output.
This property specifies the input to the Tetrahedralize
filter.
The Transform
filter allows you to specify the position, size, and
orientation of polygonal, unstructured grid, and
curvilinear data sets.
This property specifies the input to the Transform
filter.
The values in this property allow you to specify the
transform (translation, rotation, and scaling) to apply to the input
dataset.
If off, only Vectors and Normals will be transformed.
If on, all 3-component data arrays (considered as 3D vectors) will be transformed.
All other arrays will not be transformed; they will only be copied.
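The effect of such a transform on one point can be sketched as scale, then rotation, then translation (an illustrative sketch only: ParaView composes rotations about all three axes in a fixed order, while this helper, whose name is made up, applies a single rotation about Z):

```python
import math

def transform_point(p, translate, rotate_z_deg, scale):
    """Apply per-axis scaling, a rotation about the Z axis, and a
    translation to a single 3D point, in that order."""
    x, y, z = (c * s for c, s in zip(p, scale))
    a = math.radians(rotate_z_deg)
    x, y = (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))
    return tuple(c + t for c, t in zip((x, y, z), translate))

# Scale by 2, rotate 90 degrees about Z, lift by 5 along Z:
q = transform_point((1.0, 0.0, 0.0), (0.0, 0.0, 5.0), 90.0, (2.0, 2.0, 2.0))
```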
The
Triangulate filter decomposes polygonal data into only
triangles, points, and lines. It separates triangle strips
and polylines into individual triangles and lines,
respectively. The output is polygonal data. Some filters
that take polygonal data as input require that the data be
composed of triangles rather than other polygons, so
passing your data through this filter first is useful in
such situations. You should use this filter in these cases
rather than the Tetrahedralize filter because they produce
different output dataset types. The filters referenced
require polygonal input, and the Tetrahedralize filter
produces unstructured grid output.
This property specifies the input to the Triangulate
filter.
The Tube filter
creates tubes around the lines in the input polygonal
dataset. The output is also polygonal.
This property specifies the input to the Tube
filter.
This property indicates the name of the scalar array on
which to operate. The indicated array may be used for scaling the
tubes. (See the VaryRadius property.)
This property indicates the name of the vector array on
which to operate. The indicated array may be used for scaling and/or
orienting the tubes. (See the VaryRadius property.)
The value of this property indicates the number of faces
around the circumference of the tube.
If this property is set to 1, endcaps will be drawn on
the tube. Otherwise the ends of the tube will be open.
The value of this property sets the radius of the tube.
If the radius is varying (VaryRadius property), then this value is the
minimum radius.
This property determines whether/how to vary the radius
of the tube. If varying by scalar (1), the tube radius is based on the
point-based scalar values in the dataset. If it is varied by vector,
the vector magnitude is used in varying the radius.
If varying the radius (VaryRadius property), the
property sets the maximum tube radius in terms of a multiple of the
minimum radius. If not varying the radius, this value has no
effect.
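A simple linear model of the scalar-driven radius described above (an assumption for illustration; VTK's exact radius-mapping formula may differ, and the helper name is made up):

```python
def tube_radius(s, s_min, s_max, radius, ratio):
    """Radius along the tube when varying by scalar: the minimum
    radius maps to the smallest scalar value and ratio * radius
    (the RadiusFactor multiple) maps to the largest."""
    t = (s - s_min) / (s_max - s_min)      # normalized scalar in [0, 1]
    return radius * (1.0 + (ratio - 1.0) * t)
```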
If this property is set to 0, and the input contains no
vector array, then default ribbon normals will be generated
(DefaultNormal property); if a vector array has been set
(SelectInputVectors property), the ribbon normals will be set from the
specified array. If this property is set to 1, the default normal
(DefaultNormal property) will be used, regardless of whether the
SelectInputVectors property has been set.
The value of this property specifies the normal to use
when the UseDefaultNormal property is set to 1 or the input contains no
vector array (SelectInputVectors property).
The Warp (scalar) filter translates the points of the
input data set along a vector by a distance determined by
the specified scalars. This filter operates on polygonal,
curvilinear, and unstructured grid data sets containing
single-component scalar arrays. Because it only changes
the positions of the points, the output data set type is
the same as that of the input. Any scalars in the input
dataset are copied to the output, so the data can be
colored by them.
This property specifies the input to the Warp (scalar)
filter.
This property contains the name of the scalar array by
which to warp the dataset.
The scalar value at a given point is multiplied by the
value of this property to determine the magnitude of the change vector
for that point.
The values of this property specify the direction along
which to warp the dataset if any normals contained in the input dataset
are not being used for this purpose. (See the UseNormal
property.)
If point normals are present in the dataset, the value
of this property toggles whether to use a single normal value (value =
1) or the normals from the dataset (value = 0).
If the value of this property is 1, then the
Z-coordinates from the input are considered to be the scalar values,
and the displacement is along the Z axis. This is useful for creating
carpet plots.
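The displacement rule described above is a simple per-point formula, sketched here in plain Python (illustrative only; the helper name is made up):

```python
def warp_by_scalar(point, direction, scalar, scale_factor):
    """New position = p + scale_factor * scalar * direction, where
    direction is either the point normal from the dataset or the
    user-supplied normal, as described above."""
    return tuple(p + scale_factor * scalar * d
                 for p, d in zip(point, direction))

# Warp a point along +Z by scalar 3.0 with scale factor 0.5:
q = warp_by_scalar((1.0, 2.0, 0.0), (0.0, 0.0, 1.0), 3.0, 0.5)
```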
The Warp (vector) filter translates the points of the
input dataset using a specified vector array. The vector
array chosen specifies a vector per point in the input.
Each point is translated along its vector by a given scale
factor. This filter operates on polygonal, curvilinear,
and unstructured grid datasets. Because this filter only
changes the positions of the points, the output dataset
type is the same as that of the input.
This property specifies the input to the Warp (vector)
filter.
The value of this property contains the name of the
vector array by which to warp the dataset's point
coordinates.
Each component of the selected vector array will be
multiplied by the value of this property before being used to compute
new point coordinates.
This filter
extracts the portion of the input dataset that lies along
the specified plane. The Slice filter takes any type of
dataset as input. The output of this filter is polygonal
data.
This property specifies the input to the Slice
filter.
This property sets the parameters of the slice
function.
This parameter controls whether to extract the entire
cells that are sliced by the region or just extract a triangulated
surface of that region.
This parameter controls whether to produce triangles in the output.
The values in this property specify a list of current
offset values. This can be used to create multiple slices with
different centers. Each entry represents a new slice with its center
shifted by the offset value.
This filter
extracts the portion of the input dataset that lies along
the specified plane. The Slice filter takes any type of
dataset as input. The output of this filter is polygonal
data.
This property specifies the input to the Slice
filter.
This property sets the parameters of the slice
function.
The values in this property specify a list of current
offset values. This can be used to create multiple slices with
different centers. Each entry represents a new slice with its center
shifted by the offset value.
This filter
extracts the portion of the input dataset that lies along
the specified plane. The Slice filter takes any type of
dataset as input. The output of this filter is a multiblock of multipiece
polygonal data. This is a multithreaded implementation.
This property specifies the input of the slice
filter.
This property sets the parameters of the plane
function.
If enabled and the input is not image data, a sphere tree will be computed for faster slice computation.
If enabled and the input is not image data, a tree hierarchy will be computed for faster slice computation.
If enabled, compute the normal on each cell. Since all output cells are coplanar,
the generated normal is simply the normal of the plane used for slicing.
By default, computing normals is disabled.
If enabled, interpolate attribute data. This is enabled by
default. Point data is always interpolated; cell data is transferred
unless the input is image data.
If enabled and input is a Structured Grid or a Rectilinear Grid,
output slice will consist of polygons instead of only triangles.
The Clip filter
cuts away a portion of the input data set using an
implicit function (an implicit description).
This filter operates on all types of data
sets, and it returns unstructured grid data on
output.
This property specifies the dataset on which the Clip
filter will operate.
This property specifies the parameters of the clip
function (an implicit description) used to clip the dataset.
If clipping with scalars, this property specifies the
name of the scalar array on which to perform the clip
operation.
If clipping with scalars, this property sets the scalar
value about which to clip the dataset based on the scalar array chosen.
(See SelectInputScalars.) If clipping with a clip function, this
property specifies an offset from the clip function to use in the
clipping operation. Neither functionality is currently available in
ParaView's user interface.
Invert which part of the geometry is clipped.
If UseValueAsOffset is true, Value is used as an offset
parameter to the implicit function. Otherwise, Value is used only when
clipping using a scalar array.
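The basic operation behind clipping with scalars is linear interpolation along each cell edge that the clip value crosses, sketched here (illustrative only; the helper name is made up):

```python
def clip_edge(p0, p1, s0, s1, value):
    """Point where the clip value crosses an edge: interpolate
    between the edge endpoints p0 and p1 by the scalar values
    s0 and s1 attached to them."""
    t = (value - s0) / (s1 - s0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Scalar rises from 10 to 20 along the edge; clip value 15 lands midway:
crossing = clip_edge((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 10.0, 20.0, 15.0)
```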
This parameter controls whether to extract entire cells
in the given region or to clip those cells so that all of the output
stays on one side of the region.
If this property is set to 1 it will clip to the exact specifications
for the **Box** option only, otherwise the clip will only approximate the box geometry. The
exact clip is very expensive as it requires generating 6 plane clips. Additionally,
**Invert** must be checked and **Crinkle clip** must be unchecked.
This clip filter cuts away a portion of the input polygonal dataset using
a plane to generate a new polygonal dataset.
This property specifies the dataset on which the Clip
filter will operate.
This property specifies the parameters of the clipping
plane used to clip the polygonal data.
Generate polygonal faces in the output.
Generate clipping outlines in the output wherever an
input face is cut by the clipping plane.
Generate (cell) data for coloring purposes such that the
newly generated cells (including capping faces and clipping outlines)
can be distinguished from the input cells.
If this flag is turned off, the clipper will return the
portion of the data that lies within the clipping plane. Otherwise, the
clipper will return the portion of the data that lies outside the
clipping plane.
Specify the tolerance for creating new points. A small
value might incur degenerate triangles.
Specify the color for the faces from the
input.
Specify the color for the capping faces (generated on
the clipping interface).
The Threshold filter extracts the portions of the input
dataset whose scalars lie within the specified range. This
filter operates on either point-centered or cell-centered
data. This filter operates on any type of dataset and
produces unstructured grid output. To select between these
two options, select either Point Data or Cell Data from
the Attribute Mode menu. Once the Attribute Mode has been
selected, choose the scalar array from which to threshold
the data from the Scalars menu. The Lower Threshold and
Upper Threshold sliders determine the range of the scalars
to retain in the output. The All Scalars check box only
takes effect when the Attribute Mode is set to Point Data.
If the All Scalars option is checked, then a cell will
only be passed to the output if the scalar values of all
of its points lie within the range indicated by the Lower
Threshold and Upper Threshold sliders. If unchecked, then
a cell will be added to the output if the specified scalar
value for any of its points is within the chosen
range.
This property specifies the input to the Threshold
filter.
The value of this property contains the name of the
scalar array from which to perform thresholding.
The values of this property specify the upper and lower
bounds of the thresholding operation.
If the value of this property is 1, then a cell is only
included in the output if the value of the selected array for all its
points is within the threshold. This is only relevant when thresholding
by a point-centered array.
If off, the vertex scalars are treated as a discrete set. If on, they
are treated as a continuous interval over the minimum and maximum. One
important "on" use case: When setting lower and upper threshold
equal to some value and turning AllScalars off, the results are
cells containing the isosurface for that value. WARNING: Whether on
or off, for higher order input, the filter will not give accurate
results.
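The All Scalars behavior described above amounts to an all-versus-any test per cell, sketched here (illustrative only; the helper name is made up):

```python
def cell_passes(cell_scalars, lower, upper, all_scalars):
    """Threshold test for one cell with point-centered scalars:
    with AllScalars on, every point value must lie in range; with
    it off, a single in-range point value suffices."""
    in_range = [lower <= s <= upper for s in cell_scalars]
    return all(in_range) if all_scalars else any(in_range)
```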
This filter clips away cells using lower and upper
thresholds.
This property specifies the input to the Threshold
filter.
The value of this property contains the name of the
scalar array from which to perform thresholding.
The values of this property specify the upper and lower
bounds of the thresholding operation.
The Contour
filter computes isolines or isosurfaces using a selected
point-centered scalar array. The Contour filter operates
on any type of data set, but the input is required to have
at least one point-centered scalar (single-component)
array. The output of this filter is
polygonal.
This property specifies the input dataset to be used by
the contour filter.
This property specifies the name of the scalar array
from which the contour filter will compute isolines and/or
isosurfaces.
If this property is set to 1, a scalar array containing
a normal value at each point in the isosurface or isoline will be
created by the contour filter; otherwise an array of normals will not
be computed. This operation is fairly expensive both in terms of
computation time and memory required, so if the output dataset produced
by the contour filter will be processed by filters that modify the
dataset's topology or geometry, it may be wise to set the value of this
property to 0. Select whether to compute normals.
If this property is set to 1, a scalar array containing
a gradient value at each point in the isosurface or isoline will be
created by this filter; otherwise an array of gradients will not be
computed. This operation is fairly expensive both in terms of
computation time and memory required, so if the output dataset produced
by the contour filter will be processed by filters that modify the
dataset's topology or geometry, it may be wise to set the value of this
property to 0. Note that if ComputeNormals is set to 1, then gradients
will have to be calculated, but they will only be stored in the output
dataset if ComputeGradients is also set to 1.
If this property is set to 1, an array of scalars
(containing the contour value) will be added to the output dataset. If
set to 0, the output will not contain this array.
Select the output precision of the coordinates. **Single** sets the
output to single-precision floating-point (i.e., float), **Double**
sets it to double-precision floating-point (i.e., double), and
**Default** sets it to the same precision as the precision of the
points in the input. Defaults to ***Single***.
This parameter controls whether to produce triangles in the output.
Warning: Many filters do not properly handle non-triangular polygons.
This property specifies the values at which to compute
isosurfaces/isolines and also the number of such
values.
This property specifies an incremental point locator for
merging duplicate / coincident points.
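The Isosurfaces property holds an explicit list of values; generating such a list evenly across a scalar range can be sketched as follows (a hypothetical helper for illustration; in ParaView you type values or generate a range in the UI):

```python
def contour_values(vmin, vmax, n):
    """n evenly spaced isosurface values spanning [vmin, vmax];
    a single value lands at the midpoint of the range."""
    if n == 1:
        return [(vmin + vmax) / 2]
    step = (vmax - vmin) / (n - 1)
    return [vmin + i * step for i in range(n)]

vals = contour_values(0.0, 1.0, 5)
```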
The Glyph filter generates a glyph (i.e., an arrow, cone, cube, cylinder, line,
sphere, or 2D glyph) at each point or cell in the input dataset. The glyphs can be
oriented and scaled by the input scalar and vector arrays. If the arrays are
point-centered, glyphs are placed at points in the input dataset. If the arrays
are cell-centered, glyphs are placed at the center of cells in the input dataset.
A transform that applies to the glyph source can be modified to change the shape
of the glyph. This filter operates on any type of data set. Its output is polygonal.
To use this filter, select the **Scale Array** to control glyph scaling
and **Orientation Array** to orient the glyphs if desired - each array
can be set to 'None' if scaling or orientation is not desired. When scaling
by a 3-element vector array, the **Vector Scale Mode** can be set to either
'Scale by Magnitude', which scales glyphs according to the vector magnitude,
or 'Scale by Components', which treats each component as a separate scaling
factor in the corresponding dimension, i.e., the first component is the
scaling factor in the x-dimension, the second component scales the y-dimension,
and the third component scales the z-dimension.
When the **Rescale Glyphs** option is on, data in the **Scale Array** will be
mapped linearly from the **Glyph Data Range** to the range [0, **Maximum Glyph Size**].
If, however, the **Vector Scale Mode** property is set to 'Scale by Components',
the glyphs will be scaled according to each vector component without this remapping.
The **Glyph Mode** property controls which points in the input dataset
are selected for glyphing (since in most cases, glyphing all points in
the input dataset can be both performance impeding as well as visually
cluttered).
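The two vector scale modes described above can be sketched as follows (illustrative only; the helper name is made up):

```python
import math

def glyph_scale(vector, mode):
    """Per-glyph scale factors from a 3-component array:
    'magnitude' yields one uniform factor from the vector length,
    'components' yields one factor per axis from the components."""
    if mode == "magnitude":
        m = math.sqrt(sum(c * c for c in vector))
        return (m, m, m)
    return tuple(vector)   # 'components': per-axis factors

# A (3, 4, 0) vector scales the glyph uniformly by its length, 5:
s = glyph_scale((3.0, 4.0, 0.0), "magnitude")
```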
This property specifies the input to this filter. This is the
dataset from which the locations are selected to be glyphed.
This property determines which type of glyph will be placed at the
points in the input dataset.
The values in this property allow you to specify the transform
(translation, rotation, and scaling) to apply to the glyph
source.
Select the input array to be used for scaling the glyphs. If the scale
array is a vector array, you can control how the glyphs are scaled with
the **Vector Scale Mode** property.
Select the mode when the scaling array is a vector. **Scale by Magnitude** scales the glyph by
the vector magnitude. **Scale by Components** scales glyphs by each vector component in the dimension
that component represents, e.g., the x-direction is scaled by component 0, the y-direction is
scaled by component 1, and so on.
Specifies array component to use. Fixed at 4 to ensure the
ArrayRangeDomain is set to the vector magnitude for up to 3-component
arrays.
Enable rescaling the glyph scale factor to the value specified by **Glyph Size Range**.
If off, the glyph scale factor will be taken directly from the Scale Array, if one is
selected.
The range of the data. The first element defines the value that maps
to a glyph size of zero, and the second element defines the value that maps to the
maximum glyph size. Data values inside the range are mapped linearly between 0 and
the **Maximum Glyph Size**, while data values outside this range will be clamped to this
range prior to mapping to the glyph size.
Sets the maximum glyph size. The upper data value specified in the **Glyph Data Range**
will map to this glyph size while the lower data value will map to zero glyph size.
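The clamp-then-remap behavior described above is a straightforward linear mapping, sketched here (illustrative only; the helper name is made up):

```python
def rescaled_size(value, data_min, data_max, max_size):
    """Map a Scale Array value through the Glyph Data Range
    [data_min, data_max] to a glyph size in [0, max_size],
    clamping out-of-range values first."""
    v = min(max(value, data_min), data_max)
    return max_size * (v - data_min) / (data_max - data_min)

# A value midway through the data range yields half the maximum size:
size = rescaled_size(7.5, 5.0, 10.0, 2.0)
```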
Select the input array to use for orienting the glyphs.
This property indicates the mode that will be used to generate
glyphs from the dataset.
This property specifies the maximum number of sample points to use
when sampling the space when Uniform Spatial Distribution is used.
This property specifies the seed that will be used for generating a
uniform distribution of glyph points when a Uniform Spatial
Distribution is used.
This property specifies the stride that will be used when glyphing by
Every Nth Point.
It has been replaced by 'GlyphWithCustomSource'. Please consider using that instead.
The Glyph filter generates a glyph (i.e., an arrow, cone, cube,
cylinder, line, sphere, or 2D glyph) at each point in the input
dataset. The glyphs can be oriented and scaled by the input
point-centered scalars and vectors. The Glyph filter operates on any
type of data set. Its output is polygonal.
To use this filter, you first select the arrays to use as the
**Scalars** and **Vectors**, if any. To orient the glyphs using the
selected **Vectors**, use **Orient** property. To scale the glyphs using
the selected **Scalars** or **Vectors**, use the **Scale Mode** property.
The **Glyph Mode** property controls which points in the input dataset
are selected for glyphing (since in most cases, glyphing all points in
the input dataset can be both performance impeding as well as visually
cluttered).
This property specifies the input to the Glyph filter. This is the
dataset from which the points are selected to be glyphed.
This property determines which type of glyph will be
placed at the points in the input dataset.
Select the input array to be treated as the active **Scalars**. You
can scale the glyphs using the selected scalars by setting the **Scale
Mode** property to **scalar**.
Select the input array to be treated as the active **Vectors**. You can
scale the glyphs using the selected vectors by setting the **Scale
Mode** property to **vector** or **vector_components**. You can orient the
glyphs using the selected vectors by checking the **Orient** property.
If this property is set to 1, the glyphs will be oriented based on the
vectors selected using the **Vectors** property.
Select how to scale the glyphs. Set to **off** to disable scaling
entirely. Set to **scalar** to scale the glyphs using the array selected
using the **Scalars** property. Set to **vector** to scale the glyphs
using the magnitude of the array selected using the **Vectors**
property. Set to **vector_components** to scale using the **Vectors**,
scaling each component individually.
Specify the constant multiplier to use to scale the glyphs.
This property indicates the mode that will be used to generate
glyphs from the dataset.
This property specifies the maximum number of sample points to use
when sampling the space when Uniform Spatial Distribution is used.
This property specifies the seed that will be used for generating a
uniform distribution of glyph points when a Uniform Spatial
Distribution is used.
This property specifies the stride that will be used when glyphing by
Every Nth Point.
The values in this property allow you to specify the transform
(translation, rotation, and scaling) to apply to the glyph
source.
The glyph is provided as the **Source** input to this filter.
The Glyph filter generates a glyph (i.e., an arrow, cone, cube, cylinder, line,
sphere, or 2D glyph) at each point or cell in the input dataset. The glyphs can be
oriented and scaled by the input scalar and vector arrays. If the arrays are
point-centered, glyphs are placed at points in the input dataset. If the arrays
are cell-centered, glyphs are placed at the center of cells in the input dataset.
A transform that applies to the glyph source can be modified to change the shape
of the glyph. This filter operates on any type of data set. Its output is polygonal.
To use this filter, select the **Scale Array** to control glyph scaling
and **Orientation Array** to orient the glyphs if desired - each array
can be set to 'None' if scaling or orientation is not desired. When scaling
by a 3-element vector array, the **Vector Scale Mode** can be set to either
'Scale by Magnitude', which scales glyphs according to the vector magnitude,
or 'Scale by Components', which treats each component as a separate scaling
factor in the corresponding dimension, i.e., the first component is the
scaling factor in the x-dimension, the second component scales the y-dimension,
and the third component scales the z-dimension.
When the **Rescale Glyphs** option is on, data in the **Scale Array** will be
mapped linearly from the **Glyph Data Range** to the range [0, **Maximum Glyph Size**].
If, however, the **Vector Scale Mode** property is set to 'Scale by Components',
the glyphs will be scaled according to each vector component without this remapping.
The **Glyph Mode** property controls which points in the input dataset
are selected for glyphing (since in most cases, glyphing all points in
the input dataset can be both performance impeding as well as visually
cluttered).
This property specifies the input to this filter. This is the
dataset from which the points are selected to be glyphed.
This property determines the glyph geometry source that will be
placed at the points in the input dataset.
This property determines which type of glyph will be
placed at the points in the input dataset.
The values in this property allow you to specify the transform
(translation, rotation, and scaling) to apply to the glyph
source.
Select the input array to be used for scaling the glyphs. If the scale
array is a vector array, you can control how the glyphs are scaled with
the **Vector Scale Mode** property.
Select the mode when the scaling array is a vector. **Scale by Magnitude** scales the glyph by
the vector magnitude. **Scale by Components** scales glyphs by each vector component in the dimension
that component represents, e.g., the x-direction is scaled by component 0, the y-direction is
scaled by component 1, and so on.
Specifies array component to use. Fixed at 4 to ensure the
ArrayRangeDomain is set to the vector magnitude for up to 3-component
arrays.
Enable rescaling the glyph scale factor to the value specified by **Glyph Size Range**.
If off, the glyph scale factor will be taken directly from the Scale Array, if one is
selected.
The range of the data. The first element defines the value that maps
to a glyph size of zero, and the second element defines the value that maps to the
maximum glyph size. Data values inside the range are mapped linearly between 0 and
the **Maximum Glyph Size**, while data values outside this range will be clamped to this
range prior to mapping to the glyph size.
Sets the maximum glyph size. The upper data value specified in the **Glyph Data Range**
will map to this glyph size while the lower data value will map to zero glyph size.
Select the input array to use for orienting the glyphs.
This property indicates the mode that will be used to generate
glyphs from the dataset.
This property specifies the maximum number of sample points to use
when sampling the space when Uniform Spatial Distribution is used.
This property specifies the seed that will be used for generating a
uniform distribution of glyph points when a Uniform Spatial
Distribution is used.
This property specifies the stride that will be used when glyphing by
Every Nth Point.
The Glyph filter generates a glyph at each point in the input dataset.
The glyphs can be oriented and scaled by the input point-centered scalars
and vectors. The Glyph filter operates on any type of data set. Its
output is polygonal. This filter is available on the
Toolbar.
This property specifies the input to the Glyph filter.
This is the dataset from which the points are selected to be glyphed.
This property determines which type of glyph will be
placed at the points in the input dataset.
Select the input array to be treated as the active "Scalars".
You can scale the glyphs using the selected scalars by setting the
"Scale Mode" property to "scalar"
Select the input array to be treated as the active "Vectors".
You can scale the glyphs using the selected vectors by setting the "Scale Mode"
property to "vector" or "vector_components". You can orient the glyphs using the
selected vectors by checking the "Orient" property.
If this property is set to 1, the glyphs will be
oriented based on the vectors selected using the "Vectors" property.
Select how to scale the glyphs. Set to "off" to disable
scaling entirely. Set to "scalar" to scale the glyphs using the
array selected using the "Scalars" property. Set to "vector" to scale the
glyphs using the magnitude of the array selected using the "Vectors" property.
Set to "vector_components" to scale using the "Vectors", scaling each component
individually.
Specify the constant multiplier to use to scale the glyphs.
This property indicates the mode that will be used to generate
glyphs from the dataset.
This property specifies the maximum number of sample points to use
when sampling the space when Uniform Spatial Distribution is used.
This property specifies the seed that will be used for generating
a uniform distribution of glyph points when a Uniform Spatial
Distribution is used.
This property specifies the stride that will be used when glyphing
by Every Nth Point.
The values in this property allow you to specify the
transform (translation, rotation, and scaling) to apply to the glyph
source.
This filter allows you to specify a location and then either interpolate
the data attributes from the input dataset at that location or extract the
cell(s) at the location.
Set the input dataset producer
Select whether to interpolate (probe) data attributes at the specified
location, or to extract cell(s) containing the specified location.
Select the location of interest in 3D space.
The Probe filter samples the data set attributes of the
current data set at the points in a point cloud. The Probe
filter uses interpolation to determine the values at the
selected point, whether or not it lies at an input point.
The Probe filter operates on any type of data and produces
polygonal output (a point cloud).
This property specifies the dataset from which to obtain
probe values.
This property specifies the dataset whose geometry will
be used in determining positions to probe.
Set whether to pass the field-data arrays from the Input i.e. the input
providing the geometry to the output. On by default.
Set whether to compute the tolerance or to use a user provided
value. On by default.
Set the tolerance to use for
vtkDataSet::FindCell
The Plot Over Line filter samples the data set attributes
of the current data set at the points along a line. The
values of the point-centered variables along that line
will be displayed in an XY Plot. This filter uses
interpolation to determine the values at the selected
point, whether or not it lies at an input point. The Probe
filter operates on any type of data and produces polygonal
output (a line).
Probe is a filter that computes point attributes at
specified point positions. The filter has two inputs: the
Input and Source. The 'Source' geometric structure is passed
through the filter. The point attributes are computed at
the 'Source' point positions by interpolating into the
'Input' data. For example, we can compute data values on a plane
(plane specified as Source) from a volume (Input). The
cell data of the Input data is copied to the output based
on in which Input cell each Source point is. If an array
of the same name exists both in Input's point and cell
data, only the one from the point data is
probed. This is the implementation of the
'Resample With Dataset' filter available in ParaView
version 5.1 and earlier.
This has been replaced by 'Resample With Dataset'. Please consider
using that instead.
This property specifies the dataset from which to obtain
probe values. The data attributes come from this dataset.
This property specifies the dataset whose geometry will
be used in determining positions to probe. The mesh comes from this
dataset.
When set the input's cell data arrays are shallow copied to the output.
When set the input's point data arrays are shallow copied to the output.
Set whether to pass the field-data arrays from the Input i.e. the input
providing the geometry to the output. On by default.
Set whether to compute the tolerance or to use a user provided
value. On by default.
Set the tolerance to use for
vtkDataSet::FindCell
Please use "Resample To Image" instead instead of "ImageResampling" filter.
This property specifies the dataset whose data will
be probed
How many linear resampling we want along each axis
Do we use input bounds or custom ones?
Custom probing bounds if needed
The
Stream Tracer filter generates streamlines in a vector
field from a collection of seed points. Production of
streamlines terminates if a streamline crosses the
exterior boundary of the input dataset
(ReasonForTermination=1). Other reasons for termination
include an initialization issue (ReasonForTermination=2),
computing an unexpected value (ReasonForTermination=3),
reached the Maximum Streamline Length input value
(ReasonForTermination=4), reached the Maximum Steps
input value (ReasonForTermination=5), and velocity was
lower than the Terminal Speed input value
(ReasonForTermination=6). This filter operates on any
type of dataset, provided it has point-centered vectors.
The output is polygonal data containing polylines.
This property specifies the input to the Stream Tracer
filter.
This property contains the name of the vector array from
which to generate streamlines.
This property determines which interpolator to use for
evaluating the velocity vector field. The first is faster though the
second is more robust in locating cells during streamline
integration.
Specify whether or not to compute surface
streamlines.
This property determines in which direction(s) a
streamline is generated.
This property determines which integrator (with
increasing accuracy) to use for creating streamlines.
This property specifies the unit for
Minimum/Initial/Maximum integration step size. The Length unit refers
to the arc length that a particle travels/advects within a single step.
The Cell Length unit represents the step size as a number of
cells.
This property specifies the initial integration step
size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4),
it is fixed (always equal to this initial value) throughout the
integration. For an adaptive integrator (Runge-Kutta 4-5), the actual
step size varies such that the numerical error is less than a specified
threshold.
When using the Runge-Kutta 4-5 integrator, this property
specifies the minimum integration step size.
When using the Runge-Kutta 4-5 integrator, this property
specifies the maximum integration step size.
This property specifies the maximum number of steps,
beyond which streamline integration is terminated.
This property specifies the maximum streamline length
(i.e., physical arc length), beyond which line integration is
terminated.
This property specifies the terminal speed, below which
particle advection/integration is terminated.
This property specifies the maximum error (for
Runge-Kutta 4-5) tolerated throughout streamline integration. The
Runge-Kutta 4-5 integrator tries to adjust the step size such that the
estimated error is less than this threshold.
Specify whether or not to compute
vorticity.
The value of this property determines how the seeds for
the streamlines will be generated.
This filter generates evenly spaced streamlines in a 2D
vector field from a start position. Production of
streamlines terminates if a streamline crosses the
exterior boundary of the input dataset
(ReasonForTermination=1), an initialization issue (ReasonForTermination=2),
computing an unexpected value (ReasonForTermination=3),
reached the Maximum Streamline Length input value
(ReasonForTermination=4), reached the Maximum Steps
input value (ReasonForTermination=5), velocity was
lower than the Terminal Speed input value
(ReasonForTermination=6), a streamline formed a loop
(ReasonForTermination=7), and the streamline was too close to
other streamlines (ReasonForTermination=8). This filter
operates on a 2D dataset aligned with plane XY with
point-centered vectors aligned with plane XY.
The output is polygonal data containing polylines.
This property specifies the input to the filter.
This property contains the name of the vector array from
which to generate streamlines.
This property determines which interpolator to use for
evaluating the velocity vector field. The first is faster though the
second is more robust in locating cells during streamline
integration.
This property determines which integrator (with
increasing accuracy) to use for creating streamlines.
This property specifies the unit for
Initial integration step size. The Length unit refers
to the arc length that a particle travels/advects within a single step.
The Cell Length unit represents the step size as a number of
cells.
This property specifies the initial integration step
size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4),
it is fixed (always equal to this initial value) throughout the
integration.
This property specifies the maximum number of steps,
beyond which streamline integration is terminated.
Specify the separating distance between
streamlines expressed in IntegrationStepUnit.
Specifies SeparatingDistanceRatio. If streamlines
get closer than SeparatingDistance * SeparatingDistanceRatio to
other streamlines integration stops.
Loops are considered closed if the have two points at
distance less than this. This is expressed in IntegrationStepUnit.
Specify the starting point (seed) of the first streamline
in the global coordinate system.
This property specifies the terminal speed, below which
particle advection/integration is terminated.
Specify whether or not to compute
vorticity.
The
Stream Tracer With Custom Source filter generates streamlines
in a vector field from a collection of seed points. Production
of streamlines terminates if a streamline crosses the
exterior boundary of the input dataset
(ReasonForTermination=1). Other reasons for termination
include an initialization issue (ReasonForTermination=2),
computing an unexpected value (ReasonForTermination=3),
reached the Maximum Streamline Length input value
(ReasonForTermination=4), reached the Maximum Steps
input value (ReasonForTermination=5), and velocity was
lower than the Terminal Speed input value
(ReasonForTermination=6). This filter operates on any
type of dataset, provided it has point-centered vectors.
The output is polygonal data containing polylines.
This filter takes a Source input
that provides the seed points.
This property specifies the input to the Stream Tracer
filter.
This property contains the name of the vector array from
which to generate streamlines.
This property determines which interpolator to use for
evaluating the velocity vector field. The first is faster though the
second is more robust in locating cells during streamline
integration.
Specify whether or not to compute surface
streamlines.
This property determines in which direction(s) a
streamline is generated.
This property determines which integrator (with
increasing accuracy) to use for creating streamlines.
This property specifies the unit for
Minimum/Initial/Maximum integration step size. The Length unit refers
to the arc length that a particle travels/advects within a single step.
The Cell Length unit represents the step size as a number of
cells.
This property specifies the initial integration step
size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4),
it is fixed (always equal to this initial value) throughout the
integration. For an adaptive integrator (Runge-Kutta 4-5), the actual
step size varies such that the numerical error is less than a specified
threshold.
When using the Runge-Kutta 4-5 integrator, this property
specifies the minimum integration step size.
When using the Runge-Kutta 4-5 integrator, this property
specifies the maximum integration step size.
This property specifies the maximum number of steps,
beyond which streamline integration is terminated.
This property specifies the maximum streamline length
(i.e., physical arc length), beyond which line integration is
terminated.
This property specifies the terminal speed, below which
particle advection/integration is terminated.
This property specifies the maximum error (for
Runge-Kutta 4-5) tolerated throughout streamline integration. The
Runge-Kutta 4-5 integrator tries to adjust the step size such that the
estimated error is less than this threshold.
Specify whether or not to compute
vorticity.
This property specifies the input used to obtain the
seed points.
The Temporal Cache
can be used to save multiple copies of a data set at
different time steps to prevent thrashing in the pipeline
caused by downstream filters that adjust the requested
time step. For example, assume that there is a downstream
Temporal Interpolator filter. This filter will (usually)
request two time steps from the upstream filters, which in
turn (usually) causes the upstream filters to run twice,
once for each time step. The next time the interpolator
requests the same two time steps, they might force the
upstream filters to re-evaluate the same two time steps.
The Temporal Cache can keep copies of both of these time
steps and provide the requested data without having to run
upstream filters.
This property specifies the input of the Temporal Cache
filter.
The cache size determines the number of time steps that
can be cached at one time. The maximum number is 10. The minimum is 2
(since it makes little sense to cache less than that).
The Temporal
Interpolator converts data that is defined at discrete
time steps to one that is defined over a continuum of time
by linearly interpolating the data's field data between
two adjacent time steps. The interpolated values are a
simple approximation and should not be interpreted as
anything more. The Temporal Interpolator assumes that the
topology between adjacent time steps does not
change.
This property specifies the input of the Temporal
Interpolator.
If Discrete Time Step Interval is set to 0, then the
Temporal Interpolator will provide a continuous region of time on its
output. If set to anything else, then the output will define a finite
set of time points on its output, each spaced by the Discrete Time Step
Interval. The output will have (time range)/(discrete time step
interval) time steps. (Note that the time range is defined by the time
range of the data of the input filter, which may be different from
other pipeline objects or the range defined in the animation
inspector.) This is a useful option to use if you have a dataset with
one missing time step and wish to 'fill in' the missing data with an
interpolated value from the steps on either side.
This file modifies the time range or time steps of the
data without changing the data itself. The data is not
resampled by this filter, only the information
accompanying the data is modified.
This property specifies the input of the
filter.
Determine which time step to snap to.
The Temporal
Shift Scale filter linearly transforms the time values of
a pipeline object by applying a shift and then scale.
Given a data at time t on the input, it will be
transformed to time t*Shift + Scale on the output.
Inversely, if this filter has a request for time t, it
will request time (t-Shift)/Scale on its
input.
The input to the Temporal Shift Scale
filter.
Apply a translation to the data before scaling. To
convert T{5,100} to T{0,1} use Preshift=-5, Scale=1/95, PostShift=0 To
convert T{5,105} to T{5,10} use Preshift=-5, Scale=5/100,
PostShift=5
The amount of time the input is shifted.
The factor by which the input time is
scaled.
If Periodic is true, requests for time will be wrapped
around so that the source appears to be a periodic time source. If data
exists for times {0,N-1}, setting periodic to true will cause time 0 to
be produced when time N, 2N, 2N etc is requested. This effectively
gives the source the ability to generate time data indefinitely in a
loop. When combined with Shift/Scale, the time becomes periodic in the
shifted and scaled time frame of reference. Note: Since the input time
may not start at zero, the wrapping of time from the end of one period
to the start of the next, will subtract the initial time - a source
with T{5..6} repeated periodically will have output time {5..6..7..8}
etc.
If Periodic time is enabled, this flag determines if the
last time step is the same as the first. If PeriodicEndCorrection is
true, then it is assumed that the input data goes from 0-1 (or whatever
scaled/shifted actual time) and time 1 is the same as time 0 so that
steps will be 0,1,2,3...N,1,2,3...N,1,2,3 where step N is the same as 0
and step 0 is not repeated. When this flag is false the data is assumed
to be literal and output is of the form 0,1,2,3...N,0,1,2,3... By
default this flag is ON
If Periodic time is enabled, this controls how many time
periods time is reported for. A filter cannot output an infinite number
of time steps and therefore a finite number of periods is generated
when reporting time.
Given an input
that changes over time, vtkTemporalStatistics looks at the
data for each time step and computes some statistical
information of how a point or cell variable changes over
time. For example, vtkTemporalStatistics can compute the
average value of "pressure" over time of each point. Note
that this filter will require the upstream filter to be
run on every time step that it reports that it can
compute. This may be a time consuming operation.
vtkTemporalStatistics ignores the temporal spacing. Each
timestep will be weighted the same regardless of how long
of an interval it is to the next timestep. Thus, the
average statistic may be quite different from an
integration of the variable if the time spacing
varies.
Set the input to the Temporal Statistics
filter.
Compute the average of each point and cell variable over
time.
Compute the minimum of each point and cell variable over
time.
Compute the maximum of each point and cell variable over
time.
Compute the standard deviation of each point and cell
variable over time.
**Temporal Statistics** filter needs to process all timesteps
available in your dataset and can potentially take a long time to complete.
Do you want to continue?
The Particle Trace filter generates pathlines in a vector
field from a collection of seed points. The vector field
used is selected from the Vectors menu, so the input data
set is required to have point-centered vectors. The Seed
portion of the interface allows you to select whether the
seed points for this integration lie in a point cloud or
along a line. Depending on which is selected, the
appropriate 3D widget (point or line widget) is displayed
along with traditional user interface controls for
positioning the point cloud or line within the data set.
Instructions for using the 3D widgets and the
corresponding manual controls can be found in section 7.4.
This filter operates on any type of data set, provided it
has point-centered vectors. The output is polygonal data
containing polylines. This filter is available on the
Toolbar.
Specify which is the Input of the StreamTracer
filter.
Specify the seed dataset. Typically from where the
vector field integration should begin. Usually a point/radius or a line
with a given resolution.
If the input seeds are not changing, then this
can be set to 1 to avoid having to do a repeated grid search
that would return the exact same result.
If the input grid is not changing, then this
can be set to 1 to avoid having to create cell locators for
each update.
When animating particles, it is nice to inject new ones
every Nth step to produce a continuous flow. Setting
ForceReinjectionEveryNSteps to a non zero value will cause the particle
source to reinject particles every Nth step even if it is otherwise
unchanged. Note that if the particle source is also animated, this flag
will be redundant as the particles will be reinjected whenever the
source changes anyway
Specify which vector array should be used for the
integration through that filter.
Compute vorticity and angular rotation of particles as
they progress
The Particle Trace filter generates pathlines in a vector
field from a collection of seed points. The vector field
used is selected from the Vectors menu, so the input data
set is required to have point-centered vectors. The Seed
portion of the interface allows you to select whether the
seed points for this integration lie in a point cloud or
along a line. Depending on which is selected, the
appropriate 3D widget (point or line widget) is displayed
along with traditional user interface controls for
positioning the point cloud or line within the data set.
Instructions for using the 3D widgets and the
corresponding manual controls can be found in section 7.4.
This filter operates on any type of data set, provided it
has point-centered vectors. The output is polygonal data
containing polylines. This filter is available on the
Toolbar.
Specify which is the Input of the StreamTracer
filter.
Specify the seed dataset. Typically from where the
vector field integration should begin. Usually a point/radius or a line
with a given resolution.
Setting TerminationTime to a positive value will cause
particles to terminate when the time is reached. The units of time
should be consistent with the primary time variable.
When animating particles, it is nice to inject new ones
every Nth step to produce a continuous flow. Setting
ForceReinjectionEveryNSteps to a non zero value will cause the particle
source to reinject particles every Nth step even if it is otherwise
unchanged. Note that if the particle source is also animated, this flag
will be redundant as the particles will be reinjected whenever the
source changes anyway
If the input seeds are not changing, then this
can be set to 1 to avoid having to do a repeated grid search
that would return the exact same result.
If the input grid is not changing, then this
can be set to 1 to avoid having to create cell locators for
each update.
Specify which vector array should be used for the
integration through that filter.
Compute vorticity and angular rotation of particles as
they progress
This property specifies the terminal speed, below which
particle advection/integration is terminated.
The Particle Trace filter generates pathlines in a vector
field from a collection of seed points. The vector field
used is selected from the Vectors menu, so the input data
set is required to have point-centered vectors. The Seed
portion of the interface allows you to select whether the
seed points for this integration lie in a point cloud or
along a line. Depending on which is selected, the
appropriate 3D widget (point or line widget) is displayed
along with traditional user interface controls for
positioning the point cloud or line within the data set.
Instructions for using the 3D widgets and the
corresponding manual controls can be found in section 7.4.
This filter operates on any type of data set, provided it
has point-centered vectors. The output is polygonal data
containing polylines. This filter is available on the
Toolbar.
Specify which is the Input of the StreamTracer
filter.
If the input seeds are not changing, then this
can be set to 1 to avoid having to do a repeated grid search
that would return the exact same result.
If the input grid is not changing, then this
can be set to 1 to avoid having to create cell locators for
each update.
Specify the seed dataset. Typically from where the
vector field integration should begin. Usually a point/radius or a line
with a given resolution.
Setting TerminationTime to a positive value will cause
particles to terminate when the time is reached. The units of time
should be consistent with the primary time variable.
When animating particles, it is nice to inject new ones
every Nth step to produce a continuous flow. Setting
ForceReinjectionEveryNSteps to a non zero value will cause the particle
source to reinject particles every Nth step even if it is otherwise
unchanged. Note that if the particle source is also animated, this flag
will be redundant as the particles will be reinjected whenever the
source changes anyway
Specify which vector array should be used for the
integration through that filter.
Compute vorticity and angular rotation of particles as
they progress
Prevents cache from getting reset so that new computation
always start from previous results.
Particle Pathlines takes any dataset as input, it extracts the
point locations of all cells over time to build up a polyline
trail. The point number (index) is used as the 'key' if the points
are randomly changing their respective order in the points list,
then you should specify a scalar that represents the unique
ID. This is intended to handle the output of a filter such as the
TemporalStreamTracer.
The input cells to create pathlines for.
Set a second input, which is a selection. Particles with the same
Id in the selection as the primary input will be chosen for
pathlines Note that you must have the same IdChannelArray in the
selection as the input
Set the number of particles to track as a ratio of the input.
Example: setting MaskPoints to 10 will track every 10th point.
If the Particles being traced animate for a long time, the trails
or traces will become long and stringy. Setting the
MaxTraceTimeLength will limit how much of the trace is
displayed. Tracks longer then the Max will disappear and the
trace will appear like a snake of fixed length which progresses
as the particle moves. This length is given with respect to
timesteps.
If a particle disappears from one end of a simulation and
reappears on the other side, the track left will be
unrepresentative. Set a MaxStepDistance{x,y,z} which acts as a
threshold above which if a step occurs larger than the value (for
the dimension), the track will be dropped and restarted after the
step. (ie the part before the wrap around will be dropped and the
newer part kept).
Specify the name of a scalar array which will be used to fetch
the index of each point. This is necessary only if the particles
change position (Id order) on each time step. The Id can be used
to identify particles at each step and hence track them properly.
If this array is set to "Global or Local IDs", the global point
ids are used if they exist or the point index is otherwise.
The Outline filter
generates an outline of the outside edges of the input
dataset, rather than the dataset's bounding box. This
filter operates on structured grid datasets and produces
polygonal output.
This property specifies the input to the outline
(curvilinear) filter.
The Generic Clip filter cuts away a portion of the input
data set using a plane, a sphere, a box, or a scalar
value. The menu in the Clip Function portion of the
interface allows the user to select which implicit
function to use or whether to clip using a scalar value.
Making this selection loads the appropriate user
interface. For the implicit functions, the appropriate 3D
widget (plane, sphere, or box) is also displayed. The use
of these 3D widgets, including their user interface
components, is discussed in section 7.4. If an implicit
function is selected, the clip filter returns that portion
of the input data set that lies inside the function. If
Scalars is selected, then the user must specify a scalar
array to clip according to. The clip filter will return
the portions of the data set whose value in the selected
Scalars array is larger than the Clip value. Regardless of
the selection from the Clip Function menu, if the Inside
Out option is checked, the opposite portions of the data
set will be returned. This filter operates on all types of
data sets, and it returns unstructured grid data on
output.
Set the input to the Generic Clip
filter.
Set the parameters of the clip function.
If clipping with scalars, this property specifies the
name of the scalar array on which to perform the clip
operation.
Choose which portion of the dataset should be clipped
away.
If clipping with a scalar array, choose the clipping
value.
The Generic
Contour filter computes isolines or isosurfaces using a
selected point-centered scalar array. The available scalar
arrays are listed in the Scalars menu. The scalar range of
the selected array will be displayed. The interface for
adding contour values is very similar to the one for
selecting cut offsets (in the Cut filter). To add a single
contour value, select the value from the New Value slider
in the Add value portion of the interface and click the
Add button, or press Enter. To instead add several evenly
spaced contours, use the controls in the Generate range of
values section. Select the number of contour values to
generate using the Number of Values slider. The Range
slider controls the interval in which to generate the
contour values. Once the number of values and range have
been selected, click the Generate button. The new values
will be added to the Contour Values list. To delete a
value from the Contour Values list, select the value and
click the Delete button. (If no value is selected, the
last value in the list will be removed.) Clicking the
Delete All button removes all the values in the list. If
no values are in the Contour Values list when Accept is
pressed, the current value of the New Value slider will be
used. In addition to selecting contour values, you can
also select additional computations to perform. If any of
Compute Normals, Compute Gradients, or Compute Scalars is
selected, the appropriate computation will be performed,
and a corresponding point-centered array will be added to
the output. The Generic Contour filter operates on a
generic data set, but the input is required to have at
least one point-centered scalar (single-component) array.
The output of this filter is polygonal.
Set the input to the Generic Contour
filter.
This property specifies the name of the scalar array
from which the contour filter will compute isolines and/or
isosurfaces.
Select whether to compute normals.
Select whether to compute gradients.
Select whether to compute scalars.
This property specifies the values at which to compute
isosurfaces/isolines and also the number of such
values.
This property specifies an incremental point locator for
merging duplicate / coincident points.
The
Generic Cut filter extracts the portion of the input data
set that lies along the specified plane or sphere. From
the Cut Function menu, you can select whether cutting will
be performed with a plane or a sphere. The appropriate 3D
widget (plane widget or sphere widget) will be displayed.
The parameters of the cut function can be specified
interactively using the 3D widget or manually using the
traditional user interface controls. Instructions for
using these 3D widgets and their corresponding user
interfaces are found in section 7.4. By default, the cut
lies on the specified plane or sphere. Using the Cut
Offset Values portion of the interface, it is also
possible to cut the data set at some offset from the
original cut function. The Cut Offset Values are in the
spatial units of the data set. To add a single offset,
select the value from the New Value slider in the Add
value portion of the interface and click the Add button,
or press Enter. To instead add several evenly spaced
offsets, use the controls in the Generate range of values
section. Select the number of offsets to generate using
the Number of Values slider. The Range slider controls the
interval in which to generate the offsets. Once the number
of values and range have been selected, click the Generate
button. The new offsets will be added to the Offset Values
list. To delete a value from the Cut Offset Values list,
select the value and click the Delete button. (If no value
is selected, the last value in the list will be removed.)
Clicking the Delete All button removes all the values in
the list. The Generic Cut filter takes a generic dataset
as input. Use the Input menu to choose a data set to cut.
The output of this filter is polygonal
data.
Set the input to the Generic Cut filter.
Set the parameters to the implicit function used for
cutting.
The values in this property specify a list of current
offset values. This can be used to create multiple slices with
different centers. Each entry represents a new slice with its center
shifted by the offset value.
Extract geometry from a higher-order
dataset.
Set the input to the Generic Geometry
Filter.
Select whether to forward original ids.
The Generic Outline
filter generates an axis-aligned bounding box for the
input data set. The Input menu specifies the data set for
which to create a bounding box. This filter operates on
generic data sets and produces polygonal
output.
Set the input to the Generic Outline
filter.
The
Generic Stream Tracer filter generates streamlines in a
vector field from a collection of seed points. The vector
field used is selected from the Vectors menu, so the input
data set is required to have point-centered vectors. The
Seed portion of the interface allows you to select whether
the seed points for this integration lie in a point cloud
or along a line. Depending on which is selected, the
appropriate 3D widget (point or line widget) is displayed
along with traditional user interface controls for
positioning the point cloud or line within the data set.
Instructions for using the 3D widgets and the
corresponding manual controls can be found in section 7.4.
The Max. Propagation entry box allows you to specify the
maximum length of the streamlines. From the Max.
Propagation menu, you can select the units to be either
Time (the time a particle would travel with steady flow)
or Length (in the data set's spatial coordinates). The
Init. Step Len. menu and entry specify the initial step
size for integration. (For non-adaptive integrators,
Runge-Kutta 2 and 4, the initial step size is used
throughout the integration.) The menu allows you to
specify the units. Time and Length have the same meaning
as for Max. Propagation. Cell Length specifies the step
length as a number of cells. The Integration Direction
menu determines in which direction(s) the stream trace
will be generated: FORWARD, BACKWARD, or BOTH. The
Integrator Type section of the interface determines which
calculation to use for integration: Runge-Kutta 2,
Runge-Kutta 4, or Runge-Kutta 4-5. If Runge-Kutta 4-5 is
selected, controls are displayed for specifying the
minimum and maximum step length and the maximum error. The
controls for specifying Min. Step Len. and Max. Step Len.
are the same as those for Init. Step Len. The Runge-Kutta
4-5 integrator tries to choose the step size so that the
estimated error is less than the value of the Maximum
Error entry. If the integration takes more than Max. Steps
to complete, if the speed goes below Term. Speed, if Max.
Propagation is reached, or if a boundary of the input data
set is crossed, integration terminates. This filter
operates on any type of data set, provided it has
point-centered vectors. The output is polygonal data
containing polylines.
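The fixed-step Runge-Kutta 4 scheme named above can be sketched as follows. This is a minimal Python illustration, not ParaView's implementation; `velocity` is a hypothetical sampler of the point-centered vector field and `h` is the step length in the chosen units:

```python
# One fixed-step Runge-Kutta 4 streamline step (hypothetical helper,
# not the filter's code). velocity(p) samples the vector field at p.
def rk4_step(velocity, p, h):
    k1 = velocity(p)
    k2 = velocity([p[i] + 0.5 * h * k1[i] for i in range(3)])
    k3 = velocity([p[i] + 0.5 * h * k2[i] for i in range(3)])
    k4 = velocity([p[i] + h * k3[i] for i in range(3)])
    # Weighted average of the four slope estimates.
    return [p[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

# Example: a uniform flow (1, 0, 0) advects a seed point along x.
p = rk4_step(lambda p: (1.0, 0.0, 0.0), [0.0, 0.0, 0.0], 0.5)
```

The adaptive Runge-Kutta 4-5 integrator repeats such steps with a step size chosen so the estimated error stays below the Maximum Error setting, subject to the minimum and maximum step lengths.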
Set the input to the Generic Stream Tracer
filter.
The value of this property determines how the seeds for
the streamlines will be generated.
This property contains the name of the vector array from
which to generate streamlines.
Specify the maximum streamline length.
Specify the initial integration step.
This property determines in which direction(s) a
streamline is generated.
This property determines which integrator (with
increasing accuracy) to use for creating streamlines.
Set the maximum error allowed in the integration. The
meaning of this value depends on the integrator chosen.
Specify the minimum integration step.
Choose the unit to use for the integration
step.
Specify the maximum integration step.
Specify the maximum number of steps used in the
integration.
If at any point the speed is below this value, the
integration is terminated.
Tessellate
a higher-order dataset.
Set the input to the Generic Tessellator
filter.
Groups multiple datasets to create a multiblock
dataset
This property indicates the inputs to the Group
Datasets filter.
Groups all the time steps in the input into a collection with no time information.
Each timestep will become one block of the output.
This property specifies the input dataset.
The Level
Scalars filter uses colors to show levels of a multiblock
dataset.
This property specifies the input to the Level Scalars
filter.
The Level
Scalars filter uses colors to show levels of a
hierarchical dataset.
This property specifies the input to the Level Scalars
filter.
The Level
Scalars filter uses colors to show levels of a
hierarchical dataset.
This property specifies the input to the Level Scalars
filter.
Set the input to the Geometry Filter.
Toggle whether to generate faces containing triangle
strips. This should render faster and use less memory, but no cell data
is copied.
This makes UseStrips call Modified() after changing its
setting to ensure that the filter's output is immediately
changed.
Toggle whether to generate an outline or a
surface.
Nonlinear faces are approximated with flat polygons.
This parameter controls how many times to subdivide nonlinear surface
cells. Higher subdivisions generate closer approximations but take more
memory and rendering time. Subdivision is recursive, so the number of
output polygons can grow exponentially with this
parameter.
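The exponential growth mentioned above can be estimated directly. Assuming each subdivision pass splits a surface triangle into four (the usual midpoint scheme; actual counts depend on the filter's tessellation rules):

```python
# Rough output-size estimate for recursive subdivision, assuming each
# pass splits a triangle into four. Hypothetical helper for illustration.
def approx_output_triangles(input_triangles, subdivisions):
    return input_triangles * 4 ** subdivisions

n = approx_output_triangles(100, 3)  # 100 input triangles, 3 passes
```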
If on, the output polygonal dataset will have a cell data
array that holds the cell index of the original 3D cell that produced
each output cell. This is useful for cell picking.
If on, the output polygonal dataset will have a
point data array that holds the point index of the original 3D vertex
that produced each output vertex. This is useful for
picking.
The Image
Data to Point Set filter takes an image data (uniform
rectilinear grid) object and outputs an equivalent structured
grid (which is a type of point set). This brings the data to a
broader category of data storage but only adds a small amount of
overhead. This filter can be helpful in applying filters that
expect or manipulate point coordinates.
The Rectilinear Grid to Point Set
filter takes a rectilinear grid object and outputs an
equivalent structured grid (which is a type of point set). This
brings the data to a broader category of data storage but only
adds a small amount of overhead. This filter can be helpful in
applying filters that expect or manipulate point
coordinates.
Removes ghost
cells and point data and cell data ghost arrays.
This property specifies the input to the remove ghost
information filter.
Set the input to the Flatten Filter.
Set the input to the Ordered Composite Distributor
filter.
Toggle whether to pass the data through without
compositing.
Set the vtkPKdTree to distribute with.
When not empty, the output will be converted to the
given type.
Set the input to the MPI Move Data
filter.
Specify how the data is to be
redistributed.
Specify the type of the dataset.
Set the input to the Client Server Move Data
filter.
Set the input to the Reduction filter.
Set the algorithm that takes multiple inputs and
produces a single reduced output.
Set the reduction mode. Reducing all data to one
processor means that the destination process will have all data while
every other process still keeps its own data. Moving all to one processor means
that the destination process will have all data while the other processes have no data
anymore. Reducing all data to all processors is self-explanatory.
Set the process to reduce to.
If set to a non-negative value, then produce results
using only the node Id specified.
If true, the filter will generate vtkOriginalProcessIds
arrays indicating the process id on which the cell/point was
generated.
Set the input to the Reduction filter.
Set the algorithm that runs on each node in
parallel.
Set the algorithm that takes multiple inputs and
produces a single reduced output.
If set to a non-negative value, then produce results
using only the node Id specified.
If true, the filter will generate vtkOriginalProcessIds
arrays indicating the process id on which the cell/point was
generated.
This property specifies the name of the scalar array
to color by.
This property specifies the input to the
filter.
This property specifies the input to the
filter.
Choose arrays whose entries will be used to form
observations for statistical analysis.
This flag indicates if a column must be inserted
at index 0 with the names (ids) of the input columns.
This flag indicates if the output column must be
named using the names listed in the index 0 column.
This flag indicates if the sub-table must be
effectively transposed or not.
This
filter creates a scatter plot from a
dataset.
This property specifies the input to the
filter.
RectilinearGridGeometryFilter is a filter that extracts
geometry from a rectilinear grid. By specifying
appropriate i-j-k indices, it is possible to extract a
point, a curve, a surface, or a "volume". The volume is
actually an (n x m x o) region of points. The extent
specification is zero-offset. That is, the first k-plane
in a 50x50x50 rectilinear grid is given by (0,49, 0,49,
0,0).
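The zero-offset extent convention described above is inclusive on both ends: an extent is (imin, imax, jmin, jmax, kmin, kmax). A quick sanity check of the point count it selects (hypothetical helper for illustration):

```python
# Number of points selected by an inclusive, zero-offset extent
# (imin, imax, jmin, jmax, kmin, kmax).
def extent_point_count(imin, imax, jmin, jmax, kmin, kmax):
    return (imax - imin + 1) * (jmax - jmin + 1) * (kmax - kmin + 1)

# First k-plane of a 50x50x50 grid: 2500 points.
n = extent_point_count(0, 49, 0, 49, 0, 0)
```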
Set the input to the Rectilinear Grid Geometry
filter.
TextureMapToPlane is a filter that generates 2D texture
coordinates by mapping input dataset points onto a plane.
The plane is generated automatically using a least
squares method.
Set the input to the Texture Map to Plane
filter.
This property specifies the 3D coordinates for the
origin of the plane. Set all to zero if you want to use automatic
generation.
This property specifies the 3D coordinates for
Point1 of the plane. Set all to zero if you want to use automatic
generation.
This property specifies the 3D coordinates for
Point2 of the plane. Set all to zero if you want to use automatic
generation.
If set the plane values will be automatically generated.
Note that for this to work all the Origin, Point1 and Point2 must all
be set to zero.
This is a filter that generates 2D texture coordinates by
mapping input dataset points onto a sphere. The sphere is
generated automatically by computing its center, i.e. the
averaged point coordinates. Note that the generated
texture coordinates range between (0,1). The s-coordinate
lies in the angular direction around the z-axis, measured
counter-clockwise from the x-axis. The t-coordinate lies
in the angular direction measured down from the north pole
towards the south pole.
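The (s, t) mapping described above can be sketched as follows, for a point already translated so the sphere center is at the origin. This is a hypothetical helper illustrating the stated convention, not the filter's exact code:

```python
import math

# s: angle around z, counter-clockwise from +x, scaled to [0, 1).
# t: angle down from the north pole toward the south pole, scaled to [0, 1].
def sphere_tex_coords(x, y, z):
    theta = math.atan2(y, x)
    s = (theta % (2.0 * math.pi)) / (2.0 * math.pi)
    r = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(z / r)  # 0 at the north pole, pi at the south pole
    t = phi / math.pi
    return s, t

# A point on the equator along +x lands at s = 0, t = 0.5.
s, t = sphere_tex_coords(1.0, 0.0, 0.0)
```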
Set the input to the Texture Map to Sphere
filter.
Control how the texture coordinates are generated. If
Prevent Seam is set, the s-coordinate ranges from 0->1 and 1->0
corresponding to the theta angle variation between 0->180 and
180->0 degrees. Otherwise, the s-coordinate ranges from 0->1
between 0->360 degrees.
This is a filter that generates 2D texture coordinates by
mapping input dataset points onto a cylinder. The cylinder
is generated automatically by computing its axis. Note
that the generated s-coordinate ranges from (0-1)
(corresponding to an angle of 0->360 around the axis),
while the mapping of the t-coordinate is controlled by the
projection of points along the axis.
Set the input to the Texture Map to Cylinder
filter.
Control how the texture coordinates are generated. If
Prevent Seam is set, the s-coordinate ranges from 0->1 and 1->0
corresponding to the theta angle variation between 0->180 and
180->0 degrees. Otherwise, the s-coordinate ranges from 0->1
between 0->360 degrees.
When set, the filter will try to determine the size and orientation of the cylinder
used for texture mapping using data bounds.
When **GenerateCylinderAutomatically** is not set, specify the first point defining
the axis of the cylinder through its center.
When **GenerateCylinderAutomatically** is not set, specify the second point defining
the axis of the cylinder through its center.
Set the input to the Polyline to Rectilinear Grid
filter.
Set the input to the Min Max filter.
Select whether to perform a min, max, or sum operation
on the data.
The Annotate Time
filter can be used to show the data time in a text
annotation.
This property specifies the input dataset for which to
display the time.
The value of this property is a format string used to
display the input time. The format string is specified using printf
style.
The amount of time the input is shifted (after
scaling).
The factor by which the input time is
scaled.
The Time Step Progress Bar
filter can be used to show the position of the current time step/value
relative to the number of time steps/the data time range in a progress bar.
This property specifies the input dataset for which to
display the time step.
CellDerivatives is a filter that computes derivatives of
scalars and vectors at the center of cells. You can choose
to generate different output including the scalar gradient
(a vector), computed tensor vorticity (a vector), gradient
of input vectors (a tensor), and strain matrix of the
input vectors (a tensor); or you may choose to pass data
through to the output.
This property specifies the input to the
filter.
This property indicates the name of the scalar array to
differentiate.
This property indicates the name of the vector array to
differentiate.
This property controls how the filter works to generate
vector cell data. You can choose to compute the gradient of the input
scalars, or extract the vorticity of the computed vector gradient
tensor. By default, the filter will take the gradient of the input
scalar data.
This property controls how the filter works to generate
tensor cell data. You can choose to compute the gradient of the input
vectors, or compute the strain tensor of the vector gradient tensor. By
default, the filter will take the gradient of the vector data to
construct a tensor.
Converts a selection from one type to
another.
Set the vtkDataObject input used to convert the
selection.
Set the selection to convert.
Set the ContentType for the output.
vtkCompositeDataToUnstructuredGridFilter appends all vtkDataSet leaves of
the input composite dataset to a single unstructured grid. The subtree to
be combined can be chosen using the SubTreeCompositeIndex. If the
SubTreeCompositeIndex is a leaf node, then no appending is
required.
Set the input composite dataset.
Select the index of the subtree to be appended. For now,
this property is internal.
Filter
computing surface normals.
The
TableToPolyData filter converts a vtkTable to a set of
points in a vtkPolyData. One must specify the columns in
the input table to use as the X, Y and Z coordinates for
the points in the output.
This property specifies the input.
This property specifies which data array is going to be
used as the X coordinate in the generated polydata
dataset.
This property specifies which data array is going to be
used as the Y coordinate in the generated polydata
dataset.
This property specifies which data array is going to be
used as the Z coordinate in the generated polydata
dataset.
Specify whether the points of the polydata are 3D or 2D.
If this is set to true then the Z Column will be ignored and the z
value of each point on the polydata will be set to 0. By default this
will be off.
Allow user to keep columns specified as X,Y,Z as Data
arrays. By default this will be off.
The
TableToStructuredGrid filter converts a vtkTable to a
vtkStructuredGrid. One must specify the columns in the
input table to use as the X, Y and Z coordinates for the
points in the output, as well as the whole
extent.
This property specifies the input.
This property specifies which data array is going to be
used as the X coordinate in the generated polydata
dataset.
This property specifies which data array is going to be
used as the Y coordinate in the generated polydata
dataset.
This property specifies which data array is going to be
used as the Z coordinate in the generated polydata
dataset.
Performs the Fast Fourier Transform on the columns of a
table.
This property specifies the input of the
filter.
Extracts the data of a selection (e.g. points or cells)
over time, takes the FFT of them, and plots
them.
This is a
filter that produces a vtkTable from the chosen attribute
in the input data object. This filter can accept composite
datasets. If the input is a composite dataset, the output
is a multiblock with vtkTable leaves.
This property specifies the input of the
filter.
Select the attribute data to pass.
It is possible for this filter to add additional
meta-data to the field data, such as point coordinates (when point
attributes are selected and the input is a point set) or structured
coordinates, etc. To enable this addition of extra information, turn
this flag on. Off by default.
This filter
prepares arbitrary data to be plotted in any of the plots. By default the
data is shown in an XY line plot.
The input.
This filter either computes a statistical model of a dataset or takes
such a model as its second input. Then, the model (however it is
obtained) may optionally be used to assess the input dataset. This filter
computes contingency tables between pairs of attributes. This result is a
tabular bivariate probability distribution which serves as a
Bayesian-style prior model. Data is assessed by computing <ul>
<li> the probability of observing both variables simultaneously;
<li> the probability of each variable conditioned on the other (the
two values need not be identical); and <li> the pointwise mutual
information (PMI). </ul> Finally, the summary statistics include
the information entropy of the observations.
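The three assessed quantities listed above follow directly from the contingency table. A minimal Python sketch of the definitions (not the filter's implementation), using a small list of paired observations:

```python
import math
from collections import Counter

# Joint probability, both conditionals, and pointwise mutual
# information for one observed pair (x, y).
def assess(pairs, x, y):
    n = len(pairs)
    pxy = Counter(pairs)[(x, y)] / n
    px = sum(1 for a, _ in pairs if a == x) / n
    py = sum(1 for _, b in pairs if b == y) / n
    return {
        "joint": pxy,
        "x_given_y": pxy / py,
        "y_given_x": pxy / px,
        "pmi": math.log(pxy / (px * py)),
    }

obs = [("a", 1), ("a", 1), ("a", 2), ("b", 2)]
r = assess(obs, "a", 1)
```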
The input to the filter. Arrays from this dataset will
be used for computing statistics and/or assessed by a statistical
model.
A previously-calculated model with which to assess a
separate dataset. This input is optional.
Specify which type of field data the arrays will be
drawn from.
Choose arrays whose entries will be used to form
observations for statistical analysis.
Specify the task to be performed: modeling and/or
assessment. <ol> <li> "Detailed model of input data,"
creates a set of output tables containing a calculated statistical
model of the <b>entire</b> input dataset;</li>
<li> "Model a subset of the data," creates an output table (or
tables) summarizing a <b>randomly-chosen subset</b> of the
input dataset;</li> <li> "Assess the data with a model,"
adds attributes to the first input dataset using a model provided on
the second input port; and</li> <li> "Model and assess the
same data," is really just operations 2 and 3 above applied to the same
input dataset. The model is first trained using a fraction of the input
data and then the entire dataset is assessed using that
model.</li> </ol> When the task includes creating a model
(i.e., tasks 2, and 4), you may adjust the fraction of the input
dataset used for training. You should avoid using a large fraction of
the input data for training as you will then not be able to detect
overfitting. The <i>Training fraction</i> setting will be
ignored for tasks 1 and 3.
Specify the fraction of values from the input dataset to
be used for model fitting. The exact set of values is chosen at random
from the dataset.
This filter either computes a statistical model of a dataset or takes
such a model as its second input. Then, the model (however it is
obtained) may optionally be used to assess the input dataset.<p>
This filter computes the min, max, mean, raw moments M2 through M4,
standard deviation, skewness, and kurtosis for each array you
select.<p> The model is simply a univariate Gaussian distribution
with the mean and standard deviation provided. Data is assessed using
this model by detrending the data (i.e., subtracting the mean) and then
dividing by the standard deviation. Thus the assessment is an array whose
entries are the number of standard deviations from the mean that each
input point lies.
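The assessment described above amounts to a z-score per input value. A minimal sketch (whether the filter uses the population or sample standard deviation is a detail; this illustration uses the population form):

```python
import statistics

# Detrend by the mean, then divide by the standard deviation, so each
# result is a signed number of standard deviations from the mean.
def assess_zscores(values):
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)  # population standard deviation
    return [(v - mu) / sigma for v in values]

# This sample has mean 5 and population standard deviation 2.
z = assess_zscores([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

The signed/unsigned option below simply controls whether the absolute value of these deviations is reported.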
The input to the filter. Arrays from this dataset will
be used for computing statistics and/or assessed by a statistical
model.
A previously-calculated model with which to assess a
separate dataset. This input is optional.
Specify which type of field data the arrays will be
drawn from.
Choose arrays whose entries will be used to form
observations for statistical analysis.
Specify the task to be performed: modeling and/or
assessment. <ol> <li> "Detailed model of input data,"
creates a set of output tables containing a calculated statistical
model of the <b>entire</b> input dataset;</li>
<li> "Model a subset of the data," creates an output table (or
tables) summarizing a <b>randomly-chosen subset</b> of the
input dataset;</li> <li> "Assess the data with a model,"
adds attributes to the first input dataset using a model provided on
the second input port; and</li> <li> "Model and assess the
same data," is really just operations 2 and 3 above applied to the same
input dataset. The model is first trained using a fraction of the input
data and then the entire dataset is assessed using that
model.</li> </ol> When the task includes creating a model
(i.e., tasks 2, and 4), you may adjust the fraction of the input
dataset used for training. You should avoid using a large fraction of
the input data for training as you will then not be able to detect
overfitting. The <i>Training fraction</i> setting will be
ignored for tasks 1 and 3.
Specify the fraction of values from the input dataset to
be used for model fitting. The exact set of values is chosen at random
from the dataset.
Should the assessed values be signed deviations or
unsigned?
This filter either computes a statistical model of a dataset or takes
such a model as its second input. Then, the model (however it is
obtained) may optionally be used to assess the input dataset.<p>
This filter iteratively computes the center of k clusters in a space
whose coordinates are specified by the arrays you select. The clusters
are chosen as local minima of the sum of square Euclidean distances from
each point to its nearest cluster center. The model is then a set of
cluster centers. Data is assessed by assigning a cluster center and
distance to the cluster to each point in the input data
set.
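The iteration described above is Lloyd's algorithm. A minimal one-dimensional sketch (not the filter's implementation; real inputs use the selected arrays as coordinates in a k-dimensional space):

```python
# Lloyd's k-means iteration in 1D: alternate between assigning points
# to their nearest center and moving each center to the mean of its
# assigned points, until centers stop moving (within tol).
def kmeans_1d(points, centers, max_iter=100, tol=1e-9):
    for _ in range(max_iter):
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[j].append(p)
        new = [sum(c) / len(c) if c else centers[j]
               for j, c in enumerate(clusters)]
        if max(abs(a - b) for a, b in zip(new, centers)) < tol:
            return new
        centers = new
    return centers

# Two well-separated groups converge to centers near 1.0 and 9.0.
c = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
```

The Max Iterations and tolerance properties below correspond to `max_iter` and `tol` in this sketch.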
The input to the filter. Arrays from this dataset will
be used for computing statistics and/or assessed by a statistical
model.
A previously-calculated model with which to assess a
separate dataset. This input is optional.
Specify which type of field data the arrays will be
drawn from.
Choose arrays whose entries will be used to form
observations for statistical analysis.
Specify the task to be performed: modeling and/or
assessment. <ol> <li> "Detailed model of input data,"
creates a set of output tables containing a calculated statistical
model of the <b>entire</b> input dataset;</li>
<li> "Model a subset of the data," creates an output table (or
tables) summarizing a <b>randomly-chosen subset</b> of the
input dataset;</li> <li> "Assess the data with a model,"
adds attributes to the first input dataset using a model provided on
the second input port; and</li> <li> "Model and assess the
same data," is really just operations 2 and 3 above applied to the same
input dataset. The model is first trained using a fraction of the input
data and then the entire dataset is assessed using that
model.</li> </ol> When the task includes creating a model
(i.e., tasks 2, and 4), you may adjust the fraction of the input
dataset used for training. You should avoid using a large fraction of
the input data for training as you will then not be able to detect
overfitting. The <i>Training fraction</i> setting will be
ignored for tasks 1 and 3.
Specify the fraction of values from the input dataset to
be used for model fitting. The exact set of values is chosen at random
from the dataset.
Specify the number of clusters.
Specify the maximum number of iterations in which
cluster centers are moved before the algorithm
terminates.
Specify the relative tolerance that will cause early
termination.
This filter either computes a statistical model of a dataset or takes
such a model as its second input. Then, the model (however it is
obtained) may optionally be used to assess the input dataset.<p>
This filter computes the covariance matrix for all the arrays you select
plus the mean of each array. The model is thus a multivariate Gaussian
distribution with the mean vector and variances provided. Data is
assessed using this model by computing the Mahalanobis distance for each
input point. This distance will always be positive.<p> The learned
model output format is rather dense and can be confusing, so it is
discussed here. The first filter output is a multiblock dataset
consisting of 2 tables: <ol> <li> Raw covariance data.
<li> Covariance matrix and its Cholesky decomposition. </ol>
The raw covariance table has 3 meaningful columns: 2 titled "Column1" and
"Column2" whose entries generally refer to the N arrays you selected when
preparing the filter and 1 column titled "Entries" that contains numeric
values. The first row will always contain the number of observations in
the statistical analysis. The next N rows contain the mean for each of
the N arrays you selected. The remaining rows contain covariances of
pairs of arrays.<p> The second table (covariance matrix and
Cholesky decomposition) contains information derived from the raw
covariance data of the first table. The first N rows of the first column
contain the name of one array you selected for analysis. These rows are
followed by a single entry labeled "Cholesky" for a total of N+1 rows.
The second column, Mean contains the mean of each variable in the first N
entries and the number of observations processed in the final (N+1)
row.<p> The remaining columns (there are N, one for each array)
contain 2 matrices in triangular format. The upper right triangle
contains the covariance matrix (which is symmetric, so its lower triangle
may be inferred). The lower left triangle contains the Cholesky
decomposition of the covariance matrix (which is triangular, so its upper
triangle is zero). Because the diagonal must be stored for both matrices,
an additional row is required, hence the N+1 rows and
the final entry of the column named "Column".
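The Mahalanobis distance used in the assessment step can be sketched as follows. This two-dimensional illustration inverts the covariance matrix explicitly for clarity; in practice the Cholesky factor stored in the model tables would be used instead:

```python
import math

# Mahalanobis distance in 2D: sqrt((x - mean)^T cov^{-1} (x - mean)).
# Hypothetical helper for illustration, not the filter's code.
def mahalanobis_2d(x, mean, cov):
    dx = [x[0] - mean[0], x[1] - mean[1]]
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

# With an identity covariance this reduces to Euclidean distance.
d = mahalanobis_2d([3.0, 4.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```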
The input to the filter. Arrays from this dataset will
be used for computing statistics and/or assessed by a statistical
model.
A previously-calculated model with which to assess a
separate dataset. This input is optional.
Specify which type of field data the arrays will be
drawn from.
Choose arrays whose entries will be used to form
observations for statistical analysis.
Specify the task to be performed: modeling and/or
assessment. <ol> <li> "Detailed model of input data,"
creates a set of output tables containing a calculated statistical
model of the <b>entire</b> input dataset;</li>
<li> "Model a subset of the data," creates an output table (or
tables) summarizing a <b>randomly-chosen subset</b> of the
input dataset;</li> <li> "Assess the data with a model,"
adds attributes to the first input dataset using a model provided on
the second input port; and</li> <li> "Model and assess the
same data," is really just operations 2 and 3 above applied to the same
input dataset. The model is first trained using a fraction of the input
data and then the entire dataset is assessed using that
model.</li> </ol> When the task includes creating a model
(i.e., tasks 2, and 4), you may adjust the fraction of the input
dataset used for training. You should avoid using a large fraction of
the input data for training as you will then not be able to detect
overfitting. The <i>Training fraction</i> setting will be
ignored for tasks 1 and 3.
Specify the fraction of values from the input dataset to
be used for model fitting. The exact set of values is chosen at random
from the dataset.
This filter either computes a statistical model of a dataset or takes
such a model as its second input. Then, the model (however it is
obtained) may optionally be used to assess the input dataset. <p>
This filter performs additional analysis above and beyond the
multicorrelative filter. It computes the eigenvalues and eigenvectors of
the covariance matrix from the multicorrelative filter. Data is then
assessed by projecting the original tuples into a possibly
lower-dimensional space. <p> Since the PCA filter uses the
multicorrelative filter's analysis, it shares the same raw covariance
table specified in the multicorrelative documentation. The second table
in the multiblock dataset comprising the model output is an expanded
version of the multicorrelative version. <p> As with the
multicorrelative filter, the second model table contains the mean values,
the upper-triangular portion of the symmetric covariance matrix, and the
non-zero lower-triangular portion of the Cholesky decomposition of the
covariance matrix. Below these entries are the eigenvalues of the
covariance matrix (in the column labeled "Mean") and the eigenvectors (as
row vectors) in an additional NxN matrix.
The input to the filter. Arrays from this dataset will
be used for computing statistics and/or assessed by a statistical
model.
A previously-calculated model with which to assess a
separate dataset. This input is optional.
Specify which type of field data the arrays will be
drawn from.
Choose arrays whose entries will be used to form
observations for statistical analysis.
Specify the task to be performed: modeling and/or
assessment. <ol> <li> "Detailed model of input data,"
creates a set of output tables containing a calculated statistical
model of the <b>entire</b> input dataset;</li>
<li> "Model a subset of the data," creates an output table (or
tables) summarizing a <b>randomly-chosen subset</b> of the
input dataset;</li> <li> "Assess the data with a model,"
adds attributes to the first input dataset using a model provided on
the second input port; and</li> <li> "Model and assess the
same data," is really just operations 2 and 3 above applied to the same
input dataset. The model is first trained using a fraction of the input
data and then the entire dataset is assessed using that
model.</li> </ol> When the task includes creating a model
(i.e., tasks 2, and 4), you may adjust the fraction of the input
dataset used for training. You should avoid using a large fraction of
the input data for training as you will then not be able to detect
overfitting. The <i>Training fraction</i> setting will be
ignored for tasks 1 and 3.
Specify the fraction of values from the input dataset to
be used for model fitting. The exact set of values is chosen at random
from the dataset.
Before the eigenvector decomposition of the covariance
matrix takes place, you may normalize each (i,j) entry by sqrt(
cov(i,i) * cov(j,j) ). This implies that the variance of each variable
of interest should be of equal importance.
When reporting assessments, should the full eigenvector
decomposition be used to project the original vector into the new space
(Full basis), or should a fixed subset of the decomposition be used
(Fixed-size basis), or should the projection be clipped to preserve at
least some fixed "energy" (Fixed-energy basis)?<p> As an example,
suppose the variables of interest were {A,B,C,D,E} and that the
eigenvalues of the covariance matrix for these were {5,2,1.5,1,.5}. If
the "Full basis" scheme is used, then all 5 components of the
eigenvectors will be used to project each {A,B,C,D,E}-tuple in the
original data into a new 5-components space.<p> If the
"Fixed-size" scheme is used and the "Basis Size" property is set to 4,
then only the first 4 eigenvector components will be used to project
each {A,B,C,D,E}-tuple into the new space and that space will be of
dimension 4, not 5.<p> If the "Fixed-energy basis" scheme is used
and the "Basis Energy" property is set to 0.8, then only the first 3
eigenvector components will be used to project each {A,B,C,D,E}-tuple
into the new space, which will be of dimension 3. The number 3 is
chosen because 3 is the lowest N for which the sum of the first N
eigenvalues divided by the sum of all eigenvalues is larger than the
specified "Basis Energy" (i.e., (5+2+1.5)/10 = 0.85 >
0.8).
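The "Fixed-energy basis" selection of N can be sketched as follows, using the eigenvalues from the example above:

```python
def fixed_energy_basis_size(eigenvalues, basis_energy):
    """Return the smallest N for which the sum of the first N eigenvalues
    divided by the sum of all eigenvalues exceeds basis_energy."""
    total = sum(eigenvalues)
    running = 0.0
    for n, ev in enumerate(eigenvalues, start=1):
        running += ev
        if running / total > basis_energy:
            return n
    return len(eigenvalues)

# Eigenvalues {5, 2, 1.5, 1, .5} with Basis Energy 0.8:
# (5 + 2 + 1.5) / 10 = 0.85 > 0.8, so N = 3.
n = fixed_energy_basis_size([5, 2, 1.5, 1, 0.5], 0.8)
```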
The maximum number of eigenvector components to use when
projecting into the new space.
The minimum energy to use when determining the
dimensionality of the new space into which the assessment will project
tuples.
Compute robust PCA with medians instead of means.
The Material Interface filter finds voxels inside of which a material
fraction (or normalized amount of material) is higher than a given
threshold. As these voxels are identified, surfaces enclosing adjacent
voxels above the threshold are generated. The resulting volume and its
surface are what we call a fragment. The filter has the ability to
compute various volumetric attributes such as fragment volume, mass,
center of mass as well as volume and mass weighted averages for any of
the fields present. Any field selected for such computation will also
be copied into the fragment surface's point data for visualization. The
filter also has the ability to generate Oriented Bounding Boxes (OBB) for
each fragment. The data generated by the filter is organized in three
outputs. The "geometry" output, containing the fragment surfaces. The
"statistics" output, containing a point set of the centers of mass. The
"obb representation" output, containing OBB representations (poly data).
All computed attributes are copied into the statistics and geometry
output. The obb representation output is used for validation and
debugging purposes and is turned off by default. To measure the size of
craters, the filter can invert a volume fraction and clip the volume
fraction with a sphere and/or a plane.
Input to the filter can be a hierarchical box data set
containing image data or a multiblock of rectilinear
grids.
Material fraction is defined as normalized amount of
material per voxel. It is expected that arrays containing material
fraction data have been down-converted to an unsigned
char.
Material fraction is defined as normalized amount of
material per voxel. Any voxel in the input data set with a material
fraction greater than this value is included in the output data
set.
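A minimal NumPy sketch of this voxel selection step, assuming the usual convention that the unsigned-char fraction maps 0–255 onto 0.0–1.0 (that mapping is an assumption here, not stated above):

```python
import numpy as np

# Material fraction per voxel, down-converted to unsigned char
# (assumed convention: 0 -> 0.0, 255 -> 1.0).
fraction_uchar = np.array([0, 64, 128, 200, 255], dtype=np.uint8)

threshold = 0.5  # minimum normalized material fraction

# Normalize back to [0, 1] and keep voxels above the threshold.
selected = (fraction_uchar / 255.0) > threshold
```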
Inverting the volume fraction generates the negative of
the material. It is useful for analyzing craters.
This property sets the type of clip geometry, and
associated parameters.
Mass arrays are paired with material fraction arrays.
This means that the first selected material fraction array is paired
with the first selected mass array, and so on sequentially. As the
filter identifies voxels meeting the minimum material fraction
threshold, these voxels' masses will be used in the fragment center of
mass and mass calculations. A warning is generated if no mass array is
selected for an individual material fraction array. However, in that
case the filter will run without issue because the statistics output
can be generated using fragments' centers computed from axis aligned
bounding boxes.
Specifies the arrays from which to compute volume-weighted
averages.
For the selected arrays, a volume-weighted average is
computed. The values of these arrays are also copied into fragment
geometry cell data as the fragment surfaces are
generated.
For the selected arrays, a mass-weighted average is computed.
These arrays are also copied into fragment geometry cell data as the
fragment surfaces are generated.
Compute Oriented Bounding Boxes (OBB). When
active the result of this computation is copied into the statistics
output. In the case that the filter is built in its validation mode,
the OBB's are rendered.
If this property is set, then the geometry output is
written to a text file. The file name will be constructed using the
path in the "Output Base Name" widget.
If this property is set, then the statistics output is
written to a text file. The file name will be constructed using the
path in the "Output Base Name" widget.
This property specifies the base including path of where
to write the statistics and geometry output text files. It follows the
pattern "/path/to/folder/and/file" where file has no extension, as the
filter will generate a unique extension.
The Slice Along PolyLine filter is similar to the Slice Filter except that it slices along a surface that
is defined by sweeping the input polyline parallel to the z-axis. Explained another way: take a laser
cutter and move it so that it hits every point on the input polyline while keeping it parallel
to the z-axis. The surface cut from the input dataset is the result.
Set the vtkDataObject to slice.
Set the polyline to slice along.
The threshold used internally to determine correspondence between the polyline
and the output slice. If the output has sections missing, increasing this
value may help.
The Intersect Fragments filter performs geometric intersections on sets of
fragments. The filter takes two inputs, the first containing fragment
geometry and the second containing fragment centers. The filter has two
outputs. The first is geometry that results from the intersection. The
second is a set of points that is an approximation of the center of where
each fragment has been intersected.
This input must contain fragment
geometry.
This input must contain fragment
centers.
This property sets the type of intersecting geometry,
and associated parameters.
vtkGaussianSplatter
is a filter that injects input points into a structured
points (volume) dataset. As each point is injected, it
"splats" or distributes values to nearby voxels. Data is
distributed using an elliptical, Gaussian distribution
function. The distribution function is modified using
scalar values (expands distribution) or normals (creates
ellipsoidal distribution rather than spherical). Warning:
results may be incorrect in parallel as points can't splat
into other processors' cells.
This property specifies the input to the
filter.
Choose a scalar array to splat into the output cells. If
ignore arrays is chosen, point density will be counted
instead.
Set / get the dimensions of the sampling structured
point set. Higher values produce better results but are much
slower.
Set / get the (xmin,xmax, ymin,ymax, zmin,zmax) bounding
box in which the sampling is performed. If any (min,max) pair of
bounds has min >= max, then the bounds will be computed
automatically from the input data. Otherwise, the user-specified bounds
will be used.
Set / get the radius of propagation of the splat. This
value is expressed as a percentage of the length of the longest side of
the sampling volume. Smaller numbers greatly reduce execution
time.
Set / get the sharpness of decay of the splats. This is
the exponent constant in the Gaussian equation. Normally this is a
negative value.
Turn on/off the scaling of splats by scalar
value.
Multiply Gaussian splat distribution by this value. If
ScalarWarping is on, then the Scalar value will be multiplied by the
ScaleFactor times the Gaussian function.
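Putting the last three properties together, the isotropic (non-warped) splat contribution at distance r from an injected point can be sketched as follows. This is a simplified model of the kernel built from the property descriptions above, not the exact VTK implementation:

```python
import math

def splat_value(r, radius, exponent_factor=-5.0, scale_factor=1.0):
    """Gaussian splat contribution at distance r from the injected point.
    exponent_factor is normally negative, so the value decays with r;
    scale_factor multiplies the whole distribution."""
    return scale_factor * math.exp(exponent_factor * (r / radius) ** 2)

# The contribution is largest at the point itself and decays outward.
center = splat_value(0.0, radius=1.0)
edge = splat_value(1.0, radius=1.0)
```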
Turn on/off the generation of elliptical splats. If
normal warping is on, then the input normals affect the distribution of
the splat. This boolean is used in combination with the Eccentricity
ivar.
Control the shape of elliptical splatting. Eccentricity
is the ratio of the major axis (aligned along the normal) to the minor
axes (aligned along the other two axes). So Eccentricity > 1 creates
needles with the long axis in the direction of the normal; Eccentricity
< 1 creates pancakes perpendicular to the normal
vector.
Turn on/off the capping of the outer boundary of the
volume to a specified cap value. This can be used to close surfaces
(after isosurfacing) and create other effects.
Specify the cap value to use. (This instance variable
only has effect if the ivar Capping is on.)
Specify the scalar accumulation mode. This mode
expresses how scalar values are combined when splats are overlapped.
The Max mode acts like a set union operation and is the most commonly
used; the Min mode acts like a set intersection, and the sum is just
weird.
Set the Null value for output points not receiving a
contribution from the input points. (This is the initial value of the
voxel samples.)
Computes linear material interfaces in 2D or 3D mixed
cells produced by Eulerian or ALE simulation
codes.
The Pass Arrays filter makes a shallow copy of the input
data object to the output, passing only the
specified arrays from the input to the
output.
Add a point array by name to be passed.
Add a cell array by name to be passed.
Add a field array by name to be passed.
This hidden property must always be set to 1 for this
proxy to work.
This hidden property must always be set to 0 for this
proxy to work.
This property specifies the input to the Cell Data to
Point Data filter.
This property specifies the number of levels in the AMR data structure.
This property specifies the maximum number of blocks in the output
AMR data structure.
This property specifies the refinement ratio between levels.
Create a vtkUniformGrid from a vtkImageData by passing in arrays to be used
for point and/or cell blanking. By default, values of 0 in the specified
array will result in a point or cell being blanked. Use Reverse to switch this.
Specify the array to use for blanking.
Reverse the interpretation of the array values when deciding whether a point or cell is blanked.
This filter aggregates a dataset onto a subset of processes.
This property specifies the input to the filter.
This property specifies the number of target processes to
aggregate the dataset onto.
This filter generates a periodic
multiblock dataset.
This property specifies the input to the Periodic filter.
This property lists the ids of the blocks to make periodic
from the input multiblock dataset.
This property specifies the mode of iteration, either a user-provided number
of periods, or the maximum number of periods to rotate to 360 degrees.
This property specifies the number of iterations.
This property specifies the mode of rotation, either from a user-provided
angle or from an array in the data.
Rotation angle in degrees.
Field array name that contains the rotation angle in radians.
This property specifies the axis of rotation.
This property specifies the 3D coordinates for the
center of the rotation.
Specify whether the rotations should be computed on-the-fly, which is
compute intensive, or if the arrays should be explicitly generated and
stored, at the cost of using more memory.
This property specifies the dataset whose data will
be probed.
Use input bounds or custom ones?
The number of linear samples taken along each axis.
Custom probing bounds
This filter takes two inputs - Input and Source, and samples the
point and cell values of Input on to the point locations of Source.
The output has the same structure as Source but its point data
have the resampled values from Input.
This property specifies the dataset from which to obtain
probe values. The data attributes come from this dataset.
This property specifies the dataset whose geometry will
be used in determining positions to probe. The mesh comes from this
dataset.
Control whether the source point data is to be
treated as categorical. If the data is categorical, then the
resultant data will be determined by a nearest neighbor
interpolation scheme rather than by linear interpolation.
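The distinction matters for label-like data: linear interpolation can produce values that exist in neither neighbor, while nearest-neighbor always returns an existing label. A 1D sketch (function names are illustrative):

```python
def linear_interp(x, x0, x1, v0, v1):
    """Linear interpolation between two samples at x0 and x1."""
    t = (x - x0) / (x1 - x0)
    return (1 - t) * v0 + t * v1

def nearest_interp(x, x0, x1, v0, v1):
    """Nearest-neighbor interpolation: pick the closer sample's value."""
    return v0 if abs(x - x0) <= abs(x - x1) else v1

# Categorical labels 1 and 3 at x = 0 and x = 1, probed at x = 0.4:
linear = linear_interp(0.4, 0.0, 1.0, 1, 3)    # 1.8 -- not a valid label
nearest = nearest_interp(0.4, 0.0, 1.0, 1, 3)  # 1   -- an existing label
```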
When set the input's cell data arrays are shallow copied to the output.
When set the input's point data arrays are shallow copied to the output.
Set whether to pass the field-data arrays from the Input i.e. the input
providing the geometry to the output. On by default.
Set whether to compute the tolerance or to use a user provided
value. On by default.
Set the tolerance to use for
vtkDataSet::FindCell
When set, points that did not get valid values during resampling, and
cells containing such points, are marked as blank.
The cell locator to use for finding cells for probing.
This filter creates a line along the object and defaults its
representation to showing a ruler along that line.
Select along which axis the ruler should be aligned. Note:
this filter requires that all points in the dataset to which it is applied
be copied to a single rank when this option is set to any of the
Oriented Bound Box options, so make sure the dataset can fit on one rank
before applying this filter.
Synchronize time step values in the first input (Input) to time step
values in the second input (Source) that are considered close enough.
The output dataset comes from the first input, and the number of
output time steps equals the number of time steps in
the first input. Time step values in the first input that are
"close" to time step values in the second input are replaced
with the value from the second input. Two time step values are
considered close if their difference is less than RelativeTolerance
multiplied by the time range of the first input.
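The closeness test described above can be sketched as follows (a simplified model of the behavior, with illustrative names):

```python
def synchronize_time_steps(input_times, source_times, relative_tolerance):
    """Replace each time in input_times with a 'close' time from
    source_times. Close means the difference is below
    relative_tolerance multiplied by the time range of the first input."""
    time_range = max(input_times) - min(input_times)
    tolerance = relative_tolerance * time_range
    result = []
    for t in input_times:
        nearest = min(source_times, key=lambda s: abs(s - t))
        result.append(nearest if abs(nearest - t) < tolerance else t)
    return result

# Time range of the first input is 10, so the tolerance is 0.1 * 10 = 1.
# 1.02 is within 1.0 of 1.0 and gets replaced; 0.0 and 10.0 are kept.
synced = synchronize_time_steps([0.0, 1.02, 10.0], [1.0, 5.0], 0.1)
```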
This property specifies the dataset whose geometry and
fields will be output.
This property specifies the dataset from which to obtain
the time step values.
The D3 filter is
available when ParaView is run in parallel. It operates on
any type of data set to evenly divide it across the
processors into spatially contiguous regions. The output
of this filter is of type unstructured
grid.
This property specifies the input to the D3
filter.
This property determines how cells that lie on processor
boundaries are handled. The "Assign cells uniquely" option assigns each
boundary cell to exactly one process, which is useful for isosurfacing.
Selecting "Duplicate cells" causes the cells on the boundaries to be
copied to each process that shares that boundary. The "Divide cells"
option breaks cells across process boundary lines so that pieces of the
cell lie in different processes. This option is useful for volume
rendering.
If this property is set to 1, the D3 filter requires its
communication routines to use less memory than they would without this
restriction.
The minimum number of ghost levels to add to each
processor's output. If the pipeline also requests ghost levels, the
larger value will be used.
The GhostCellGenerator operates on unstructured grids only.
This filter does not redistribute the input data, it only
generates ghost cells at processor boundaries by fetching
topological and geometrical information of those cells on
neighbor ranks. The filter can take advantage of global point
ids if they are available - if so it will perform faster,
otherwise point coordinates will be exchanged and processed.
This property specifies the input to the ghost cells
generator.
Specify if the filter must generate the ghost cells only
if required by the pipeline downstream. To force at least a fixed level
of ghosts, this must be set to 0 (unchecked).
When **BuildIfRequired** is off, use this to specify the minimum number of
ghost cells to request. The filter may request more ghost levels than indicated if a
downstream filter asked for more ghost levels.
Specify whether the filter should take advantage of global point
ids if they exist or if point coordinates should be used instead.
This property provides the name of the input array
containing the global point ids if the GlobalIds array of the point
data is not set. Default is GlobalNodeIds.
The Particle Trace filter generates pathlines in a vector
field from a collection of seed points. The vector field
used is selected from the Vectors menu, so the input data
set is required to have point-centered vectors. The Seed
portion of the interface allows you to select whether the
seed points for this integration lie in a point cloud or
along a line. Depending on which is selected, the
appropriate 3D widget (point or line widget) is displayed
along with traditional user interface controls for
positioning the point cloud or line within the data set.
Instructions for using the 3D widgets and the
corresponding manual controls can be found in section 7.4.
This filter operates on any type of data set, provided it
has point-centered vectors. The output is polygonal data
containing polylines. This filter is available on the
Toolbar.
Specify the restart dataset. This is optional and
can be used to have particle histories that were computed
previously be included in this filter's computation.
Clear the particle cache from previous time steps.
Set the first time step. Default is 0.
Specify whether or not this is a restarted simulation.
Prevents the cache from being reset so that new computations
always start from previous results.
This filter converts vtkHyperTreeGrid data to vtkUnstructuredGrid. The converted output consumes much more memory but is compatible with most of the standard filters.
This property specifies the input to the converter.
This filter extracts isocontours directly from HyperTreeGrid input datasets.
This property specifies the input to the converter.
This property specifies the name of the scalar array
from which the contour filter will compute isolines and/or
isosurfaces.
This property specifies the values at which to compute
isosurfaces/isolines and also the number of such
values.
This filter thresholds directly from HyperTreeGrid input datasets.
This property specifies the input to the converter.
This property specifies the name of the scalar array
on which the thresholding operation will be
performed.
The values of this property specify the upper and lower
bounds of the thresholding operation.
Clip a hyper tree grid along an axis-aligned plane or box and output
a hyper tree grid with the same dimensionality.
Type of clip operation to use.
Axis to use as the normal to the clip plane.
Position of the clip plane along the normal axis.
By default, the portion of the dataset above the clip plane is kept.
Enabling this option will keep the portion below the plane.
Defines the most negative point of the clip box.
Defines the most positive point of the clip box.
Coefficients of the quadric function, defined as the 'a' values in:
F(x,y,z) = a0*x^2 + a1*y^2 + a2*z^2 + a3*x*y + a4*y*z + a5*x*z + a6*x + a7*y + a8*z + a9
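A direct evaluation of that quadric function, useful for checking the coefficient ordering (the sphere example is illustrative):

```python
def quadric(a, x, y, z):
    """Evaluate F(x,y,z) for coefficients a0..a9 in the order above:
    a0*x^2 + a1*y^2 + a2*z^2 + a3*x*y + a4*y*z + a5*x*z
    + a6*x + a7*y + a8*z + a9."""
    return (a[0] * x * x + a[1] * y * y + a[2] * z * z
            + a[3] * x * y + a[4] * y * z + a[5] * x * z
            + a[6] * x + a[7] * y + a[8] * z + a[9])

# A unit sphere centered at the origin: x^2 + y^2 + z^2 - 1 = 0.
sphere = [1, 1, 1, 0, 0, 0, 0, 0, 0, -1]
on_surface = quadric(sphere, 1.0, 0.0, 0.0)
```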
Cut a hyper tree grid along an axis-aligned plane and output a new hyper
tree grid. Only works for 3D grids.
Axis to use as the normal to the cut plane.
Position of the cut plane along the normal axis.
This filter reflects the cells of a hyper tree grid with respect to
one of the planes parallel to the bounding box of the data set.
Axis to use as the normal to the reflection plane.
Position of the reflection plane along the normal axis.
This filter generates output points at the center of the leaf
cells in the hyper tree grid.
These points can be used for placing glyphs or labeling.
The cell attributes will be associated with the points in the output.
If enabled, vertex cells will be added to the output dataset. This
is useful for visualizing the output points, which are not rendered
otherwise.
Extract all levels down to a specified depth from a hyper tree grid.
If the required depth is greater or equal to the maximum level of the
input grid, then the output is identical.
Note that when a material mask is present, the geometry extent of the
output grid is guaranteed to contain that of the input tree, but the
former might be strictly larger than the latter. This is not a bug
but an expected behavior of which the user should be aware.
Maximum depth to which the output grid should be limited.
Generate PolyData representing the external surface of an HTG.
Takes as input a hyper tree grid and a single plane and generates the
polygonal data intersection surface.
This property sets the parameters of the plane
function.
If enabled, the output plane is computed using the dual grid, providing
perfect connectivity at the cost of speed. If disabled, the AMR mesh
is used directly, and the output is obtained faster and is more useful
for rendering.
Convert a point set into a molecule. Every point of the input becomes an atom
of the output molecule. It needs a point array containing the atomic numbers.
This property indicates the name of the scalar array
corresponding to atomic numbers.
This property determines if the lines (cell of type VTK_LINE) are converted into bonds.
Convert a molecule into lines. Each atom of the input becomes a point of the output polydata, each bond a line.
Appends one or more molecules into a single molecule. It also appends the associated atom data and edge data.
Note that input data arrays should match (same number of arrays with same names in each input).
Compute the bonds of a molecule. If the
interatomic distance is less than the sum of the two atoms' covalent radii
(and a tolerance), a single bond is added.
This algorithm does not consider valences, hybridization, aromaticity, or
anything other than atomic separations. It will not produce anything other
than single bonds.
This property determines the tolerance to apply on covalent radius.
This property determines if the tolerance is absolute (the value is added to the radius and should be positive)
or relative (the value is multiplied by the radius and should be greater than 1).
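The two tolerance modes can be sketched as follows. This is a simplified model of the criterion described above, with illustrative names and example radii; the carbon covalent radius used is approximate:

```python
import math

def has_bond(p1, r1, p2, r2, tolerance, absolute=True):
    """Return True if the interatomic distance is below the sum of the
    covalent radii adjusted by the tolerance. In absolute mode the
    tolerance is added to the radii sum; otherwise it multiplies it."""
    distance = math.dist(p1, p2)
    radii = r1 + r2
    cutoff = radii + tolerance if absolute else radii * tolerance
    return distance < cutoff

# Two carbon atoms (covalent radius ~0.76 Angstrom) 1.54 Angstrom apart:
# cutoff = 0.76 + 0.76 + 0.1 = 1.62 > 1.54, so a single bond is added.
bonded = has_bond((0, 0, 0), 0.76, (1.54, 0, 0), 0.76, tolerance=0.1)
```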
This filter fairly distributes points over processors into contiguous spatial regions.
The output is a PolyData which does not contain any cell.
Distribution is done using a Kd-tree.