Commit d45e37be authored by Roxana Bujack's avatar Roxana Bujack Committed by Berk Geveci

Add the moment invariants module.

This module provides classes and filters for rotation invariant pattern
detection in 2D and 3D scalar-, vector-, and matrix-valued datasets.
vtkComputeMoments computes the moments. Its output, together with the
pattern, goes into vtkMomentInvariants, which produces a field containing
the similarity of the pattern to each point. Detailed information on
the theory can be found in Bujack and Hagen
"Moment Invariants for Multi-Dimensional Data":
http://www.informatik.uni-leipzig.de/~bujack/2017TensorDagstuhl.pdf.
A detailed description of the functionality of the module can be
found here:
http://www.informatik.uni-leipzig.de/~bujack/2017MomentInvariantsModule.pdf
parent 3258f91c
# Rotation-invariant Pattern Detection
For pattern detection, the orientation of the pattern is usually not known a priori. The search for all possible rotated copies of the template should not slow down the detection process more than necessary. Therefore, rotation invariance is a critical requirement.
Moment invariants can achieve rotation invariance without the need for point-to-point correlations, which are difficult to generate in smooth fields. For an introduction, we recommend
*Flusser, J., Suk, T., & Zitová, B. (2016). 2D and 3D Image Analysis by Moments. John Wiley & Sons*.
We have implemented prototypes of two VTK filters that together perform the pattern detection. The algorithm we use is described in
*Bujack, R., & Hagen, H. (2017). Moment Invariants for Multi-Dimensional Data. In Modeling, Analysis, and Visualization of Anisotropy (pp. 43-64). Springer, Cham*.
The first filter computes the moments; the second performs the normalization based on the given pattern and computes the similarity. Both handle two- and three-dimensional scalar, vector, and matrix fields in the format of vtkImageData. The architecture with its inputs, outputs, and their types can be found in the following figure.
![The architecture of the moments module.][workflow with types]
[workflow with types]: chartTypes.jpg "The architecture of the moments module."
The architecture illustrated with example images is shown in the following figure.
![The architecture of the moments module.][workflow with images]
[workflow with images]: chartOverview.jpg "The architecture of the moments module with example images."
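The following minimal Python sketch shows how the two filters are chained together, following the calls used in the module's Python test. Here `dataset`, `coarseGrid`, and `pattern` are assumed to be vtkImageData objects prepared by the caller, and `radius` and the point-data name are placeholder values.

```python
import vtk

# moments of the field, evaluated on a (possibly coarser) grid
momentsAlgo = vtk.vtkComputeMoments()
momentsAlgo.SetFieldData(dataset)         # the field in which we search
momentsAlgo.SetGridData(coarseGrid)       # where the moments are evaluated
momentsAlgo.SetNameOfPointData("values")
momentsAlgo.SetOrder(2)
momentsAlgo.SetRadiiArray([radius, 0, 0, 0, 0, 0, 0, 0, 0, 0])
momentsAlgo.Update()

# normalization with respect to the pattern and computation of the similarity field
invariantsAlgo = vtk.vtkMomentInvariants()
invariantsAlgo.SetMomentData(momentsAlgo.GetOutput())
invariantsAlgo.SetPatternData(pattern)    # the template we look for
invariantsAlgo.SetNameOfPointData("values")
invariantsAlgo.Update()
similarityField = invariantsAlgo.GetOutput()  # similarity of the pattern to each point
```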
<!--
# Theory
Moments are the projections of a function with respect to a function space basis. We can think of them as the coordinates that represent the pattern.
They can then be used to construct moment invariants - values that do not change under certain transformations.
We will follow the normalization approach for the construction of moment invariants. That means a standard position is defined by demanding certain moments to assume fixed values and all functions are transformed to match it.
Then the remaining moments form a complete and independent set of moment invariants.
In *Dirilten, H., & Newman, T. G. (1977). Pattern matching under affine transformations. IEEE Transactions on Computers, 26(3), 314-317*, Dirilten and Newman suggest the use of moment tensors for the construction of moment invariants through tensor contraction for scalar functions.
Langbein et al. have generalized the definition of the moment tensor to tensor valued functions in *Langbein, M., & Hagen, H. (2009). A generalization of moment invariants on 2D vector fields to tensor fields of arbitrary order and dimension. Advances in Visual Computing, 1151-1160*.
**Definition:** For a tensor field $T:\R^d\to\R^{d^n \times d^m}$ with compact support, the **moment tensor** $\leftidx{^o}M$ of order $o\in\N$ takes the shape
\begin{equation} \begin{aligned}\label{mom_tensor2}
\leftidx{^o}M=\int_{\R^d} x^{\otimes o} \otimes T(x)\d^d x,
\end{aligned}\end{equation}
where $x^{\otimes o}$ denotes the $o$-th tensor power of the vector $x$.
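For example, for a scalar function ($n=m=0$) in $d=2$ dimensions, the moment tensor of order $o=2$ has the components
\begin{equation} \begin{aligned}
\leftidx{^2}M_{ij}=\int_{\R^2} x_i\, x_j\, T(x) \d^2 x, \qquad i,j\in\{1,2\},
\end{aligned}\end{equation}
which coincide with the classical second-order moments.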
-->
# Extensions
The **MomentInvariants** module actually contains a number of additional algorithms and helper classes.
The class **vtkMomentsHelper** provides functions for the moments computation that will be needed by vtkComputeMoments and vtkMomentInvariants.
The class **vtkMomentsTensor** provides the functionality to treat tensors of arbitrary dimension and rank. It supports addition, outer product, and contractions.
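To illustrate what these tensor operations do, here is a plain numpy sketch (this is not the vtkMomentsTensor API, which is a C++ helper class): the outer product raises the rank by one, and a contraction over a pair of indices lowers it by two.

```python
import numpy as np

M = np.random.rand(2, 2)   # stands in for a 2D moment tensor of rank 2
x = np.random.rand(2)      # a position vector

outer = np.multiply.outer(x, M)          # outer product: rank 3, shape (2, 2, 2)
contracted = np.einsum('iij->j', outer)  # contraction over the first two indices: rank 1, shape (2,)
total = M + M                            # addition is component-wise
```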
The algorithm **vtkSimilarityBalls** is a filter that takes the similarity field as produced by vtkMomentInvariants and a grid of type vtkImageData. It computes the local maxima in space plus scale and produces the output localMaxSimilarity that contains the similarity value together with the corresponding radius at the maxima. All other points are zero.
For further visualization, it also produces two output fields that encode the radius by drawing a solid ball or a hollow sphere around those places.
The second input, i.e. the grid, steers the resolution of the balls. It is helpful if its extent is a multiple of the first input's extent; then the circles are centered nicely.
The spheres/circles are good for 2D visualizations, because they can be laid over a visualization of the field.
The balls are good for 3D volume rendering or steering of the seeding of visualization elements.
The 2D visualization is described in
*Bujack, R., Hotz, I., Scheuermann, G., & Hitzer, E. (2015). Moment invariants for 2D flow fields via normalization in detail. IEEE transactions on visualization and computer graphics, 21(8), 916-929*
and the 3D counterpart in
*Bujack, R., Kasten, J., Hotz, I., Scheuermann, G., & Hitzer, E. (2015, April). Moment invariants for 3D flow fields via normalization. In Visualization Symposium (PacificVis), 2015 IEEE Pacific (pp. 9-16). IEEE*.
A schematic overview of the use of vtkSimilarityBalls with example images is given in the following figure.
![The extended architecture of the moments module.][workflow of vtkSimilarityBalls]
[workflow of vtkSimilarityBalls]: chartBalls.jpg "The extended architecture of the moments module: vtkSimilarityBalls."
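A minimal sketch of how vtkSimilarityBalls is hooked up, again following the module's Python test; `similarityField` is the output of vtkMomentInvariants from the sketch above, and `fineGrid` is an assumed vtkImageData that steers the resolution of the balls.

```python
import vtk

ballsAlgo = vtk.vtkSimilarityBalls()
ballsAlgo.SetSimilarityData(similarityField)  # similarity field from vtkMomentInvariants
ballsAlgo.SetGridData(fineGrid)               # grid that steers the resolution of the balls and spheres
ballsAlgo.Update()

# local maxima in space plus scale; all other points are zero
# (array name as used in the module's Python test)
localMax = ballsAlgo.GetOutput(0).GetPointData().GetArray("localMaxValue")
print("strongest match:", localMax.GetRange()[1])
```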
The algorithm **vtkReconstructFromMoments** is a filter that takes the momentData as produced by vtkComputeMoments or vtkMomentInvariants and a grid.
It reconstructs the function from the moments, just like from the coefficients of a Taylor series.
For the reconstruction, we need to orthonormalize the moments first. Then, we multiply the coefficients with their corresponding basis function and add them up.
There are in principle three applications.
First, if we put in the moments of the pattern and the grid of the pattern, we see which parts of the template the algorithm can actually grasp with the given order during the pattern detection. The following figure shows images created using moments up to second order.
![The extended architecture of the moments module.][workflow to reconstruct the pattern]
[workflow to reconstruct the pattern]: chartReconstructPattern.jpg "The extended architecture of the moments module: reconstruction of the pattern."
Second, if we put in the normalized moments of the pattern and the grid of the pattern, we can see what the standard position looks like. There might be several standard positions, because the eigenvectors are only determined up to rotations of 180 degrees and possibly a reflection. The algorithm will use the first one. In the previous figure, the reflection along the x-axis would also be a standard position.
Third, if we put in the moments of the field and the original field data, we can see how well the subset of points, on which the moments were computed, actually represents the field. The following figure depicts an example using a 16 x 16 coarse grid and moments up to second order.
![The extended architecture of the moments module.][workflow to reconstruct the field]
[workflow to reconstruct the field]: chartReconstructField.jpg "The extended architecture of the moments module: reconstruction of the field."
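A sketch of the reconstruction step, assuming the class is wrapped and used like the other filters above; `momentsAlgo` is the vtkComputeMoments instance from the first sketch (for the second application, the normalized moments from vtkMomentInvariants would be connected instead), and `patternGrid` is the grid on which to reconstruct.

```python
import vtk

reconstructAlgo = vtk.vtkReconstructFromMoments()
reconstructAlgo.SetMomentsData(momentsAlgo.GetOutput())  # moments from vtkComputeMoments or vtkMomentInvariants
reconstructAlgo.SetGridData(patternGrid)                 # steers the topology of the reconstructed field
reconstructAlgo.SetAllowExtrapolation(False)             # zero outside the integration radius
reconstructAlgo.Update()
reconstruction = reconstructAlgo.GetOutput()
```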
set(Module_SRCS
  vtkMomentsTensor.h
  vtkComputeMoments.cxx
  vtkMomentInvariants.cxx
  vtkMomentsHelper.cxx
  vtkSimilarityBalls.cxx
  vtkReconstructFromMoments.cxx
)
vtk_module_library(vtkFiltersMomentInvariants ${Module_SRCS})
find_package(Eigen3 REQUIRED)
target_include_directories(vtkFiltersMomentInvariants PRIVATE ${EIGEN3_INCLUDE_DIR})
vtk_add_test_python(
  NO_VALID
  patternDetectionTestSimple.py
)
"""
we cutout a random point from the given dataset,
rotate it, scale it, multiply with a constant, add a constant
and look for it.
The test is considered successful if the original position is the global maximum of similarity.
Inaccuracies come from the resolution of the dataset and the pattern as well as the coarse resolutions.
I chose the numbers to perfectly match and the rotation angle to be a multiple of 90 degree. So that the sampling error is suppressed.
"""
import vtk
try:
    import numpy as np
except ImportError:
    print("Numpy (http://numpy.scipy.org) not found.")
    print("This test requires numpy!")
    from vtk.test import Testing
    Testing.skip()
import sys
import os, shutil
import math
from patternDetectionHelper import *
import random
from vtk.util.misc import vtkGetDataRoot
VTK_DATA_ROOT = vtkGetDataRoot()
numberOfIntegrationSteps = 0
order = 2
angleResolution = 8
eps = 1e-2
numberOfMoments = 5 #4*n+5
nameOfPointData = "values"
def doTest(filePath):
    if not os.path.isfile(filePath):
        print("File: " + filePath + " does not exist.")
        sys.exit(1)
    else:
        (dataset, bounds, dimension) = readDataset(filePath)
    npBounds = np.array(bounds)
    if (dataset.GetDimensions()[0]-1) % (numberOfMoments-1) != 0:
        print("Warning! The numberOfMoments will not produce a true coarse version of the input dataset.")
    # produce coarse grid where the moments will be computed
    if dimension == 3:
        momentDataCoarse = createCoarseDataset(bounds, numberOfMoments, numberOfMoments, numberOfMoments)
    else:
        momentDataCoarse = createCoarseDataset(bounds, numberOfMoments, numberOfMoments, 0)
    # produce pattern
    if dimension == 3:
        radiusPattern = np.amin(npBounds[1::2]-npBounds[0::2]) / 10
        offset = max(radiusPattern, momentDataCoarse.GetSpacing()[0], momentDataCoarse.GetSpacing()[1], momentDataCoarse.GetSpacing()[2])
        patternPosition = [random.uniform(bounds[2*i]+offset, bounds[2*i+1]-offset) for i in range(3)]
    else:
        radiusPattern = np.amin((npBounds[1::2]-npBounds[0::2])[:-1]) / 10
        offset = max(radiusPattern, momentDataCoarse.GetSpacing()[0], momentDataCoarse.GetSpacing()[1])
        patternPosition = [random.uniform(bounds[2*i]+offset, bounds[2*i+1]-offset) for i in range(3)]
        patternPosition[2] = bounds[5]
    patternPosition = momentDataCoarse.GetPoint(momentDataCoarse.FindPoint(patternPosition))
    # rotate, multiply constant, change size, add constant
    rotationAngle = 2 * math.pi * 0.25 * random.randint(0,3)
    factorOuter = random.uniform(0.5,2)
    factorInner = random.uniform(0.5,2)
    summand = [random.uniform(-1,1) for i in range(9)]
    # rotationAngle = 0
    # factorInner= 1
    # factorOuter= 1
    # summand = [0 for i in range(9)]
    if dimension == 2:
        summand[2] = summand[5] = summand[6] = summand[7] = summand[8] = 0
    pattern = scaleDataset(shiftDataset(rotateDatasetExact(cutoutPattern(dataset, dimension, patternPosition, radiusPattern), rotationAngle, nameOfPointData),summand, nameOfPointData),factorOuter, nameOfPointData)
    pattern.SetSpacing(pattern.GetSpacing()[0]*factorInner, pattern.GetSpacing()[1]*factorInner, pattern.GetSpacing()[2]*factorInner)
    print("patternPosition=", patternPosition, " rotationAngle=", rotationAngle, " factorInner=", factorInner, " factorOuter=", factorOuter, " summand=", summand)
    # compute moments
    momentsAlgo = vtk.vtkComputeMoments()
    momentsAlgo.SetFieldData(dataset)
    momentsAlgo.SetGridData(momentDataCoarse)
    momentsAlgo.SetNameOfPointData(nameOfPointData)
    momentsAlgo.SetNumberOfIntegrationSteps(numberOfIntegrationSteps)
    momentsAlgo.SetOrder(order)
    momentsAlgo.SetRadiiArray([radiusPattern,0,0,0,0,0,0,0,0,0])
    momentsAlgo.Update()
    # pattern detection
    invariantsAlgo = vtk.vtkMomentInvariants()
    invariantsAlgo.SetMomentData(momentsAlgo.GetOutput())
    invariantsAlgo.SetPatternData(pattern)
    invariantsAlgo.SetNameOfPointData(nameOfPointData)
    invariantsAlgo.SetIsScaling(1)
    invariantsAlgo.SetIsTranslation(1)
    invariantsAlgo.SetIsRotation(1)
    invariantsAlgo.SetIsReflection(1)
    invariantsAlgo.SetNumberOfIntegrationSteps(numberOfIntegrationSteps)
    invariantsAlgo.SetAngleResolution(angleResolution)
    invariantsAlgo.SetEps(eps)
    invariantsAlgo.Update()
    # print invariantsAlgo
    # detection of the local maxima; provides visualization with solid balls and hollow spheres
    ballsAlgo = vtk.vtkSimilarityBalls()
    ballsAlgo.SetSimilarityData(invariantsAlgo.GetOutput())
    ballsAlgo.SetGridData(dataset)
    # ballsAlgo.SetKindOfMaxima(2)
    ballsAlgo.Update()
    print("maxSimilarity =", ballsAlgo.GetOutput(0).GetPointData().GetArray("localMaxValue").GetRange()[1])
    # print ballsAlgo.GetOutput().FindPoint(patternPosition)
    if ballsAlgo.GetOutput(0).GetPointData().GetArray("localMaxValue").GetTuple1(ballsAlgo.GetOutput().FindPoint(patternPosition)) == ballsAlgo.GetOutput(0).GetPointData().GetArray("localMaxValue").GetRange()[1]:
        return
    else:
        sys.exit(1)

files = ["2DScalar.vti", "2DMatrix.vti", "2DVector.vti", "3DScalar.vti"]
for f in files:
    filePath = VTK_DATA_ROOT + "/Data/" + f
    doTest(filePath)
sys.exit(0)
vtk_module(vtkFiltersMomentInvariants
  TEST_DEPENDS
    vtkIOXML
  DEPENDS
    vtkFiltersCore
    vtkImagingCore
  )
/*=========================================================================
Program: Visualization Toolkit
Module: vtkReconstructFromMoments.h
Copyright (c) Ken Martin, Will Schroeder, Bill Lorensen
All rights reserved.
See Copyright.txt or http://www.kitware.com/Copyright.htm for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notice for more information.
=========================================================================*/
/*=========================================================================
Copyright (c) 2017, Los Alamos National Security, LLC
All rights reserved.
Copyright 2017. Los Alamos National Security, LLC.
This software was produced under U.S. Government contract DE-AC52-06NA25396
for Los Alamos National Laboratory (LANL), which is operated by
Los Alamos National Security, LLC for the U.S. Department of Energy.
The U.S. Government has rights to use, reproduce, and distribute this software.
NEITHER THE GOVERNMENT NOR LOS ALAMOS NATIONAL SECURITY, LLC MAKES ANY WARRANTY,
EXPRESS OR IMPLIED, OR ASSUMES ANY LIABILITY FOR THE USE OF THIS SOFTWARE.
If software is modified to produce derivative works, such modified software
should be clearly marked, so as not to confuse it with the version available
from LANL.
Additionally, redistribution and use in source and binary forms, with or
without modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
- Neither the name of Los Alamos National Security, LLC, Los Alamos National
Laboratory, LANL, the U.S. Government, nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY LOS ALAMOS NATIONAL SECURITY, LLC AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL LOS ALAMOS NATIONAL SECURITY, LLC OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
=========================================================================*/
/**
* @class vtkReconstructFromMoments
* @brief reconstruct the underlying function from its moments
*
* vtkReconstructFromMoments is a filter that takes the momentData as
* produced by vtkComputeMoments or vtkMomentInvariants and a grid. It
* reconstructs the function from the moments, just like from the
 * coefficients of a Taylor series. There are in principle three
 * applications.
 * 1. If we put in the moments of the pattern and the grid of the pattern,
 * we see what the algorithm can actually see during the pattern detection.
 * 2. If we put in the normalized moments of the pattern and the grid of the
 * pattern, we see what the standard position looks like.
 * 3. If we put in the moments of the field and the original field data, we
 * can see how well the subset of points, on which the moments were computed,
 * actually represents the field. For the reconstruction, we need to
 * orthonormalize the moments first. Then, we multiply the coefficients with
 * their corresponding basis function and add them up.
* @par Thanks:
* Developed by Roxana Bujack at Los Alamos National Laboratory.
*/
#ifndef vtkReconstructFromMoments_h
#define vtkReconstructFromMoments_h
#include "vtkDataSetAlgorithm.h"
#include "vtkFiltersMomentInvariantsModule.h" // For export macro
#include <vector> // For internal vector methods
class vtkImageData;
class VTKFILTERSMOMENTINVARIANTS_EXPORT vtkReconstructFromMoments : public vtkDataSetAlgorithm
{
public:
static vtkReconstructFromMoments* New();
vtkTypeMacro(vtkReconstructFromMoments, vtkDataSetAlgorithm);
void PrintSelf(ostream& os, vtkIndent indent) override;
/**
 * standard pipeline input for port 0
* This is the data as produced by vtkComputeMoments or vtkMomentInvariants.
*/
void SetMomentsData(vtkDataObject* input) { this->SetInputData(0, input); };
/**
 * standard pipeline input for port 0
* This is the data as produced by vtkComputeMoments or vtkMomentInvariants.
*/
void SetMomentsConnection(vtkAlgorithmOutput* algOutput)
{
this->SetInputConnection(0, algOutput);
};
/**
* standard pipeline input for port 1
* This input steers the topology of the reconstructed field.
*/
void SetGridData(vtkDataObject* input) { this->SetInputData(1, input); };
/**
* standard pipeline input for port 1
* This input steers the topology of the reconstructed field.
*/
void SetGridConnection(vtkAlgorithmOutput* algOutput) { this->SetInputConnection(1, algOutput); };
//@{
/**
 * Set/Get AllowExtrapolation. If it is false, the reconstruction returns zero at points
 * that lie outside the integration radius of every vertex.
*/
vtkSetMacro(AllowExtrapolation, bool);
vtkGetMacro(AllowExtrapolation, bool);
//@}
protected:
/**
* constructor setting defaults
*/
vtkReconstructFromMoments();
/**
* destructor
*/
~vtkReconstructFromMoments() override;
int RequestUpdateExtent(vtkInformation*, vtkInformationVector**, vtkInformationVector*) override;
/** main executive of the program: reads the input, calls the functions, and produces the output.
 * @param request: the pipeline request
* @param inputVector: the input information
* @param outputVector: the output information
*/
int RequestData(vtkInformation*, vtkInformationVector**, vtkInformationVector*) override;
private:
vtkReconstructFromMoments(const vtkReconstructFromMoments&) = delete;
void operator=(const vtkReconstructFromMoments&) = delete;
/**
* the number of fields in the source
* equals NumberOfBasisFunctions * numberOfRadii
*/
size_t NumberOfFields;
/**
* the number of basis functions in the source
 * equals \sum_{o=0}^{order} dimension^o
*/
size_t NumberOfBasisFunctions;
/**
* Dimension of the source can be 2 or 3
*/
int Dimension;
/**
* Rank of the source is 0 for scalars, 1 for vectors, 3 for matrices
*/
int FieldRank;
/**
* Maximal order up to which the moments are calculated
*/
int Order;
/**
* Different integration radii in a vector
*/
std::vector<double> Radii;
/**
* If false, reconstruction outside the integration radius is set to zero
*/
bool AllowExtrapolation;
/**
 * the algorithm has two input ports
 * port 0 is the momentsData, which is a vtkImageData produced by vtkComputeMoments or vtkMomentInvariants
 * port 1 is the grid that steers the topology of the reconstructed field
*/
int FillInputPortInformation(int port, vtkInformation* info) override;
/**
 * the algorithm generates one output with the topology of the gridData
*/
int FillOutputPortInformation(int port, vtkInformation* info) override;
/**
* Make sure that the user has not entered weird values.
* @param momentsData: the moments from which the reconstruction will be computed
* @param gridData: the grid for the reconstruction
*/
void CheckValidity(vtkImageData* momentsData, vtkDataSet* gridData);
/**
 * Find out the dimension and the topology of the grid dataset.
 * @param gridData: the grid for the reconstruction
*/
void InterpretGridData(vtkDataSet* gridData);
/**
* Find out the dimension and the data type of the moments dataset.
* @param momentsData: the moments from which the reconstruction will be computed
*/
void InterpretMomentsData(vtkImageData* momentsData);
};
#endif
82f397adef8dfa11e3a1021dfd22ce1d
5439804c56e4120a09620da18cfdb7ed
1eb935717161fb6f2c5c1ef15b862815
f804c6d8bac5b773669a7a0544410fe6