Particle Tracer fails to produce usable data in an in situ environment
The Particle Tracer filter (implemented mostly in `vtkParticleTracerBase` in VTK) has a parameter `DisableResetCache`, which is supposed to allow running the filter in situ, i.e. a mode where each call to `UpdateTimeStep(double t)` outputs a dataset at time `t`. However, the filter fails to produce satisfactory output in this context, for a few reasons:
- Instead of relying on `vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()` to acquire the current time step, the filter reads it from `vtkDataObject::DATA_TIME_STEP()`, which is not correctly populated in the context where I was running the filter (I will share the script below).
- Once the above point is fixed (the time step is set correctly), one can get the expected output in situ, but there is a memory leak originating from mishandling of the attribute `vtkParticleTracer::FirstIteration`: it gets set to true and triggers a leaking code path. If the attribute is left untouched, the output is wrong once again.
- The handling of `RequestInformation` and `RequestUpdateExtent` is questionable: they set a number of internal variables with a complicated algorithm that is hard to untangle.
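To make the first point concrete, here is a minimal pure-Python mock of the two time keys (the class, dictionaries, and method names are illustrative, not the actual VTK API): in an in situ run, only the request's update time reflects the `t` passed to `UpdateTimeStep(t)`, while the input's data time may stay stale.

```python
class MockParticleTracer:
    """Mock filter that can read the current time from two places:
    the request's UPDATE_TIME_STEP (set per UpdateTimeStep(t) call)
    or the input's DATA_TIME_STEP (whatever time the upstream data
    happens to carry). Names mimic the VTK keys but this is a sketch."""

    def __init__(self, use_update_time):
        self.use_update_time = use_update_time

    def request_data(self, request, input_info):
        if self.use_update_time:
            return request["UPDATE_TIME_STEP"]
        # Falls back to the (possibly stale) time carried by the data.
        return input_info.get("DATA_TIME_STEP", 0.0)

# In situ, the upstream source may never repopulate DATA_TIME_STEP.
request = {"UPDATE_TIME_STEP": 3.0}
stale_input = {"DATA_TIME_STEP": 0.0}  # not refreshed in situ

good = MockParticleTracer(use_update_time=True)
bad = MockParticleTracer(use_update_time=False)
print(good.request_data(request, stale_input))  # 3.0 — advances in time
print(bad.request_data(request, stale_input))   # 0.0 — stuck at the start
```

The sketch only shows why the choice of key matters: a filter keyed on the data's own time never sees the driver's requested time step.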
The Catalyst script `cata.py` I am using to test this is run with the `CxxFullExample` in ParaView. It runs a pipeline that traces a particle through a random vector field; the script should output, for each output file, a growing polyline.
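To make the expected behaviour concrete, here is a minimal pure-Python mock of that contract (the class and method names are illustrative, not the VTK/Catalyst API): each `UpdateTimeStep`-style call advects the particle one step and appends the new position, so the output polyline grows by one point per step, which requires the cache to persist across calls.

```python
import random

class MockTracer:
    """Sketch of the expected in situ contract: the particle path is
    cached across calls (never reset), and each step appends one point."""

    def __init__(self, seed_point):
        self.path = [seed_point]  # cache kept across calls

    def update_time_step(self, t):
        x, y = self.path[-1]
        # Advect through a random vector field, as in the test pipeline.
        vx, vy = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        self.path.append((x + vx, y + vy))
        return list(self.path)  # polyline up to time t

tracer = MockTracer((0.0, 0.0))
lengths = [len(tracer.update_time_step(t)) for t in range(1, 6)]
print(lengths)  # → [2, 3, 4, 5, 6]: each output longer than the last
```

If the cache were reset on each call (as happens without `DisableResetCache`), every output would collapse back to a single segment instead of the growing polyline the script expects.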
Here is the patch `particle_tracer.diff`, which yields a correct output (still with the leak). It needs to be applied on top of vtk/vtk!10104 (merged) and !6292 (merged).
The filter should be refactored so that it works correctly in situ without leaking. In addition, it should rely on the key `vtkStreamingDemandDrivenPipeline::INCOMPLETE_TIME_STEPS()` instead of the manual parameter `DisableResetCache`.
There may be backward-compatibility issues to discuss; I have little faith we would be able to maintain compatibility through the refactoring.