HDF5 reader is losing its brains on a remote server
The HDF5 reader is losing its brains when loading a single file of particles with multiple ranks on a remote server. Here is how to replicate:
- 5.9.1 / 5.10.0-RC1, Linux, remote server (I have 16 ranks; it will also show with 2).
- Load fields.exo. Apply. Set the representation to Wireframe. The only reason we are loading this file is that the issue seems to follow the cells in it; you do not actually need it loaded to see the issue.
- Load particles.h5part. Apply.
- Change the time to 200.
- Color by ProcessId, and then by Solid Color, White. Notice that the points from the single file were spread across the 16 ranks, and note where the points are. (You may want to take a screenshot so you don't have to keep bouncing back and forth.)
- Move forward one timestep.
Most of the points have disappeared! Notice that what is left is all on ProcessId 0. Further, I believe those are the points that were on process zero earlier.
- Move forward one timestep.
More of the points have disappeared! This will quickly go on until no points are visible.
- On the particles.h5part reader in the Pipeline Browser, Reload Files.
Now, all of the points for this timestep are visible once again!
To see what should be happening, open particles.h5part with the builtin server, go to timestep 200, and start moving forward in time.
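For reference, the reproduction steps above can be sketched as a pvpython script. This is a minimal sketch, assuming a running pvserver; the host, port, and file paths are placeholders, not part of the original report, and the script must be run against a ParaView installation:

```python
# Hedged sketch of the repro steps, using ParaView's paraview.simple API.
# Host, port, and file paths below are placeholders (assumptions).
from paraview.simple import *

# Connect to the remote server (replicates the "remote server" setup).
Connect('myserver', 11111)

# Load the Exodus file and show it as a wireframe.
fields = OpenDataFile('fields.exo')
fieldsDisplay = Show(fields)
fieldsDisplay.Representation = 'Wireframe'

# Load the particle file.
particles = OpenDataFile('particles.h5part')
particlesDisplay = Show(particles)

# Jump to time 200 and color the points by ProcessId.
scene = GetAnimationScene()
scene.UpdateAnimationUsingDataTimeSteps()
scene.AnimationTime = 200
ColorBy(particlesDisplay, ('POINTS', 'ProcessId'))
Render()

# Step forward one timestep; the points should persist, but with the
# bug most of them disappear (only ProcessId 0's points remain).
scene.GoToNext()
Render()
```

With a builtin (non-remote) connection the same script shows the expected behavior, matching the comparison step above.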
I will pass the dataset to Cory and Utkarsh.
Edited by W. Alan Scott