Rank 0 allocating too much memory when loading a multiblock dataset
For some parallel .vtm datasets, we have noticed that rank 0 allocates much more memory than all other ranks when the data is loaded in parallel. For example, with one dataset, rank 0 allocates 6 GB while every other rank allocates about 500 MB. This prevents very large datasets from being loaded, no matter how many nodes are allocated.
After some investigation, there seem to be two main factors that determine whether this happens: (1) how many .vtu files the .vtm file references, and (2) the size of the .vtu files being loaded.
If each .vtu file is ~100 KB and 20,000 .vtu files are loaded, it works fine.
If each .vtu file is ~2 MB and 20,000 .vtu files are loaded, the memory problem occurs.
If each .vtu file is ~2 MB and 15,000 .vtu files are loaded, it works fine.
If each .vtu file is ~2 MB and 16,000 .vtu files are loaded, the memory problem occurs.
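For anyone who wants to reproduce the size-dependent behavior with their own pieces, a file of a roughly controlled size can be generated with VTK's Python bindings. This is only a sketch: the point count needed to hit ~2 MB on disk is an assumption and may need tuning, and the geometry itself is meaningless.

```python
# Sketch: write a .vtu file whose on-disk size scales with n_points.
# The value of n_points below is an assumption; tune it to reach the
# target file size (~2 MB in the problematic cases above).
import vtk

n_points = 80000  # assumed value; adjust to control file size

points = vtk.vtkPoints()
grid = vtk.vtkUnstructuredGrid()
for i in range(n_points):
    points.InsertNextPoint(float(i), 0.0, 0.0)
grid.SetPoints(points)

# One VTK_VERTEX cell per point so the file also contains cell connectivity.
for i in range(n_points):
    grid.InsertNextCell(vtk.VTK_VERTEX, 1, [i])

writer = vtk.vtkXMLUnstructuredGridWriter()
writer.SetFileName("piece.vtu")  # placeholder file name
writer.SetInputData(grid)
writer.Write()
```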
This issue occurs with ParaView 5.6.2. It does not seem to occur with ParaView 5.7.0.
I've attached some synthetic datasets for testing. Each is essentially a .vtm file that repeatedly points to the same .vtu file. Included are two .vtm files that reference 15,000 and 16,000 .vtu files, respectively, along with a Python script that generates a .vtm file referencing a user-defined number of .vtu files (for further testing).
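The attached generator is roughly along the lines of the sketch below. This is not the attached script itself; the file names and the default count are placeholders. The XML layout follows the standard vtkMultiBlockDataSet .vtm format.

```python
# Sketch: write a .vtm file that references the same .vtu file `count` times.
# "piece.vtu" is a placeholder name for the shared piece file.
import sys

def write_vtm(vtm_path, vtu_path, count):
    with open(vtm_path, "w") as f:
        f.write('<?xml version="1.0"?>\n')
        f.write('<VTKFile type="vtkMultiBlockDataSet" version="1.0" '
                'byte_order="LittleEndian">\n')
        f.write('  <vtkMultiBlockDataSet>\n')
        for i in range(count):
            f.write('    <DataSet index="%d" file="%s"/>\n' % (i, vtu_path))
        f.write('  </vtkMultiBlockDataSet>\n')
        f.write('</VTKFile>\n')

if __name__ == "__main__":
    count = int(sys.argv[1]) if len(sys.argv) > 1 else 16000
    write_vtm("test.%d.vtm" % count, "piece.vtu", count)
```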
To reproduce the issue: run ParaView 5.6.2 in parallel over several processes and load test.16000.vtm. The Memory Inspector will show that rank 0 has much more memory allocated than every other rank. In comparison, all processes should have roughly the same amount of memory allocated after loading test.15000.vtm.
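For scripted (non-GUI) testing, something like the following pvbatch script should exercise the same load path. This is a sketch: OpenDataFile and UpdatePipeline are standard paraview.simple calls, the file path is a placeholder, and since the Memory Inspector is GUI-only, per-rank memory has to be watched externally (e.g. with top or ps).

```python
# Run with, e.g.: mpiexec -np 8 pvbatch load_vtm.py
# Loads the multiblock dataset across all ranks; monitor per-rank memory
# with an external tool while this runs.
from paraview.simple import OpenDataFile

reader = OpenDataFile("test.16000.vtm")  # placeholder path
reader.UpdatePipeline()
```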