Exodus dataset with multiple blocks very slow
ParaView 5.6.0 has a severe performance regression with large data. This is impacting numerous users.
RELEASE STOPPER BUG FOR 5.7.0.
From what I can tell, when opening a dataset with a large number of files (it shows up at 64 files, and the cost appears to grow super-linearly from there) and a large number of blocks (hundreds to thousands), our initial dataset header read has gone from instantaneous to dozens of seconds or many minutes. This is keeping users from moving off of 5.4.1. Also, from what I can tell, the slowdown happened between 5.4.1 and 5.5.2. Here is how to replicate:
- Create a junk directory somewhere. Move into this directory many.e (UUR; too big to attach, will send to Cory and Utkarsh) and script.py (I will attach).
- Run the script: `python script.py`. This creates a 64-file dataset (it can also create an 8- or 256-file dataset). Now we have a dataset to work with.
- On Linux, with 5.6.0 and the builtin server, run ParaView.
- File/Open manyMany.e.64.*. Count how many seconds it takes for the Apply button to become available. It should be over 10 seconds, and goes to something like a minute with 256 files.
- On Linux, with 5.4.1 and the builtin server, run ParaView.
- File/Open manyMany.e.64.*. Count how many seconds it takes for the Apply button to become available. It should be under 1 second, and it stays under 1 second with 256 files.
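For readers without the attachment, a hypothetical stand-in for script.py might look like the sketch below. It only reproduces the `manyMany.e.64.*` file count and naming pattern by copying one source file N times; the real attached script presumably writes a proper spread-file decomposition from many.e, so treat the function name and behavior here as assumptions.

```python
import shutil


def make_many(src, base, nfiles):
    """Copy `src` into an nfiles-way spread-file naming pattern.

    Hypothetical stand-in for the attached script.py: Exodus spread
    files are conventionally named <base>.<nfiles>.<rank>, with the
    rank zero-padded to the width of nfiles - 1.
    """
    width = len(str(nfiles - 1))
    for rank in range(nfiles):
        shutil.copy(src, f"{base}.{nfiles}.{rank:0{width}d}")


# e.g. make_many("many.e", "manyMany.e", 64)
# would produce manyMany.e.64.00 through manyMany.e.64.63
```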
When we run at scale (thousands of files, around a thousand blocks and/or sets), this becomes unusable. Further, the slowdown doesn't appear to parallelize, so you can't throw more nodes at it.
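To put numbers on the regression without stopwatching the GUI, the "count the seconds to Apply" step can be timed headlessly. The sketch below is generic: `read_metadata` is a placeholder I'm introducing for whatever forces the header read (under pvpython that would be something like constructing the reader over the file list and updating its pipeline information); the helper itself is just a stopwatch, not ParaView API.

```python
import time


def time_metadata_read(read_metadata, files):
    """Return wall-clock seconds spent in read_metadata(files).

    `read_metadata` stands in for the real header-read step (e.g.
    building the Exodus reader over the file list in pvpython);
    it is an assumed hook, not part of ParaView itself.
    """
    start = time.perf_counter()
    read_metadata(files)
    return time.perf_counter() - start
```

Running this against the same `manyMany.e.64.*` list under 5.4.1 and 5.6.0 builds would give comparable numbers for the metadata pass alone.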