IOSS reader is slow in parallel on Exodus reads
I have noticed a slowdown in the IOSS reader when reading Exodus datasets in parallel. From what I can tell, the IOSS reader is just under half as fast as the legacy Exodus reader. Here is how I tested. Note: I will give these files and scripts to Utkarsh and Cory. They are UUR, so they can be shared with anyone.
First, record a Python trace of each load (start a trace, load the data, save a screenshot, then stop and save the trace). Keep both trace scripts.
Environment: Linux, remote server (16 ranks), ParaView 5.9.1.
The reason I am comparing 5.9.1 against master is that my backend builds are static, so I cannot load the Legacy Exodus reader plugin on master. Thus I get at the Exodus reader through 5.9.1, and the IOSS reader through master.
- Load can.e.16.[0-15]. Turn all variables on. Apply.
This is a dataset of 16 copies of the full can; I'm trying to make something reasonably large. The headers are, of course, exactly the same in each file, and so is the data.
Save a screenshot. Call it deleteMeSomething.
Stop Trace. Save the trace as deleteMeLoadSpeed5.9.py
Repeat with master (mid-December 2021).
Save the trace as deleteMeLoadSpeedMaster.py
Now, add timing information to both traces.
Run pvbatch on both traces with 16 ranks.
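As a rough sketch, the timing can be added to each trace by wrapping the recorded body in wall-clock timestamps. The commented reader lines below are placeholders for whatever the trace actually recorded (reader name and file list are assumptions); only the standard `time` module is added:

```python
import time

start = time.time()

# --- body of the recorded trace goes here, for example (hypothetical): ---
# reader = IOSSReader(FileName=['can.e.16.0', ..., 'can.e.16.15'])
# reader.UpdatePipeline()
# -------------------------------------------------------------------------

elapsed = time.time() - start
print("load time: %.2f seconds" % elapsed)
```

Each instrumented trace can then be launched on 16 ranks with something like `mpiexec -np 16 pvbatch deleteMeLoadSpeed5.9.py` (MPI launcher name and paths will vary by system).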
For me, the Exodus reader (5.9.1) reads in about 18 seconds, while the IOSS reader (master) takes about 40 seconds.