ParaView / ParaView · Issues · #20873 · Closed

Issue created Aug 04, 2021 by W. Alan Scott (@wascott), Maintainer

IOSS reader is slow reading headers in parallel

The IOSS reader is slow when reading a multi-file dataset in parallel. I have a test case with 512 files, 344 timesteps, and about 100 blocks. It reads as follows (times in min:sec):

  • Header load (i.e., before the Apply button is active): IOSS reader - 3:15, Exodus reader - 0:06
  • Apply to data loaded: IOSS reader - 0:30, Exodus reader - 0:34
  • Timestep 50: IOSS reader - 0:11, Exodus reader - 0:08

What is happening is probably twofold:

  • My guess is that the IOSS reader is reading the header for every file. The Exodus reader reads one header and passes that data around to all of the other file header structures (see the sketch after this list).
  • To make matters worse, time data is stored throughout the files. To acquire time data for the Information tab, I suspect the IOSS reader is reading all time data from every single file in the dataset. It then throws this data away everywhere except on process 0.
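
For illustration, here is a minimal sketch of the read-once-and-broadcast pattern I'd expect, as a standalone MPI C++ program. The file name, the ReadTimeValuesFromFile helper, and the timestep count are all hypothetical placeholders, not ParaView or IOSS API; the point is just the shape of the fix: rank 0 reads the metadata once and MPI_Bcast's it to every other rank, so the ranks never redundantly open all the files.

```cpp
// Minimal sketch (not ParaView code): rank 0 reads metadata once and
// broadcasts it, instead of every rank re-reading every file's header.
#include <mpi.h>
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for an Exodus/IOSS header read; in the real
// readers this would be a library call that opens and parses one file.
static std::vector<double> ReadTimeValuesFromFile(const char* /*fileName*/)
{
  std::vector<double> times(344); // pretend the file holds 344 timesteps
  for (std::size_t i = 0; i < times.size(); ++i)
  {
    times[i] = static_cast<double>(i);
  }
  return times;
}

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::vector<double> times;
  int count = 0;
  if (rank == 0)
  {
    // Only rank 0 touches the disk for metadata.
    times = ReadTimeValuesFromFile("dataset.e.512.000"); // hypothetical name
    count = static_cast<int>(times.size());
  }

  // Share the metadata instead of re-reading it on every rank.
  MPI_Bcast(&count, 1, MPI_INT, 0, MPI_COMM_WORLD);
  times.resize(count);
  MPI_Bcast(times.data(), count, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  std::printf("rank %d has %d timesteps without opening a file\n", rank, count);
  MPI_Finalize();
  return 0;
}
```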

@cory.quammen As I have users with up to a million files, this is a showstopper bug for the 5.10 release.

I have a dataset that replicates this issue, but it is 238 Gbytes... I can share it with Kitware. (I could trim it down to one or two timesteps...)

@utkarsh.ayachit

Edited Aug 05, 2021 by W. Alan Scott