CTH memory use grows with compositeIndex
This issue was created automatically from an original Mantis Issue. Further discussion may take place here.
The CTH AMR memory footprint grows at a rate that appears proportional to the total compositeIndex. We believe some kind of whole-dataset CTH metadata is being replicated across all pvserver processes.
This is a showstopper for CTH visualizations beyond about 100 million cells (depending on the block-to-cell ratio).
This is easy to test using a dataset that I have access to. (This dataset may not be released to Kitware - sorry. If necessary, I should be able to have someone create a releasable dataset.)
How I replicated the problem: ParaView master, remote server with 128 cores (processes), Linux client. I then created soft links into a dataset, varying the number of files that were read. The maximum was about 400 files, roughly 250 million cells, for a total of #### blockIns and #### compositeIndexes.
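A minimal sketch of the soft-link trick described above: expose only the first N files of a dataset to the reader by symlinking them into a separate directory. The directory layout, file names, and `.spcth` extension here are illustrative stand-ins, not taken from the report.

```python
# Sketch: make only the first N files of a dataset visible by symlinking
# a subset into a scratch directory. All paths/names are hypothetical.
import os
import tempfile

full = tempfile.mkdtemp()    # stand-in for the real dataset directory
subset = tempfile.mkdtemp()  # directory the reader will actually open

# Create a dummy 400-file dataset (placeholder for the real CTH output).
for i in range(1, 401):
    open(os.path.join(full, f"run.{i:03d}.spcth"), "w").close()

N = 128  # number of files to expose for this test run
for name in sorted(os.listdir(full))[:N]:
    os.symlink(os.path.join(full, name), os.path.join(subset, name))

visible = len(os.listdir(subset))
print(f"{visible} files visible to the reader")
```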
Scaling went as follows, using the new Memory Inspector. Numbers are for one node, 8 processes in 12 GBytes of memory space:
- Nothing open: 1.04 GBytes; composite index 0.
- One file open: 1.61 GBytes; composite index 844. Empty cores: ?? GBytes.
- 8 files open: 1.97 GBytes; composite index 1.97. Empty cores: 1.41 GBytes.
- 32 files open: 2.53 GBytes; composite index 26242. Empty cores: 2.03 GBytes.
- 128 files open: 4.88 GBytes; composite index 82717. No empty cores.
- 256 files open: crash. Note that adding more nodes never works, whereas using fewer cores per node does.
Note that, in theory, with 128 cores and 128 files open, I now have one file open per core. I would expect 128 files (with all cores full) to take about the same memory per core as 8 files.
Throwing this into a spreadsheet, it appears that every additional compositeIndex adds about 5 KBytes of memory use per process. Ouch!
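A rough reconstruction of that spreadsheet estimate, assuming a simple least-squares fit over the per-node measurements listed above (the 8-file point is omitted because its composite index value was not recorded cleanly); dividing the per-node slope by the 8 processes per node gives the per-process cost:

```python
# Rough check of the ~5 KBytes/compositeIndex figure from the measurements
# above. Each point is (total compositeIndex, resident GBytes on one
# 8-process node). The 8-file point is skipped (garbled index value).
points = [(0, 1.04), (844, 1.61), (26242, 2.53), (82717, 4.88)]

n = len(points)
mx = sum(x for x, _ in points) / n
my = sum(y for _, y in points) / n
sxy = sum((x - mx) * (y - my) for x, y in points)
sxx = sum((x - mx) ** 2 for x, _ in points)

slope_gb_per_index = sxy / sxx                       # per node (8 processes)
kb_per_index_per_process = slope_gb_per_index * 1024**2 / 8
print(f"~{kb_per_index_per_process:.1f} KBytes per compositeIndex per process")
```

This lands in the neighborhood of 5 KBytes per compositeIndex per process, consistent with the estimate above.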
Sorry for the confusing bug report - ask me for details.
Listing this as a crash, since it does crash on large data.