Slow Rendering Performance when using OpenGL (Hardware Rendering) and MPI
This issue was created automatically from an original Mantis Issue. Further discussion may take place here.
Summary: When using ParaView in parallel with either MVAPICH2 or OpenMPI and hardware rendering (NVIDIA Quadro 5000/6000 and Tesla C2050 tested), rendering performance drops significantly. The problem was reproduced both with a VTK test program and when running on a single node with MPI merely linked into the executable (no compositing occurring).
Notes:
- Compositing has been ruled out as the cause by running on one node with MPI merely linked but never invoked (no mpirun or mpiexec called).
- No X forwarding or Mesa in play on the systems exhibiting this issue.
- Immediate mode rendering was toggled on and off with no effect when the test data set was loaded into the PV GUI.
To Reproduce:
- Build ParaView w/ MPI support and OpenGL support for the installed graphics card.
- Issue a pvbatch job with timer logs to capture run time. Recompile PV w/o MPI and re-time.
- Convert the pvbatch job to a VTK C++ program. Link against vtkRendering and time execution. Relink program against vtkParallel and time execution.
Results Seen (VTK program w/ MPI, 4 million triangles):
- HP DL585 compute node w/ Quadro 5000, MVAPICH2: ~10 fps
- HP Z800 workstation w/ Quadro 6000, OpenMPI: ~4 fps
- HP Z800 workstation w/ Tesla C2050, OpenMPI: ~2.5 fps
Results w/o MPI linked (DL585 compute node, 4 million triangles): ~114 fps