ParaView / ParaView · Issues · #20924

Closed
Issue created Sep 01, 2021 by Patrick Kopper (@patrick.kopper)

Remote rendering/processing in PV 5.9.1 crashes on unconnected datasets

Loading a .vtu dataset that contains unconnected points/vertices into a parallel pvserver, with the geometry size above the Remote Render Threshold, and then performing an action that involves GenerateCuts crashes the server. I did not systematically test operations involving other VTK calls.

The same dataset is handled without issues when pvserver runs in serial mode. Please also let me know if there is another workaround that avoids the GenerateCuts call.
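
For reference, since the crash only occurs when the data is rendered remotely, raising the threshold so the client renders the geometry locally should sidestep the server-side ordered-compositing path where GenerateCuts is called. Below is a minimal pvpython sketch of that mitigation; the GetSettingsProxy / RemoteRenderThreshold names are assumed from the 5.9 Python API and should be verified against your build:

  # Hypothetical mitigation sketch: force local (client-side) rendering so the
  # ordered-compositing redistribution, and hence GenerateCuts, is not reached.
  # Proxy/property names are assumptions based on ParaView 5.9.
  from paraview.simple import *

  settings = GetSettingsProxy('RenderViewSettings')
  settings.RemoteRenderThreshold = 102400  # MB; choose a value above the geometry size
  Render()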

Tested with ParaView 5.9.1 (binary and source builds, both server and client). The server was running on either CentOS 7.9 or Ubuntu 18.04.5, the client on Arch Linux 2021.09.01. The same dataset is handled without issues in ParaView 5.6 and 5.8.

Steps to reproduce

  1. Start a parallel pvserver (preferably on 32+ cores) and connect a client
  2. Set the Remote Render Threshold to 0 (or any value smaller than the geometry size)
  3. Load the attached .vtu vtkUnstructuredGrid
  4. Change the representation type to Point Gaussian
    • Depending on the dataset and configuration, filters (e.g. Clip) also crash; a pvpython sketch of these steps follows below
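
The steps above can be scripted approximately as follows. This is a sketch only; the server address, the local path to the attached file, and the 'Point Gaussian' representation string are assumptions rather than values from the original report:

  # Approximate pvpython equivalent of the reproduction steps (sketch).
  from paraview.simple import *

  # Step 1: connect to an already running parallel pvserver (host/port assumed).
  Connect('localhost', 11111)

  # Step 2: force remote rendering for any geometry size.
  settings = GetSettingsProxy('RenderViewSettings')
  settings.RemoteRenderThreshold = 0

  # Step 3: load the attached dataset (path assumed).
  reader = OpenDataFile('vtkUnstructuredGrid_Vertices.vtu')

  # Step 4: switch to Point Gaussian; rendering then crashes the server.
  view = GetActiveViewOrCreate('RenderView')
  display = Show(reader, view)
  display.Representation = 'Point Gaussian'
  Render()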

Dataset

The attached dataset contains unconnected vertices in a vtkUnstructuredGrid. The same issue occurs when the point cloud is written as vtkPolyData. Attachment: vtkUnstructuredGrid_Vertices.vtu
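
For illustration, a comparable point cloud (not the attached file itself) can be generated with a short VTK Python script; the point count and output file name below are arbitrary:

  # Sketch: build a vtkUnstructuredGrid that holds only unconnected VTK_VERTEX
  # cells and write it as .vtu (an approximation of the attached dataset).
  import random
  import vtk

  points = vtk.vtkPoints()
  grid = vtk.vtkUnstructuredGrid()
  for i in range(100000):
      points.InsertNextPoint(random.random(), random.random(), random.random())
      grid.InsertNextCell(vtk.VTK_VERTEX, 1, [i])  # one vertex cell per point, no connectivity
  grid.SetPoints(points)

  writer = vtk.vtkXMLUnstructuredGridWriter()
  writer.SetFileName('unconnected_vertices.vtu')  # arbitrary output name
  writer.SetInputData(grid)
  writer.Write()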

Crash Log

Client connected.
terminate called after throwing an instance of 'std::runtime_error'
  what():  -2147483648 -nan -0.0189998

Loguru caught a signal: SIGABRT
Stack trace:
32            0x401667 /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/pvserver-real() [0x401667]
31      0x2b05d2bba555 __libc_start_main + 245
30            0x4015b0 /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/pvserver-real() [0x4015b0]
29      0x2b05d51820b2 vtkTCPNetworkAccessManager::ProcessEventsInternal(unsigned long, bool) + 690
28      0x2b05d61db475 vtkMultiProcessController::ProcessRMIs(int, int) + 597
27      0x2b05d61dad13 vtkMultiProcessController::ProcessRMI(int, void*, int, int) + 291
26      0x2b05d483e4a0 vtkPVSessionServer::OnClientServerMessageRMI(void*, int) + 272
25      0x2b05d48342e5 vtkPVSessionBase::ExecuteStream(unsigned int, vtkClientServerStream const&, bool) + 53
24      0x2b05d48352bb vtkPVSessionCore::ExecuteStream(unsigned int, vtkClientServerStream const&, bool) + 59
23      0x2b05d4835482 vtkPVSessionCore::ExecuteStreamInternal(vtkClientServerStream const&, bool) + 242
22      0x2b05d55c8ddd vtkClientServerInterpreter::ProcessStream(vtkClientServerStream const&) + 29
21      0x2b05d55c8b3e vtkClientServerInterpreter::ProcessOneMessage(vtkClientServerStream const&, int) + 1294
20      0x2b05d55c840d vtkClientServerInterpreter::ProcessCommandInvoke(vtkClientServerStream const&, int) + 1229
19      0x2b05d55c7da9 vtkClientServerInterpreter::CallCommandFunction(char const*, vtkObjectBase*, char const*, vtkClientServerStream const&, vtkClientServerStream&) + 345
18      0x2b05d32e65d8 vtkPVRenderViewCommand(vtkClientServerInterpreter*, vtkObjectBase*, char const*, vtkClientServerStream const&, vtkClientServerStream&, void*) + 8312
17      0x2b05dab38bf1 vtkPVRenderView::StillRender() + 97
16      0x2b05dab44503 vtkPVRenderView::Render(bool, bool) + 659
15      0x2b05dab498a2 vtkPVRenderViewDataDeliveryManager::RedistributeDataForOrderedCompositing(bool) + 4850
14      0x2b05ea9ff090 vtkDIYKdTreeUtilities::GenerateCuts(std::vector<vtkDataObject*> const&, int, bool, vtkMultiProcessController*, double const*) + 512
13      0x2b05ea9fd153 vtkDIYKdTreeUtilities::GenerateCuts(std::vector<vtkSmartPointer<vtkPoints>, std::allocator<vtkSmartPointer<vtkPoints> > > const&, int, vtkMultiProcessController*, double const*) + 3699
12      0x2b05ea9fbd44 /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/../lib/../lib/libvtkFiltersParallelDIY2-pv5.9.so.1(+0x2ad44) [0x2b05ea9fbd44]
11      0x2b05eaa2475b /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/../lib/../lib/libvtkFiltersParallelDIY2-pv5.9.so.1(+0x5375b) [0x2b05eaa2475b]
10      0x2b05eaa10814 /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/../lib/../lib/libvtkFiltersParallelDIY2-pv5.9.so.1(+0x3f814) [0x2b05eaa10814]
9       0x2b05ea9ef96f /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/../lib/../lib/libvtkFiltersParallelDIY2-pv5.9.so.1(+0x1e96f) [0x2b05ea9ef96f]
8       0x2b05ea9f5690 /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/../lib/../lib/libvtkFiltersParallelDIY2-pv5.9.so.1(+0x24690) [0x2b05ea9f5690]
7       0x2b05ea9fb291 /apps/ParaView-5.9.1-MPI-Linux-Python3.8-64bit/bin/../lib/../lib/libvtkFiltersParallelDIY2-pv5.9.so.1(+0x2a291) [0x2b05ea9fb291]
6       0x2b05d8adcc53 /lib64/libstdc++.so.6(+0x5ec53) [0x2b05d8adcc53]
5       0x2b05d8adca33 /lib64/libstdc++.so.6(+0x5ea33) [0x2b05d8adca33]
4       0x2b05d8adca06 /lib64/libstdc++.so.6(+0x5ea06) [0x2b05d8adca06]
3       0x2b05d8adea95 __gnu_cxx::__verbose_terminate_handler() + 357
2       0x2b05d2bcfa78 abort + 328
1       0x2b05d2bce387 gsignal + 55
0       0x2b05d2bce400 /lib64/libc.so.6(+0x36400) [0x2b05d2bce400]
(  76.248s) [pvserver.0      ]                       :0     FATL| Signal: SIGABRT
(  76.298s) [pvserver.8      ] vtkMPICommunicator.cxx:68    WARN| MPI had an error
------------------------------------------------
Unknown error class, error stack:
PMPI_Iprobe(123)..........................: MPI_Iprobe(src=MPI_ANY_SOURCE, tag=0, comm=0x84000004, flag=0x7ffc7fa0a1c0, status=0x7ffc7fa0a230) failed
MPIDI_CH3i_Progress_test(114).............: an error occurred while handling an event returned by MPIDI_CH3I_Sock_Wait()
MPIDI_CH3I_Progress_handle_sock_event(479): 
MPIDI_CH3I_Socki_handle_read(4014)........: connection failure (set=0,sock=4,errno=104:Connection reset by peer)
------------------------------------------------
application called MPI_Abort(comm=0x84000004, 203011159) - process 8
Edited Sep 01, 2021 by Patrick Kopper