
Small improvements on vtkBuffer, vtkQuadricDecimation and vtkSmartVolumeMapper

Alexy Pellegrini requested to merge alexy.pellegrini/vtk:improvements into master

Information:

These changes were originally authored by @LucasGandelKitware (long code quotes removed)


  • vtkBuffer.h:

As you probably know, there is an ambiguity between the MAX/MIN macro definitions and std::max()/std::min() on the Windows platform whenever "Windows.h" is included. There are several workarounds available, such as defining NOMINMAX or using #pragma push_macro("MIN") and friends, but you have to be careful about the include order to make them work in all scenarios.

We propose changing the std::min() call in vtkBuffer.h to (std::min)(), from: std::copy(this->Pointer, this->Pointer + std::min(this->Size, newsize), newArray); to: std::copy(this->Pointer, this->Pointer + (std::min)(this->Size, newsize), newArray);

This is a very common way to work around such issues; see Eigen and similar libraries.
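To make the mechanism concrete, here is a minimal standalone sketch (not VTK code) that simulates the clash with a function-like min macro and shows why the extra parentheses prevent macro expansion:

```cpp
#include <algorithm>

// Simulate the macro that "Windows.h" introduces (via windef.h) when
// NOMINMAX is not defined. With this macro active, a plain call such as
// std::min(a, b) is mangled by the preprocessor and fails to compile.
#define min(a, b) (((a) < (b)) ? (a) : (b))

int smallest(int a, int b)
{
  // Wrapping the function name in parentheses means the token `min` is
  // not immediately followed by `(`, so the function-like macro is not
  // expanded and the real std::min overload is called.
  return (std::min)(a, b);
}
```

The same parenthesized form works for (std::max)() and any other function whose name collides with a function-like macro.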

  • vtkQuadricDecimation.cxx:

(check commits)

  • Large volume rendering:

As we already discussed, volume rendering becomes unstable for volumes larger than 512³. It depends heavily on the GPU in use and the available memory.

If we look into vtkSmartVolumeMapper.cxx, we find the following code segment: (check commits)

So interactive volume rendering is controlled by the computed reduction ratio, but the problem is that GetReductionRatio() has the following implementation in vtkOpenGLGPUVolumeRayCastMapper.h:

void GetReductionRatio(double* ratio) override { ratio[0] = ratio[1] = ratio[2] = 1.0; }

This means that the reduction (low-resolution rendering) will never happen! The following trivial implementation, which we currently use, handles this case: (check commits)
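Since the actual change lives in the commits, here is only a rough, hypothetical sketch of what such a reduction-ratio computation could look like for a fixed 512³ threshold; the free-function signature, the MaxDim constant, and the per-axis clamp are all assumptions, not the code from the commits:

```cpp
// Hypothetical sketch: compute a per-axis reduction ratio so that no
// axis of the downsampled volume exceeds a fixed 512-voxel threshold.
// A ratio of 1.0 means "render that axis at full resolution".
void GetReductionRatio(const int dims[3], double ratio[3])
{
  const double MaxDim = 512.0; // assumed fixed threshold

  for (int i = 0; i < 3; ++i)
  {
    // Small volumes keep full resolution; oversized axes are scaled
    // down proportionally to fit under the threshold.
    ratio[i] = (dims[i] > MaxDim) ? MaxDim / dims[i] : 1.0;
  }
}
```

For example, a 1024×512×256 volume would yield ratios of 0.5, 1.0, and 1.0, i.e. only the oversized axis is reduced.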

In that case we also need to adjust the following code in vtkSmartVolumeMapper.cxx, because we can use the scale directly: (check commits)

As you can see, this is a very trivial implementation of the reduction-ratio computation for a fixed 512³ threshold. We think it could be enhanced by querying the available GPU memory via DirectX or similar and adjusting the reduction rate according to the currently available GPU memory. That would probably be difficult to implement portably, which is why we propose making the reduction rate configurable via the API. That way the client code can decide or configure it as required.
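As a rough illustration of the proposed configurability, a setter/getter pair on the mapper could look like the sketch below. The class and method names are purely illustrative assumptions, not an existing VTK API:

```cpp
// Hypothetical API sketch: let client code configure the interactive
// reduction factor instead of relying on a hard-coded threshold.
class VolumeReductionPolicy
{
public:
  // Accepts a factor in (0, 1]; 1.0 means no reduction. Out-of-range
  // values are clamped so a bad input cannot disable rendering.
  void SetInteractiveReductionFactor(double f)
  {
    this->Factor = (f < 0.01) ? 0.01 : (f > 1.0 ? 1.0 : f);
  }

  double GetInteractiveReductionFactor() const { return this->Factor; }

private:
  double Factor = 1.0; // default: full resolution
};
```

Client code that knows its GPU budget (e.g. via a platform-specific query) could then call SetInteractiveReductionFactor(0.5) to halve the interactive resolution, while the default keeps today's behavior.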


I hope a single pull request for multiple small changes is OK.
