Depth buffer read using vtkOpenGLRenderWindow::GetZbufferData incorrect on wasm/Emscripten
When reading the depth buffer with `vtkOpenGLRenderWindow::GetZbufferData`, the depth values are first rendered into RGBA tuples by a shader, and each tuple is then shifted into a 32-bit integer:
```c++
for (int i = 0, j = 0; i < width * height; ++i)
{
  vtkTypeUInt32 z_int = z_data_quarters[j++];
  z_int += (z_data_quarters[j++] << 8);
#if defined(GL_DEPTH_COMPONENT24) || defined(GL_DEPTH_COMPONENT32)
  z_int += (z_data_quarters[j++] << 16);
#endif
#ifdef GL_DEPTH_COMPONENT32
  z_int += (z_data_quarters[j++] << 24);
#endif
  z_data[i] = z_int / float(0xffffff);
}
```
For each index `i`, `j` must be incremented four times to advance to the next RGBA tuple. When compiling with Emscripten, however, `GL_DEPTH_COMPONENT32` is not defined, so `j` is only incremented three times per iteration. The reads fall out of alignment with the 4-byte tuples, producing garbage depth values after the first one.
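A minimal sketch of one way to fix the misalignment (not necessarily the actual VTK patch): compute `j` from `i` so each iteration starts at the beginning of its 4-byte tuple, regardless of which depth-component macros are defined:

```c++
// Hypothetical fix sketch: the RGBA readback buffer always has a stride of
// 4 bytes per pixel, so each iteration must start at offset i * 4 even when
// GL_DEPTH_COMPONENT32 (and hence the high byte) is unavailable.
for (int i = 0; i < width * height; ++i)
{
  const int j = i * 4; // start of this pixel's RGBA tuple
  vtkTypeUInt32 z_int = z_data_quarters[j];
  z_int += (z_data_quarters[j + 1] << 8);
#if defined(GL_DEPTH_COMPONENT24) || defined(GL_DEPTH_COMPONENT32)
  z_int += (z_data_quarters[j + 2] << 16);
#endif
#ifdef GL_DEPTH_COMPONENT32
  z_int += (z_data_quarters[j + 3] << 24);
#endif
  z_data[i] = z_int / float(0xffffff);
}
```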
In addition, it seems the constant `0xffffff` should really be derived from the depth buffer size, i.e. `(2^GetDepthBufferSize()) - 1`, rather than hard-coded for a 24-bit buffer.
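As a hedged sketch of that suggestion, assuming `GetDepthBufferSize()` returns the depth bit count (16, 24, or 32), the divisor could be computed once before the loop:

```c++
// Hypothetical sketch: derive the normalization constant from the actual
// depth buffer size instead of hard-coding the 24-bit maximum 0xffffff.
// A 64-bit shift avoids overflow when the buffer is 32 bits deep.
const int depthBits = this->GetDepthBufferSize();
const float maxDepth = float((vtkTypeUInt64(1) << depthBits) - 1);
// ... then, inside the loop:
z_data[i] = z_int / maxDepth;
```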