Closed
Created Jun 21, 2017 by Nicholas Milef (@NickMilef), Developer

Motion blur

Overview

Motion blur is a very helpful form of visual feedback for showing an object moving at very high speed. It is currently implemented through VTK by accumulating multiple renders. Instead, we should use a velocity buffer.

Example of the difference between the two techniques: http://john-chapman-graphics.blogspot.com/2013/01/what-is-motion-blur-motion-pictures-are.html

Here's an example of an advanced implementation (which also compares different approaches): http://www.iryoku.com/next-generation-post-processing-in-call-of-duty-advanced-warfare

Note that the second set of renders shows the teapot with realistic motion blur compared to the first render (what VTK currently uses). One problem with VTK's approach, particularly for our use case, is the wagon-wheel effect (https://en.wikipedia.org/wiki/Wagon-wheel_effect).

According to the VTK documentation, the motion blur is also expensive to compute (it requires rendering the scene multiple times, unless I'm reading this incorrectly): http://www.vtk.org/doc/nightly/html/classvtkRenderWindow.html#a816914890cd363a6ebdbb1d024f89198

Requirements

One of the requirements is that we have a velocity (derivative) transformation matrix for each mesh that gets passed to the shaders. It would be great if this were added to the geometry or mesh class.
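
A minimal sketch of what this could look like on the geometry or mesh class, using GLM math types purely for illustration (the class and member names here are hypothetical, not the existing iMSTK API): keep the previous frame's model matrix next to the current one, so the render delegate can pass both to the shaders and derive the per-frame velocity.

```cpp
// Hypothetical sketch -- not the existing iMSTK API.
// The geometry/mesh class keeps the previous frame's model matrix alongside
// the current one; this pair is what the shaders need to compute velocity.
#include <glm/glm.hpp>

class Geometry
{
public:
    // Call once per rendered frame, before drawing, with the new transform.
    void updateModelMatrix(const glm::mat4& newModelMatrix)
    {
        m_prevModelMatrix = m_modelMatrix;
        m_modelMatrix     = newModelMatrix;
    }

    const glm::mat4& getModelMatrix() const     { return m_modelMatrix; }
    const glm::mat4& getPrevModelMatrix() const { return m_prevModelMatrix; }

private:
    glm::mat4 m_modelMatrix     = glm::mat4(1.0f);
    glm::mat4 m_prevModelMatrix = glm::mat4(1.0f);
};
```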

Implementation

  • Compute the velocity matrix
  • Vertex shader: transform the velocity into screen space
  • Fragment shader: write the velocity to a render target (the hardware rasterizer will interpolate the velocity)
  • Post-process fragment shader: apply a Gaussian blur along the velocity vector (a rough CPU-side reference of these steps is sketched below)
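
As a rough sketch of the logic those shader stages would implement, here is a CPU-side C++ reference using GLM types; the function names, sample count, and Gaussian falloff constant are all illustrative assumptions, not a proposed API.

```cpp
// CPU-side reference of the shader logic above; everything here is an
// illustrative assumption, not proposed iMSTK code.
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Steps 1-2: per-vertex screen-space velocity from the current and previous
// model-view-projection matrices (the per-mesh "velocity matrix" pair).
glm::vec2 screenSpaceVelocity(const glm::vec3& position,
                              const glm::mat4& mvpCurr,
                              const glm::mat4& mvpPrev)
{
    glm::vec4 currClip = mvpCurr * glm::vec4(position, 1.0f);
    glm::vec4 prevClip = mvpPrev * glm::vec4(position, 1.0f);
    // Perspective divide to NDC, then take the XY difference.
    return glm::vec2(currClip) / currClip.w - glm::vec2(prevClip) / prevClip.w;
}

// Steps 3-4: the rasterizer interpolates the per-vertex velocity into a
// velocity buffer; the post-process pass blurs each pixel by sampling the
// color buffer along that pixel's velocity with Gaussian weights.
glm::vec3 blurAlongVelocity(const std::vector<glm::vec3>& colorBuffer,
                            const std::vector<glm::vec2>& velocityBuffer,
                            int width, int height, int x, int y,
                            int numSamples = 8)
{
    const glm::vec2 velocity = velocityBuffer[y * width + x];
    glm::vec3 sum(0.0f);
    float     weightSum = 0.0f;
    for (int i = 0; i < numSamples; i++)
    {
        // Samples spread along the velocity streak, centered on the pixel.
        // NDC deltas are converted to pixels (half the buffer size per NDC unit).
        float t  = static_cast<float>(i) / (numSamples - 1) - 0.5f;
        float w  = std::exp(-t * t * 8.0f); // Gaussian falloff along the streak
        int   sx = std::clamp(x + static_cast<int>(t * velocity.x * 0.5f * width),  0, width  - 1);
        int   sy = std::clamp(y + static_cast<int>(t * velocity.y * 0.5f * height), 0, height - 1);
        sum       += w * colorBuffer[sy * width + sx];
        weightSum += w;
    }
    return sum / weightSum;
}
```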

@sreekanth-arikatla

Edited Jun 21, 2017 by Nicholas Milef