1. 05 Feb, 2016 1 commit
  2. 29 Jan, 2016 1 commit
• Refactored and updated the way algorithms are updated. · f020ebb6
      Berk Geveci authored
The way algorithms were updated (made to execute) with
request meta-data (such as update extent) was very error
prone and counter-intuitive. Added new methods to make
updating with meta-data easier. I also deprecated a number
of methods to set request meta-data. This will encourage
developers to migrate to the new API, which is less error-prone.
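A minimal self-contained sketch of the pattern this commit introduces (mock classes, not the real VTK API; in VTK the new convenience methods live on vtkAlgorithm): instead of the caller poking request keys into the output information and then calling Update(), one call carries the request meta-data with the update.

```cpp
#include <array>

// Mock of the pattern: request meta-data passed as an argument instead of
// being set on the output information by hand (the error-prone old way).
struct MockAlgorithm {
  std::array<int, 6> RequestedExtent{{0, -1, 0, -1, 0, -1}};
  bool Executed = false;

  // Old style (deprecated by the commit): caller sets keys, then updates.
  void SetUpdateExtent(const int ext[6]) {
    for (int i = 0; i < 6; ++i) RequestedExtent[i] = ext[i];
  }
  void Update() { Executed = true; }

  // New style: a single call both records the request and executes.
  void UpdateExtent(const int ext[6]) {
    SetUpdateExtent(ext);
    Update();
  }
};
```

The caller can no longer forget one of the two steps, which is the "less error-prone" property the message describes.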
  3. 27 Aug, 2015 1 commit
  4. 18 Jul, 2015 1 commit
  5. 31 Mar, 2015 1 commit
• Redesign "vtkGhostLevels" arrays and related ghost functionalities. · 4dee0274
      Dan Lipsa authored
Co-authored-by: Yuanxin Liu <leo.liu@kitware.com>
Co-authored-by: Berk Geveci <berk.geveci@kitware.com>
- The semantics of each unsigned char in the ghost arrays change:
  instead of storing a numeric value representing how far a cell is
  from the boundary, it is now a bit field specified by
  vtkDataSetAttributes::CellGhostTypes and
  vtkDataSetAttributes::PointGhostTypes.  The bit field is consistent
  with the VisIt specs.
- Previously, filters stripped all ghost cells they requested from upstream
  before finalizing the output. This is no longer done.
- vtkUniformGrid previously supported blanking through the member arrays
  vtkUniformGrid::CellVisibility and
  vtkUniformGrid::PointVisibility. These arrays are removed and the
  blanking functionality is supported through the new ghost arrays.
- The "vtkGhostLevels" arrays for cell and point data are renamed to
  vtkDataSetAttributes::GhostArrayName() ("vtkGhostType").
- The version for VTK Legacy files is increased to 4.0 and the version for
  VTK XML files is increased to 2.0. When reading older files, we
  convert the vtkGhostLevels array to vtkGhostType.
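A short sketch of the bit-field semantics described above. The flag names mirror vtkDataSetAttributes::CellGhostTypes, but the numeric values here are illustrative assumptions; the authoritative values live in VTK's headers.

```cpp
// Illustrative bit flags; the real constants are defined in
// vtkDataSetAttributes::CellGhostTypes / PointGhostTypes.
enum CellGhostTypes : unsigned char {
  DUPLICATECELL = 1,  // cell is also present on another processor
  HIDDENCELL = 32,    // cell is blanked (replaces vtkUniformGrid visibility arrays)
};

// Under the old scheme an unsigned char stored a ghost "level" (a distance);
// under the new scheme it is a bit field, so independent properties combine.
inline bool isGhost(unsigned char v) { return (v & DUPLICATECELL) != 0; }
inline bool isBlanked(unsigned char v) { return (v & HIDDENCELL) != 0; }
```

A single array entry can now say "duplicated and blanked" at once, which a numeric level could not express.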
  6. 02 Sep, 2014 1 commit
• Added support for specifying split mode. · d8bdbb60
      Berk Geveci authored
      When the vtkStreamingDemandDrivenPipeline automatically
      partitions a structured request because of CAN_PRODUCE_SUB_EXTENT
      being set, it always used vtkExtentTranslator::BLOCK_MODE. I added
      support for downstream filters to specify a split mode to be used
      as an alternative. This is done by setting the UPDATE_SPLIT_MODE
      key in RequestUpdateExtent.
      Change-Id: Ie8dc2af898352adeaa261405d483ac619c035021
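A hypothetical analogue of what a split mode does: given a whole structured extent and a piece index, carve out a sub-extent. Only a Z-slab style split is sketched here (a plain function, not the real vtkExtentTranslator API); BLOCK_MODE would instead split along all three axes.

```cpp
// Split the whole extent {xmin,xmax,ymin,ymax,zmin,zmax} into numPieces
// slabs along Z and return the sub-extent for the given piece.
void splitZSlab(const int whole[6], int piece, int numPieces, int sub[6]) {
  for (int i = 0; i < 6; ++i) sub[i] = whole[i];
  int zdim = whole[5] - whole[4] + 1;
  int size = zdim / numPieces;
  int rem = zdim % numPieces;  // first `rem` pieces get one extra layer
  int lo = whole[4] + piece * size + (piece < rem ? piece : rem);
  sub[4] = lo;
  sub[5] = lo + size - 1 + (piece < rem ? 1 : 0);
}
```

The commit's point is that a downstream filter can now request which of these strategies the executive uses, instead of always getting the block split.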
  7. 01 Jul, 2014 1 commit
• PERF: Removed unnecessary garbage collection. · a4ee6414
      Berk Geveci authored
Information iterators increment/decrement the reference count of their
information objects by default. This leads to lots of garbage
collection passes which, when not necessary, cause performance
degradation. Used a weak reference instead. In these cases, we
know for sure that someone else (the executive) has a reference
to the information objects.
      Change-Id: I9bc0f6a09b96c726b5d2a3bb7e78f3f7c67152b2
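A sketch of the idea using standard smart pointers (VTK uses its own reference counting, not std::shared_ptr): an iterator holding weak references does not bump the reference count of the objects it visits, which is safe when a known owner keeps them alive for the iterator's lifetime.

```cpp
#include <memory>
#include <vector>

struct Info { int value; };

// The iterator stores weak references, so adding an object to it leaves
// the object's reference count untouched; the owner (the "executive" in
// the commit's terms) is trusted to keep the object alive.
struct WeakIterator {
  std::vector<std::weak_ptr<Info>> items;
  void add(const std::shared_ptr<Info>& p) { items.push_back(p); }
};
```

With strong references, every add/remove would churn the count and, in VTK's case, trigger garbage-collection passes.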
  8. 19 Jun, 2014 1 commit
• Executive was incorrectly marking the update extent as initialized. · 8c0040fa
      Berk Geveci authored
vtkStreamingDemandDrivenPipeline was setting
UPDATE_EXTENT_INITIALIZED when updating the whole extent,
which was causing pipelines not to re-execute when extents
changed. Fixed.
      Change-Id: Ic696ba7e8290f29450864b2c079a2686747e26f4
  9. 12 Jun, 2014 2 commits
• PERF: Removed unnecessary function call. · 38851122
      Berk Geveci authored
The result of NeedToExecuteData() was already cached in N2E and
is not supposed to change. Replaced the function call with the cached value.
      Change-Id: I295634db6239f626404854fb3aa66a8f74d8bb9a
• Added new meta-data and request capability to the pipeline. · be279517
      Berk Geveci authored
Added the ability for keys to copy themselves during pipeline passes.
This allows arbitrary meta-data to flow back and forth in the pipeline
without changing the executives. Although this is similar to KEYS_TO_COPY,
it works significantly better. KEYS_TO_COPY has one major flaw: it
does not work if the algorithm that is supposed to add the key to
KEYS_TO_COPY already executed and a filter downstream is causing re-execution.
Also added the ability for keys to store meta-data in a data object's
information, allowing them to check it later in NeedToExecuteData.
This allows filters to send a request upstream to potentially cause
a new execution of the pipeline without changing the executives. This was
not possible before.
This was done by adding 3 new virtual methods to vtkInformationKey:
NeedToExecute(), StoreMetaData() and CopyDefaultInformation(). Normally
these do nothing. Subclasses can override them to implement new
behavior. These methods are called by the pipeline at certain times:
CopyDefaultInformation() during the pipeline's CopyDefaultInformation pass,
StoreMetaData() after execution (REQUEST_DATA), and NeedToExecute() during
NeedToExecuteData.
      Change-Id: Ice192eb0be9f0a47b512c1b0fa3cfbede74d81e1
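The hook mechanism above can be sketched with a simplified key class (the real hooks are virtuals on vtkInformationKey; the subclass and member names here are made up for illustration):

```cpp
// A key base class with do-nothing virtual hooks that the pipeline calls
// at fixed points, plus a subclass that opts in to custom behavior.
struct Key {
  virtual ~Key() = default;
  virtual void CopyDefaultInformation() {}        // during request copying
  virtual void StoreMetaData() {}                 // after REQUEST_DATA
  virtual bool NeedToExecute() { return false; }  // consulted before re-execution
};

// Hypothetical subclass: re-execute until its meta-data has been stored.
struct TimeRangeKey : Key {
  bool stored = false;
  void StoreMetaData() override { stored = true; }
  bool NeedToExecute() override { return !stored; }
};
```

Because the defaults do nothing, existing keys are unaffected; only keys that override a hook change pipeline behavior.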
  10. 21 May, 2014 3 commits
• Moved key. · ac6f1854
      Berk Geveci authored
      It made more sense to have both CAN_PRODUCE_SUB_EXTENT and
      CAN_HANDLE_PIECE_REQUEST in vtkAlgorithm.
      Change-Id: Ie64bb3e7c8417113bfb7f677c10d8feb5fdddc39
• Removed unused keys. · e27ae6a9
      Berk Geveci authored
      Change-Id: I24bf3b0959bf6e22543bb526b26d9fda14ab4c20
• Refactored how pieces and extents are handled. · 1a0b4e9d
      Berk Geveci authored
Refactored the way VTK converts between pieces and structured
extents. Before, extent translators were used when the pipeline
moved from structured to unstructured data, converting piece
requests to extent requests. This caused many problems with filters
that altered extents, mainly a lot of redundant IO due to
repartitioning of different extents. This became extremely
cumbersome to manage when running distributed. The new behavior
pushes the extent translation all the way to the readers, and
only when readers are able to read a subset. This works much
better. The only downside is that filters need to be able to
handle data extents different than update extents. Most filters
can do this but many imaging filters cannot. Those that are
needed in parallel will have to be updated.
As part of this work, I also removed MAXIMUM_NUMBER_OF_PIECES,
which had been reduced to being a boolean: 1 for serial sources, -1 for
parallel sources. In its place I added a CAN_HANDLE_PIECE_REQUEST key.
This key, produced by a source, tells the executive that the source is
able to handle piece requests. It is a source-only key produced in
RequestInformation and is not propagated downstream. If this key is
not present, the executive will only execute the source for piece 0
to produce the entire data. It is then up to the user to add a filter
that splits the data for other piece requests. The only exception to
this is when CAN_PRODUCE_SUB_EXTENT is present, in which case the
executive will split using an extent translator AT THE source - not downstream.
      Change-Id: I8db4040289ff87331adeecded4a738313d9b52df
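The executive's decision described above can be summarized as a small function, with the two source keys modeled as booleans (a sketch of the logic, not VTK code):

```cpp
enum class Action {
  ExecuteWholePiece0,   // no key: piece 0 produces the entire data
  SplitAtSource,        // CAN_PRODUCE_SUB_EXTENT: extent translator at the source
  HandlePieceRequest,   // CAN_HANDLE_PIECE_REQUEST: source splits on its own
};

Action decide(bool canHandlePieceRequest, bool canProduceSubExtent) {
  if (canHandlePieceRequest) return Action::HandlePieceRequest;
  if (canProduceSubExtent) return Action::SplitAtSource;
  return Action::ExecuteWholePiece0;
}
```

In the no-key case, splitting for other pieces becomes the user's job, via a filter inserted downstream.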
  11. 06 Feb, 2014 1 commit
• Removed unnecessary recursion in pipeline streaming. · 9c95cee8
      Berk Geveci authored
When filters set CONTINUE_EXECUTING, the pipeline used recursion
to continue executing. I don't think that this was the original
intent. Fixed it with a one-line change.
      Change-Id: I98da463e3bde81fa10cde95a82827de357ff3f07
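The effect of the fix can be sketched with a mock filter (made-up types; the real mechanism is the CONTINUE_EXECUTING key on vtkStreamingDemandDrivenPipeline): the pipeline loops while the filter keeps the flag set, instead of calling itself recursively.

```cpp
struct MockFilter {
  int remaining;           // passes the filter still wants
  bool continueExecuting;  // stands in for the CONTINUE_EXECUTING key
  void execute() { continueExecuting = (--remaining > 0); }
};

// Iterative re-execution: no recursion, so deep streaming runs cannot
// exhaust the call stack.
int runPipeline(MockFilter& f) {
  int passes = 0;
  do {
    f.execute();
    ++passes;
  } while (f.continueExecuting);
  return passes;
}
```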
  12. 15 Jan, 2014 1 commit
• Removed priority based streaming and fast path. · 575ebda2
      Berk Geveci authored
Removed the priority-based streaming and fast-path code in
preparation for future refactoring. These were polluting core
classes despite having a small user base. In the future, they
may be refactored into the appropriate subclasses and "plugins".
      Change-Id: I54562546688c6de468b0068e9b6c65e49c5ec269
  13. 31 Oct, 2013 1 commit
  14. 23 Apr, 2013 1 commit
• Cleaned up garbage collection. · 0e39fa5d
      Berk Geveci authored
      Removed garbage collection related Register/UnRegister/
      ReportReferences calls from classes that did not need them.
      Using garbage collection unnecessarily can lead to
      performance issues.
      Change-Id: I2eefb6a86d9e64f898247df522a6082c07cec8aa
  15. 10 Sep, 2012 1 commit
• Changes related to the extra time update passes · 7b42e6a8
      Yuanxin Liu authored
- Fixed a few bugs concerning when the pipeline calls the algorithm
- For the time-changing filters, e.g. vtkTemporalShiftScale, run
  the time-changing function during the earlier time pass as well
      Change-Id: I3a3678f720e72b7280a96e84b137173b795505bb
  16. 13 Aug, 2012 1 commit
  17. 17 Jul, 2012 1 commit
  18. 07 Jun, 2012 1 commit
• Add two pipeline passes to handle time-dependent meta-data · abfa3454
      Yuanxin Liu authored
Before this change, when the pipeline time changed, the AMR meta-data
got updated by the reader in the RequestUpdateExtent pass, but this is
the same pass where some downstream algorithms might try to use
the meta-data and make requests accordingly.  To solve this problem,
two more passes were introduced.  These passes are only called
when the key TIME_DEPENDENT_INFORMATION is present.
      Change-Id: I5e6bb09ba93e7ae67a35469479e6de327631513f
  19. 14 May, 2012 1 commit
  20. 11 May, 2012 1 commit
• Remove vtkTemporalDataSet and push its pipeline support to filters · be247f1d
      Yuanxin Liu authored
      The main change is to remove the use of vtkTemporalDataSet and move
      the support of multiple temporal data sets from the execution
      pipeline to filters. To be specific,
      - Before, a filter can request objects from multiple time steps by
        setting the key UPDATE_TIME_STEPS to a vector of doubles; the
        resulting objects get wrapped by the pipeline into a single
        vtkTemporalDataSet object.
- After, a filter can only ask for a single time step from the
  pipeline by setting the key UPDATE_TIME_STEP to a single double.
  The "wrapping" no longer happens. If a filter wants to request
  multiple time steps, it needs to either inherit from
  vtkMultiTimeStepAlgorithm or use the CONTINUE_EXECUTING key to loop the
  upstream pipeline and store the data from each iteration.
The following key changes/constants are backward incompatible:
renamed: vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEPS() ->
vtkStreamingDemandDrivenPipeline::UPDATE_TIME_STEP()
renamed: vtkStreamingDemandDrivenPipeline::PREVIOUS_UPDATE_TIME_STEPS() ->
vtkStreamingDemandDrivenPipeline::PREVIOUS_UPDATE_TIME_STEP()
deleted: vtkCompositeDataPipeline::REQUIRES_TIME_DOWNSTREAM()
deprecated: VTK_TIME_EXTENT
      Change-Id: I635b6401ae4f0a7ea7c4b5c466ced40ee75963c7
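What a multi-time-step filter now does itself can be sketched as follows (a plain loop with a mock upstream function; in VTK the loop is driven by the CONTINUE_EXECUTING key or handled inside vtkMultiTimeStepAlgorithm):

```cpp
#include <vector>

// Stand-in for an upstream pipeline evaluated at one time step.
double mockUpstream(double t) { return 2.0 * t; }

// Request one UPDATE_TIME_STEP per pass and accumulate the results;
// the pipeline no longer wraps them into a vtkTemporalDataSet.
std::vector<double> gatherTimeSteps(const std::vector<double>& steps) {
  std::vector<double> results;
  for (double t : steps) {
    results.push_back(mockUpstream(t));
  }
  return results;
}
```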
  21. 17 Apr, 2012 1 commit
• Move temporal filters and tests to hybrid and turn on the tests · 7cdaf5a3
      Yuanxin Liu authored
- Move a pipeline key from vtkExtractCTHPart to vtkStreamingDemandDrivenPipeline (suggested by Berk). This removes the dependency on Parallel.
- Move a bunch of temporal-related files from filters/parallel to filters/hybrid.
- Enable two tests in hybrid.
      Change-Id: Icf3c670485bcf5557c184ece3a3293d1ff034b67
  22. 09 Apr, 2012 2 commits
• Remove trailing whitespace from all source files · 2d323fc4
  VTK Developers authored and Brad King committed
      Exclude ThirdParty, Utilities/MetaIO, and Utilities/KWSys as these
      are maintained outside VTK.
      Co-Author: Marcus D. Hanwell <marcus.hanwell@kitware.com>
      Co-Author: Chris Harris <chris.harris@kitware.com>
      Co-Author: Brad King <brad.king@kitware.com>
• Modularize VTK tree layout · cdd4d6fd
  VTK Developers authored and Brad King committed
      Move source files from their former monolithic VTK location to their new
      location in modular VTK without modification.  This preserves enough
      information for "git blame -M" and "git log --follow" to connect
      modularized VTK files to their original location and history.
      Co-Author: Marcus D. Hanwell <marcus.hanwell@kitware.com>
      Co-Author: Chris Harris <chris.harris@kitware.com>
      Co-Author: Brad King <brad.king@kitware.com>
      Co-Author: Nikhil Shetty <nikhil.shetty@kitware.com>
  23. 01 Nov, 2011 1 commit
  24. 28 Oct, 2011 1 commit
  25. 20 Sep, 2011 1 commit
  26. 19 Sep, 2011 2 commits
• More progress. Everything compiles now · d10d7093
      Berk Geveci authored
• Started removing data object's dependency on the pipeline. · 791b167f
      Berk Geveci authored
It was decided to remove any dependencies that data objects
have on the pipeline logic. When modularization is complete,
this will allow us to build a small "data model" library
that does not depend on the "execution model". It also
cleans up a lot of the interdependencies between data objects
and pipeline code. To achieve this, we need to remove all
functionality that depends on executives and pipeline logic
from vtkDataObject and its subclasses. This includes any meta-data
such as whole extent, as well as methods to set up pipeline
connectivity such as SetInput (to be removed from algorithms).
  27. 08 Sep, 2011 1 commit
• Reset COMBINED_UPDATE_EXTENT when data won't be executed · 96f39e17
      Julien Finet authored
In some specific pipelines (e.g. plugging an imageDataToPolyData filter into an
imageData-only pipeline), NeedToExecuteData inside REQUEST_UPDATE_EXTENT
can return 0 (vtkStreamingDemandDrivenPipeline.cxx:202); however, the
input UPDATE_NUMBER_OF_PIECES keys have to be copied from the output, which
sets N2E to 1 (vtkStreamingDemandDrivenPipeline.cxx:213) and eventually
sets this->LastPropogateUpdateExtentShortCircuited to 0.
Having set these keys doesn't necessarily mean that the data will be
executed in REQUEST_DATA, which would then prevent the
COMBINED_UPDATE_EXTENT from being reset; this is why we need to reset it
as early as REQUEST_UPDATE_EXTENT.
      Change-Id: I048adcd7c0d041dbe9ff20f7c2c03fab8f1f0878
  28. 22 Aug, 2011 1 commit
  29. 03 Aug, 2011 1 commit
  30. 01 Jun, 2011 1 commit
• Add a way for algorithms to modify the meta information. · 07c6ca2b
      Dave DeMarle authored
Until now, algorithms in prioritized streaming would either:
* reject entirely the meta information given to them from upstream (default)
* pass along the meta information to the next filter downstream (if the
algorithm was known to have no effect on the related heavy data)
In some cases, algorithms can decide how they might modify the data instead;
for example, a simple transform filter can transform the input bounding box
to determine what the resulting bounding box of the transformed actual
geometry will be. Now algorithms can do that by asserting the new
MANAGES_METAINFORMATION flag in their constructor and then doing the meta
information manipulation when asked to by the new REQUEST_MANAGE_INFORMATION
pass.
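The transform-filter example above can be sketched in a self-contained way: transform the corners of the input bounding box and take the axis-aligned box of the results, answering the meta-information question without touching the heavy data. (2D with a simple scale-and-translate to stay short; bounds are {xmin, xmax, ymin, ymax}.)

```cpp
#include <algorithm>

// Transform a 2D bounding box by x' = s*x + tx, y' = s*y + ty and return
// the axis-aligned bounds of the result.
void transformBounds(const double in[4], double scale, double tx, double ty,
                     double out[4]) {
  double xs[2] = {in[0] * scale + tx, in[1] * scale + tx};
  double ys[2] = {in[2] * scale + ty, in[3] * scale + ty};
  out[0] = std::min(xs[0], xs[1]);
  out[1] = std::max(xs[0], xs[1]);
  out[2] = std::min(ys[0], ys[1]);
  out[3] = std::max(ys[0], ys[1]);
}
```

A general affine transform would do the same with all corners of a 3D box; the min/max step is what makes a negative scale still yield valid bounds.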
  31. 19 May, 2011 1 commit
  32. 10 May, 2011 1 commit
• BUG: 11515. Merge update extent requests from downstream algorithms. · 7b628490
      David Gobbi authored
This commit fixes bug 11515.  Previously, if an algorithm output
had multiple consumers, and each requested a different update extent,
then only the last requested update extent would be used.
So if one algorithm asked for (0,255,0,255,0,0) and another asked for
(0,127,0,127,0,0), then the extent would be set to (0,127,0,127,0,0),
and the algorithm that required (0,255,0,255,0,0) would segfault.
      The fix was to merge all update extents in a new information key called
      COMBINED_UPDATE_EXTENT.  The combined extent is reset to the empty
      extent after each cycle, i.e. after REQUEST_DATA has completed.
      Two parallel image algorithms, vtkMemoryLimitImageDataStreamer and
      vtkPImageWriter, have to override this behavior because they call
      PropagateUpdateExtent() multiple times with different extents while
      they are computing the pipeline memory size.
      Change-Id: I75599aaed24f39e8c938eb6dad24a287b281f86e
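The merge rule can be sketched with plain arrays (not the real vtkInformation API): the combined extent is the smallest extent containing every consumer's request, so no downstream algorithm receives less data than it asked for.

```cpp
#include <algorithm>

// Merge two extents {xmin,xmax,ymin,ymax,zmin,zmax} the way
// COMBINED_UPDATE_EXTENT accumulates requests: min of the lower bounds,
// max of the upper bounds, per axis.
void combineExtents(const int a[6], const int b[6], int out[6]) {
  for (int i = 0; i < 6; i += 2) {
    out[i] = std::min(a[i], b[i]);
    out[i + 1] = std::max(a[i + 1], b[i + 1]);
  }
}
```

For the bug's example, merging (0,255,0,255,0,0) with (0,127,0,127,0,0) yields (0,255,0,255,0,0), so the consumer that needs the larger extent is satisfied.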
  33. 09 May, 2011 1 commit
• Support extent changes by cleaning up COMBINED_UPDATE_EXTENT. · 4029ecea
      Julien Finet authored
When no execute is needed (in REQUEST_UPDATE_EXTENT), we still need to
reset COMBINED_UPDATE_EXTENT; otherwise, if the UPDATE_EXTENT is later
changed, COMBINED_UPDATE_EXTENT still contains old/obsolete bounds.
      Change-Id: I071f5130dc798da6a403fc34112b21388e53a943
  34. 21 Apr, 2011 1 commit
  35. 12 Apr, 2011 1 commit