
Improve benchmark statistics

This PR makes a few improvements to the micro benchmarks:

  • An untimed warm-up run is done first so that any allocation of room for the output data happens before timing starts. Previously this allocation time was included in the time we measured for the benchmark. (A sketch of the measurement loop follows this list.)

  • Benchmarks are run multiple times and we then compute statistics over the run times to give a better picture of the function's expected run time. To this end we run the benchmark either 500 times or for 1.5s, whichever comes first (though these limits are easily changed). We then limit outliers by Winsorising the data (similar to how Rust's benchmarking library works) and print the median, mean, min and max run times along with the median absolute deviation and standard deviation. (See the Winsorise sketch after this list.)

  • Because benchmarks are run many times they can now perform some initial setup in the constructor, e.g. filling a test input array with values so the main benchmark loop runs faster. (See the hypothetical BenchCopy functor below.)

  • To allow benchmarks to have members of the data type being benchmarked, the struct must now be templated on that type, which leads to a bit of awkwardness. I've worked around this by adding the VTKM_MAKE_BENCHMARK and VTKM_RUN_BENCHMARK macros. The make macro generates a struct with an operator() templated on the value type that constructs and returns the benchmark functor instantiated for that type; the run macro then uses this generated struct to run the benchmark functor on each type in the type list passed. You can also pass arguments to the benchmark functor's constructor through the make macro, but this makes things more awkward because the name of the generated MakeBench struct must be different for each variation of constructor arguments (see BenchLowerBounds for an example). (A sketch of the intended usage follows this list.)

  • Added a short comment on how to add benchmarks in vtkm/benchmarking/Benchmarker.h, since the new system is a bit different from how the tests work.

  • You can now pass extra arguments when running the benchmark suite to benchmark only specific functions, e.g. Benchmarks_TBB BenchmarkDeviceAdapter ScanInclusive Sort will benchmark only ScanInclusive and Sort. Running without any extra arguments will run all the benchmarks as before.

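To make the warm-up and timing scheme concrete, here's a minimal sketch of the measurement loop. The constants and names are illustrative, not the exact code in this branch:

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Illustrative limits; per the description above, the real run stops at
// 500 iterations or 1.5s, whichever comes first.
static const std::size_t MAX_ITERATIONS = 500;
static const double MAX_RUNTIME_SECONDS = 1.5;

template <typename BenchFunctor>
std::vector<double> MeasureRunTimes(BenchFunctor& bench)
{
  using Clock = std::chrono::high_resolution_clock;

  // Untimed warm-up run: output buffers get allocated here, so the
  // allocation cost never shows up in the samples below.
  bench();

  std::vector<double> samples;
  double total = 0.0;
  while (samples.size() < MAX_ITERATIONS && total < MAX_RUNTIME_SECONDS)
  {
    Clock::time_point start = Clock::now();
    bench();
    std::chrono::duration<double> elapsed = Clock::now() - start;
    samples.push_back(elapsed.count());
    total += elapsed.count();
  }
  return samples;
}
```
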
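The outlier limiting and statistics step then looks roughly like this; the 5% cutoff is an assumption borrowed from Rust's harness, not necessarily what this branch uses:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Winsorise: clamp the lowest and highest `percent` of samples to the
// values at those percentiles, limiting the influence of outliers.
// Leaves the samples sorted. Assumes `samples` is non-empty.
void Winsorise(std::vector<double>& samples, double percent = 5.0)
{
  std::sort(samples.begin(), samples.end());
  std::size_t cut = static_cast<std::size_t>(samples.size() * percent / 100.0);
  double lo = samples[cut];
  double hi = samples[samples.size() - 1 - cut];
  for (double& s : samples)
  {
    if (s < lo) s = lo;
    if (s > hi) s = hi;
  }
}

// Median of an already-sorted sample set.
double Median(const std::vector<double>& sorted)
{
  std::size_t n = sorted.size();
  return n % 2 == 1 ? sorted[n / 2] : 0.5 * (sorted[n / 2 - 1] + sorted[n / 2]);
}

// Median absolute deviation: the median of |x_i - median(x)|.
double MedianAbsDeviation(const std::vector<double>& sorted)
{
  double med = Median(sorted);
  std::vector<double> devs;
  devs.reserve(sorted.size());
  for (double s : sorted)
    devs.push_back(std::fabs(s - med));
  std::sort(devs.begin(), devs.end());
  return Median(devs);
}
```
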
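For the constructor-setup point, here is a hypothetical functor (BenchCopy is made up for illustration) showing the shape benchmarks take under the new scheme:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical benchmark functor: templated on the value type being
// benchmarked so it can hold members of that type.
template <typename Value>
struct BenchCopy
{
  std::vector<Value> Input;

  // Untimed setup runs once in the constructor; the size parameter is
  // here to show how constructor arguments come in through the macros.
  explicit BenchCopy(std::size_t n = 1 << 20)
  {
    Input.resize(n);
    for (std::size_t i = 0; i < Input.size(); ++i)
      Input[i] = static_cast<Value>(i);
  }

  // Only this call is timed.
  void operator()() const
  {
    std::vector<Value> output(Input.begin(), Input.end());
    (void)output;
  }
};
```
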
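The macros are then used along these lines; see vtkm/benchmarking/Benchmarker.h for the real signatures, and note that ValueTypes is a stand-in for whatever type list you pass:

```cpp
// Generates a MakeBenchCopy struct whose templated operator() constructs
// and returns BenchCopy<Value>.
VTKM_MAKE_BENCHMARK(Copy, BenchCopy);

// Variant passing a constructor argument; it needs a distinct name, which
// is the awkwardness mentioned above (cf. BenchLowerBounds).
VTKM_MAKE_BENCHMARK(CopyLarge, BenchCopy, 1 << 24);

// Runs BenchCopy for every type in the given type list.
VTKM_RUN_BENCHMARK(Copy, ValueTypes());
VTKM_RUN_BENCHMARK(CopyLarge, ValueTypes());
```
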
Let me know what you think, and if there are any areas that can be improved. I'm not too happy with the MAKE/RUN macro system, but I haven't thought of anything better yet, so definitely let me know if you have thoughts here. When it all looks OK I can squash everything down into one commit to merge in.
