Attendees: Ken Moreland (SNL), Matt Letter (SNL), Abhishek Yenpure (UO), Matt Larsen (LLNL), Mark Kim (ORNL), Rob Maynard (Kitware), Allison Vacanti (Kitware), Sujin Philip (Kitware), Berk Geveci (Kitware), James Kress (ORNL), Tom Otahal (SNL), Ollie Lo (LANL), Alberto Villarreal (Intel), Manish Mathai (UO), Alok Hota (SNL/UTK), Brent Lessly (UO/LBL)
Matt Letter is looking at VTK-m CMake code.
Alok has been working on improving vectorization. He has code that does on-the-fly AoS-to-SoA conversion and gets code running faster, particularly on KNL. We might need a way to query whether
Tom has a merge request for improved external faces. Tom will have Ken take another look and merge it in.
Abhishek has been working on particle advection.
Manish is working on line drawing with zone hiding and pseudocoloring.
Matt Larsen has finished an upgrade to the way the ray tracing uses the canvas. It assumes that there may already be depth in the canvas and only casts rays where they can pass the current depth buffer. This makes it possible to composite different mappers rendering to the same canvas. Currently it only supports one type of actor.
Larsen has also started implementing ray tracing of revolved quads. He has a use case with data consisting of 2D quads revolved around a 3D axis.
James has been working on Y axis scaling in 1D.
Berk is thinking harder about the integration of VTK-m into VTK and the associated utilities.
Sujin is looking into upgrading CellSets. For this, he is benchmarking virtual methods and faux virtual methods.
Allie is working on making the composite array handle writable. She needs to get tuples working on CUDA.
Rob is converting typedefs to using declarations. A big cleanup has been merged in.
Rob is working on point neighborhood worklets, which, in structured grids, find the neighborhood of points around a center point.
Ollie's student has a merge request for an n-dimensional histogram. Rob is helping with that.
Ollie's student has created a query interface. It allows you to select a subset of fields and perform a selection of data within those fields.
Brent has been working on image segmentation. Brent has also had two LDAV submissions accepted.
We had a longer talk about virtual methods. Currently we have support for "faux" virtual methods that use simple objects with function pointers to call different methods at runtime. Although this works, it is a pain to set up. Ken has a merge request with a feature that allows you to set up true virtual methods in CUDA in a reasonable amount of time. Sujin has done some performance benchmarks. Although the true virtual methods are set up in about the same amount of time, there is a larger overhead (on CUDA) for the second indirection to a virtual table. The question is whether the programming convenience is worth the extra overhead. We did not come to a conclusion, but before we do we want to run the benchmarks on a newer CUDA device to see if the overhead is smaller.