Question about BioDynaMo GPU Support

In Hesam’s “GPU Acceleration of 3D Agent-Based Biological Simulations” paper, the results showed that, compared with a 64-core CPU, GPUs can provide up to a 100× speed-up (oversimplifying). Which is exciting!
My first question: are you planning to expand GPU support?
My second question: I just started learning CUDA. How hard is it to port BioDynaMo’s existing CPU code to CUDA/GPU? Could I try a porting exercise once I get the hang of CUDA and its shared memory architecture? Is any additional background needed for CUDA ports?


I’m the author of that paper. One thing to point out is that the paper used a version of BioDynaMo with a different memory layout (structure of arrays instead of array of structures). That design made GPU offloading easy and didn’t require much data preparation. However, it reduced usability when creating models, as it required our end users to know template metaprogramming.

As a result, the current GPU implementation does not reach those numbers anymore, as mentioned in our Bioinformatics paper.

Adding more GPU code to BioDynaMo is always appreciated, as it can speed up computations significantly.

If you wish to contribute on this front, you could look into how our diffusion code works on the CPU. It is a good candidate for GPU acceleration, as its data structures are still flat arrays, and at high resolutions diffusion can take up most of the CPU runtime.

We have a couple of diffusion implementations, but internally they work similarly. If you can create a port for one of them, you can probably reuse it for all of them. I would suggest starting with the Euler diffusion grid and validating your port against the test cases we have for it in our test suite.

If you need some help along the way I will try my best to help out.