Python igraph on NVIDIA GPU - Ubuntu

Since many of the calculations in igraph are done on adjacency matrices, it may be well suited to computation on a GPU. NVIDIA GPUs are commonly used for deep learning. Is it possible to use igraph with CUDA on an NVIDIA GPU under Ubuntu? I have several applications involving more than 200,000 vertices and around a million edges, and for those graphs, even on a fast CPU, computing betweenness takes me more than 20 hours. Otherwise, are there any prospects for a faster betweenness algorithm?

Thank you for any pointers. Sid

Great idea! Currently, that is not possible with igraph. I don’t know much about coding for a GPU myself, but parallelizing the betweenness calculations should be possible in theory.

For future reference, I ran into this while browsing around: https://devblogs.nvidia.com/accelerating-graph-betweenness-centrality-cuda/. It might be useful if we do want to start implementing parallel computations. At the moment I believe that parallelisation in igraph is rather difficult, but perhaps it is something that will become possible in the future.

To clarify, the problem is that, according to the documentation:

It is likely that the igraph error handling method is not thread-safe, mainly because of the static global stack which is used to store the address of the temporarily allocated objects. This issue might be addressed in a later version of igraph.

Thank you. One thing I have done is to split the original graph into 8 subgraphs. In Python, using a multiprocessing “Pool”, one can process them in parallel to compute various centrality measures. That is at least 4 to 5 times faster than dealing with each subgraph separately.
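For anyone curious, here is a minimal sketch of that kind of setup, assuming the subgraphs have already been created; the function and variable names below are just illustrative, not taken from the original workflow:

```python
# Minimal sketch: compute betweenness for several subgraphs in parallel
# using multiprocessing.Pool. The subgraph list here is a placeholder;
# in practice it would be the 8 subgraphs extracted from the full graph.
from multiprocessing import Pool

import igraph as ig


def betweenness_of(graph):
    # Each worker process computes betweenness for one subgraph.
    return graph.betweenness()


if __name__ == "__main__":
    # Placeholder subgraphs (random graphs) just to make the sketch runnable.
    subgraphs = [ig.Graph.Erdos_Renyi(n=1000, m=5000) for _ in range(8)]

    with Pool(processes=len(subgraphs)) as pool:
        results = pool.map(betweenness_of, subgraphs)

    # results[i] holds the betweenness scores for the vertices of subgraphs[i].
    print([len(r) for r in results])
```

One caveat with this approach: betweenness computed on a subgraph generally differs from betweenness on the full graph, because shortest paths that cross subgraph boundaries are not counted, so the parallel per-subgraph values are an approximation of the full-graph centralities.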
