When running the python-igraph library on an Ubuntu machine (graph handling and shortest-path search), the package always uses all CPU cores at full capacity.
This results in a drastic slowdown of other applications and scripts running in parallel.
Has anybody else noticed this behavior? Is there an option to limit the number of CPU cores/threads used by the library?
PS: This phenomenon does not occur when running the same code under Windows.
htop screenshot on a 10-core machine (igraph resides within PIDs 32718-32726):
That’s weird - both the underlying C library and python-igraph are single-threaded, so it should not use more than one CPU core at any given time. However, in your screenshot it seems that you are running multiple Python processes in parallel, and of course each such process may fully occupy one CPU core if given enough work to do.
Can you share a short, simple, self-contained example in Python that uses igraph and runs on multiple CPU cores on your machine?
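
In the meantime, one thing that may help narrow it down: the third-party threadpoolctl package (not something igraph itself uses, just a suggestion) can list the native thread pools loaded into your process, which usually reveals which library is spawning the extra threads. A minimal sketch, assuming threadpoolctl is installed:

```python
# Hypothetical diagnostic, assuming `pip install threadpoolctl`.
from pprint import pprint

from threadpoolctl import threadpool_info

import igraph as ig  # imported so its shared library is loaded
import numpy as np   # same, if your script also uses numpy

# Lists every native thread pool (OpenBLAS, OpenMP, MKL, ...) found in
# the running process, together with its current number of threads.
pprint(threadpool_info())
```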
Your insight that igraph (and the underlying C library) is single-threaded is very helpful.
While trying to replicate the situation in a self-contained example, we found out that numpy’s underlying OpenBLAS library was the root of the issue. When OpenBLAS is restricted to a single thread, only one CPU core is used.
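
For reference, here is roughly what the fix looks like (a minimal sketch; the environment variable must be set before numpy is imported, because OpenBLAS reads it at load time):

```python
import os

# Cap OpenBLAS at one thread. This must happen *before* numpy is
# imported; setting it afterwards has no effect.
os.environ["OPENBLAS_NUM_THREADS"] = "1"

import numpy as np
import igraph as ig

# ... graph handling and shortest-path search as before ...
```

If you cannot control the import order, the threadpoolctl package mentioned above can also cap the pool at runtime via `threadpool_limits(limits=1)`.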
Sorry for suspecting igraph as the cause of this behavior, and thank you again for the information and support.
Multi-threading is not on our short-term roadmap yet. I can imagine speeding up certain parts of the library with parallel OpenMP threads in the mid-term, or forking threads inside computations that are easy to parallelize, but we are not aiming for full thread-safety at the moment.
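
Until then, if you want to use several cores with igraph, parallelizing at the process level is a workaround. A rough sketch (the random graph, worker count, and query batch are arbitrary placeholders; `distances()` is the method name in recent python-igraph versions, `shortest_paths()` in older ones):

```python
import igraph as ig
from multiprocessing import Pool

# Hypothetical workload: a random graph and a batch of
# single-source shortest-path queries.
g = ig.Graph.Erdos_Renyi(n=10_000, p=0.001)

def sssp(source):
    # Each call runs single-threaded inside the C core; parallelism
    # comes purely from the worker processes.
    return g.distances(source=[source])[0]

if __name__ == "__main__":
    # With the default fork start method on Linux, the workers
    # inherit the already-built graph instead of rebuilding it.
    with Pool(processes=4) as pool:
        results = pool.map(sssp, range(100))
        print(len(results), "queries done")
```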