I’d be interested in gaining a better understanding of how cluster_louvain specifically handles the local moving heuristic, i.e. the first stage of the standard two-step procedure of Blondel et al., J. Stat. Mech. Theor. Exp. (2008).
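For concreteness, the quantity I assume the local moving step greedily optimises is the modularity gain from Blondel et al. (2008) for moving an isolated node $i$ into a community $C$:

$$
\Delta Q = \left[\frac{\Sigma_{in} + 2k_{i,in}}{2m} - \left(\frac{\Sigma_{tot} + k_i}{2m}\right)^2\right] - \left[\frac{\Sigma_{in}}{2m} - \left(\frac{\Sigma_{tot}}{2m}\right)^2 - \left(\frac{k_i}{2m}\right)^2\right]
$$

where $\Sigma_{in}$ is the sum of the weights of the links inside $C$, $\Sigma_{tot}$ the sum of the weights of the links incident to nodes in $C$, $k_i$ the weighted degree of $i$, $k_{i,in}$ the sum of the weights of the links from $i$ to nodes in $C$, and $m$ the total edge weight. If cluster_louvain optimises something subtly different here, that alone could explain my results below.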
Conscious of the following:
1) A detailed description of cluster_louvain for R users is unavailable, as it relies on functions implemented in the C layer.
2) Similar questions have been raised previously on the mailing list and on this forum, namely:
2.1) This post (https://lists.nongnu.org/archive/html/igraph-help/2012-09/msg00079.html) raised concerns about the different results obtained by cluster_louvain and a MATLAB implementation seemingly ‘endorsed’ by the Louvain developers.
2.2) This post (Modularity (Q) based on the Louvain split, unexpected values) mentions another benchmark for cluster_louvain, the Brain Connectivity Toolbox, also in MATLAB; a pretty good match, it seems.
3) There is some useful, language-agnostic pseudocode out there, albeit written with the aim of describing Louvain as a baseline for improved algorithms; see e.g.:
3.1) Traag et al., Sci. Rep. 9, 5233 (2019), esp. the supplementary materials;
3.2) Waltman, L., van Eck, N.J., Eur. Phys. J. B 86, 471 (2013).
Long story short, none of the above helps me unpick the cluster_louvain function specifically, nor is any of it aimed chiefly at R users.
Naturally, I’ve attempted my own implementation in R from scratch, relying solely on the ‘basics’ as I understand them. Unsurprisingly, I’m getting mixed results: a good match on small toy examples (i.e. the same number of communities and the same modularity score across partitions), but my implementation diverges from cluster_louvain on slightly larger examples (not even 200 edges…).
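To make the question concrete, here is a minimal sketch of the first phase as I currently understand it, written in Python rather than R purely for compactness. All names (`local_moving`, `modularity`) are mine, and it recomputes Q from scratch for every candidate move, so this is certainly not how the C code works; the point is that the sweep order and the tie-breaking (flagged in the comments) are left open by the paper, and I suspect those free choices are exactly where implementations diverge:

```python
from collections import defaultdict

def modularity(edges, comm):
    """Newman modularity Q of an undirected, unweighted graph.
    edges: list of (u, v) pairs, each edge listed once; comm: node -> label."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # observed fraction of edges falling inside communities...
    q = sum(1.0 / m for u, v in edges if comm[u] == comm[v])
    # ...minus the expected fraction under the configuration null model
    tot = defaultdict(int)
    for n, d in deg.items():
        tot[comm[n]] += d
    return q - sum((d / (2.0 * m)) ** 2 for d in tot.values())

def local_moving(edges):
    """One Louvain-style local-moving phase: start from singletons and
    greedily relocate nodes into a neighbouring community while Q improves.
    Brute-force recomputation of Q per candidate move: slow but unambiguous."""
    neigh = defaultdict(set)
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    nodes = sorted(neigh)
    comm = {n: n for n in nodes}           # every node starts in its own community
    improved = True
    while improved:                        # repeat sweeps until no move helps
        improved = False
        for n in nodes:                    # sweep order: a free choice!
            old = comm[n]
            best_c, best_q = old, modularity(edges, comm)
            for c in {comm[x] for x in neigh[n]} - {old}:
                comm[n] = c               # tentatively move n into community c
                q = modularity(edges, comm)
                if q > best_q + 1e-12:    # strict improvement only: tie-breaking
                    best_c, best_q = c, q # is another free choice!
            comm[n] = best_c
            improved |= best_c != old
    return comm
```

On a toy graph of two triangles joined by a bridge, this recovers the two triangles; on larger graphs, different sweep orders can land on different (equally valid) local optima of Q, which may well be what I’m seeing against cluster_louvain.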
I’d be grateful if anyone could point me in the right direction regarding what’s underneath the local moving heuristic in cluster_louvain, and whether that might be made digestible to a C-illiterate like myself.