I am using an adjacency matrix of an unweighted, undirected network; it contains only 0s and 1s as the edges between nodes. I am using the function cluster_louvain to detect communities in this network of 92 nodes. According to the modularity function, the partition has a modularity of 0.42, which seems pretty OK. But the problem is the membership assignment: every time I rerun the clustering, the membership assignments differ. Why could this happen?
Louvain is a randomized heuristic for maximizing modularity. It is expected that it may give a different result on each run; in fact, this is useful, since comparing runs gives you a sense of how much confidence to place in your results. If the partitionings you get differ significantly across runs, then the network does not have a clear community structure.
The modularity value alone is not a good measure of whether there is a community structure. Even a random network will typically have modularity values of this magnitude:
> g <- sample_gnm(100, 200)
> modularity(cluster_louvain(g))
[1] 0.46495
Thanks for the quick answer. Across runs there is good overlap between the memberships, maybe roughly 85–90%. In that case, the question is how to present the module structure and membership of this network in a single result. Thanks for the feedback; I appreciate it.
I recommend taking a look at this paper, which contains practical guidance for community detection:
First, to get some more confidence in your result, you can do the following: use rewire(g, with=keeping_degseq()) to shuffle around the edges in the network without affecting degrees. This will destroy any existing community structure. Now perform community detection again. Did the modularity drop? If not, then there wasn’t any community structure to begin with. If yes, then the original network must have been clustered to some extent.
See section IV.F.
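As a sketch of the degree-preserving rewiring test described above (using R igraph functions; the graph below is a stand-in with planted communities, not your actual network):

```r
library(igraph)
set.seed(42)  # only so this sketch is reproducible

# Stand-in for your own 92-node network: four densely connected groups of 23
g <- sample_islands(islands.n = 4, islands.size = 23,
                    islands.pin = 0.3, n.inter = 1)

mod_original <- modularity(cluster_louvain(g))

# Shuffle edges while preserving every vertex's degree; this destroys
# any community structure but keeps the degree sequence intact
g_rewired <- rewire(g, with = keeping_degseq(niter = ecount(g) * 10))
mod_rewired <- modularity(cluster_louvain(g_rewired))

mod_original  # high for a clearly clustered network
mod_rewired   # should be substantially lower after rewiring
```

If your real network behaves like the rewired one (no drop in modularity), that is a sign there was no genuine community structure to begin with.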
You seem to be worried that this stochastic method gives a different answer on each run. I wanted to note that results from deterministic methods are not necessarily better; the issues are just not as obviously visible.
There are techniques to make use of the stochasticity of the results. Look up consensus clustering; see e.g. section IV.B in the paper. We can build a co-occurrence matrix D of vertices, where D_{ij} is the fraction of runs in which vertices i and j are assigned to the same community. \max_j D_{ij} (for j ≠ i, since D_{ii} is always 1) gives you an idea of how stable the position of vertex i is. We can even run community detection on D itself to obtain a consensus result. See the paper for more details.
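A minimal sketch of this co-occurrence approach in R igraph (the number of runs, the 0.5 threshold, and the stand-in graph are arbitrary choices for illustration, not prescriptions from the paper):

```r
library(igraph)
set.seed(1)

# Stand-in for your own 92-node network
g <- sample_islands(4, 23, 0.3, 1)

n_runs <- 50
n <- vcount(g)
D <- matrix(0, n, n)

# Count how often each pair of vertices lands in the same community
for (r in seq_len(n_runs)) {
  memb <- membership(cluster_louvain(g))
  D <- D + outer(memb, memb, "==")
}
D <- D / n_runs  # D[i, j] = fraction of runs where i and j co-clustered

# Stability of vertex i: how consistently it co-clusters with its
# most frequent partner (exclude the trivial diagonal D[i, i] = 1)
D_offdiag <- D
diag(D_offdiag) <- 0
stability <- apply(D_offdiag, 1, max)

# Consensus step: threshold D and run community detection on it
A <- (D > 0.5) * 1
diag(A) <- 0
g_cons <- graph_from_adjacency_matrix(A, mode = "undirected")
consensus <- cluster_louvain(g_cons)
```

The membership of `consensus` is then a single partition you can report, and `stability` flags vertices whose module assignment is unreliable.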