How to quantify time-dependent network flexibility in igraph

I have searched the net, but I'm unable to find any resource about calculating network flexibility with igraph.

I would like to confirm whether such a feature exists in python-igraph.

Specifically, a flexibility measure that quantifies how often a node changes its module allegiance, i.e. the transition of brain states between consecutive temporal segments.

[Bassett2011] Bassett, D. S., Wymbs, N. F., Porter, M. A., Mucha, P. J., Carlson, J. M., & Grafton, S. T. (2011). Dynamic reconfiguration of human brain networks during learning. Proceedings of the National Academy of Sciences, 108(18), 7641–7646.

I’m not 100% sure I understand correctly what “flexibility” refers to in this context. Nevertheless, I’m fairly confident that igraph indeed does not provide it.

From what I could gather from a quick glance at the paper, it is simply a metric that describes how often a node changed from one partition to another (across partitions in temporally adjacent graphs). I believe this does not require any graph-theoretical calculation, so in principle you can simply calculate it yourself in Python. If you’re not sure how to do that, let us know, preferably with some (minimal) code demonstrating what you would like to do.
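To illustrate, here is a minimal sketch of that idea (assuming “flexibility” means the fraction of consecutive time points at which a node’s community label changes; this is plain Python, not an igraph function):

```python
# Minimal sketch, assuming "flexibility" = fraction of consecutive time
# points at which a node's community label changes.
def flexibility(labels):
    """labels: one node's community assignment per time slice, in order."""
    changes = sum(a != b for a, b in zip(labels, labels[1:]))
    return changes / (len(labels) - 1)

print(flexibility([0, 0, 1, 1, 2]))  # 2 changes over 4 transitions -> 0.5
```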

Perhaps my understanding of “flexibility” is incorrect. If so, could you then perhaps provide a mathematical definition of the term?

Thanks for the confirmation @vtraag.

For future readers: the flexibility for the membership produced by find_partition_temporal can be calculated with the code attached at the bottom of this thread.

For future readers: how to calculate flexibility

Idea based on the makism/dyconnmap package:


import leidenalg as la
import igraph as ig
import numpy as np

A1 = np.array([[0., 0., 0., 0., 0., 0., 0.],
               [5., 0., 0., 0., 0., 0., 0.],
               [1., 0., 0., 0., 0., 0., 0.],
               [0., 1., 2., 0., 0., 0., 0.],
               [0., 0., 0., 1., 0., 0., 0.],
               [0., 0., 0., 1., 0., 0., 0.],
               [0., 0., 0., 0., 1., 1., 0.]])

A2 = np.array([[0., 0., 0., 0., 0., 0., 0.],
               [5., 0., 0., 0., 0., 0., 0.],
               [1., 0., 0., 0., 0., 0., 0.],
               [0., 1., 2., 0., 0., 0., 0.],
               [0., 0., 0., 1., 0., 0., 0.],
               [0., 0., 1., 1., 0., 0., 0.],
               [0., 0., 0., 0., 1., 1., 0.]])

A3 = np.array([[0., 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., 0.],
               [0., 0., 0., 0., 0., 0., 0.],
               [0., 1., 2., 0., 0., 0., 0.],
               [0., 0., 0., 1., 0., 0., 0.],
               [0., 0., 0., 1., 0., 0., 0.],
               [0., 0., 0., 0., 1., 1., 0.]])

G_1 = ig.Graph.Weighted_Adjacency(A1.tolist())
G_2 = ig.Graph.Weighted_Adjacency(A2.tolist())
G_3 = ig.Graph.Weighted_Adjacency(A3.tolist())
G_1.vs['id'] = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
G_2.vs['id'] = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
G_3.vs['id'] = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

gamma = 0.05
membership, improvement = la.find_partition_temporal(
    [G_1, G_2, G_3], la.CPMVertexPartition,
    interslice_weight=0.1, resolution_parameter=gamma)
# Rows are time slices, columns are nodes.
ntime_membership = np.array(membership)

def flexibility_index(x):
    """Fraction of consecutive time points at which the membership changes."""
    n = len(x)
    counter = 0
    for k in range(n - 1):
        if x[k] != x[k + 1]:
            counter += 1
    return counter / (n - 1)

# Flexibility of each node across time (one value per column, i.e. per node).
flexibility_across_time = [flexibility_index(ntime_membership[:, x])
                           for x in range(ntime_membership.shape[1])]

Hi @vtraag, I would like to confirm the node label order. This question is somewhat related to the OP.

Given 3 time slices as in the figure below,

the node labels are ordered in the sequence of t ∈ {1, 2, 3}, and can be extracted via

vertex_label = [f'{v["name"]}-{v["slice"]}' for v in G.vs]

For future readers:
Assuming this is correct, the time-dependent network flexibility (i.e. per node) can be calculated as below.

vertex_label = [f'{v["name"]}-{v["slice"]}' for v in G.vs]

# Each label is a string like "A-0"; split off the node name before the "-".
node_label = np.array([x.split('-')[0] for x in vertex_label])
dic_idx = {v: np.where(node_label == v)[0].tolist() for v in np.unique(node_label)}

ntime_membership = np.array(partition_all.membership)

node_flexibility = [{dtx: flexibility_index(ntime_membership[dic_idx[dtx]])} for dtx in dic_idx]

Remark: the derivation of partition_all can be found in this OP; the flexibility index function is embedded earlier in this thread.

Working from the code posted here, we can easily calculate the flexibility of each node across slices:

membership_df.apply(flexibility_index, axis=1)

where membership_df is a pandas DataFrame as explained in this post.
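As a quick illustration, a toy version of that call could look as follows (the layout assumed here is one row per node and one column per slice, matching the DataFrame referenced above; the data and the flexibility_index definition are illustrative):

```python
import numpy as np
import pandas as pd

def flexibility_index(x):
    """Fraction of consecutive slices at which the membership changes."""
    x = np.asarray(x)
    return (x[:-1] != x[1:]).sum() / (len(x) - 1)

# Toy membership DataFrame: rows = nodes, columns = time slices.
membership_df = pd.DataFrame(
    {"slice_0": [0, 0, 1], "slice_1": [0, 1, 1], "slice_2": [1, 1, 1]},
    index=["A", "B", "C"],
)

# One flexibility value per node (per row).
print(membership_df.apply(flexibility_index, axis=1))
```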

Please note that it is not clear how you would like to deal with the “missing” nodes that have a membership of NaN in a particular slice. In the code above this is implicitly treated as a “change” across slices, but perhaps you would like to deal with it differently. For example, you could simply not count any comparison with a NaN as a change and divide by the number of comparisons you would then like to make.
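For instance, a NaN-aware variant along those lines might look like this (a sketch only, not part of igraph; flexibility_index_nan is an illustrative name):

```python
import numpy as np

def flexibility_index_nan(x):
    """Count a change only when both memberships being compared are present,
    and divide by the number of comparisons actually made."""
    x = np.asarray(x, dtype=float)
    # A comparison is valid only if both of its endpoints are non-NaN.
    valid = ~np.isnan(x[:-1]) & ~np.isnan(x[1:])
    if valid.sum() == 0:
        return np.nan
    changes = (x[:-1][valid] != x[1:][valid]).sum()
    return changes / valid.sum()

print(flexibility_index_nan([0, np.nan, 0, 1]))  # only (0, 1) is compared -> 1.0
```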
