Starting with minimize_nested_blockmodel_dl indeed speeds up the process and 
now I'm more confident that the model will eventually run:

state = minimize_nested_blockmodel_dl(
    g,
    state_args=dict(base_type=LayeredBlockState,
                    state_args=dict(deg_corr=True, overlap=True, layers=True,
                                    ec=g.ep.layer, recs=[g.ep.weight],
                                    rec_types=["discrete-binomial"])))

dS, nmoves = 0, 0
for i in range(100):
    ret = state.multiflip_mcmc_sweep(niter=10)
    dS += ret[0]
    nmoves += ret[1]

print("Change in description length:", dS)
print("Number of accepted vertex moves:", nmoves)

mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10), verbose=True)
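For my own bookkeeping, my understanding of the wait criterion in mcmc_equilibrate, as a toy sketch (a random walk stands in for the description length; the real function of course does much more): stop once the best value seen has not improved for wait consecutive sweeps.

```python
import random

def equilibrate(step, wait=50, max_iter=10000):
    """Iterate until `wait` consecutive steps fail to improve the record."""
    best = float("inf")
    since_improvement = 0
    value = 0.0
    for i in range(max_iter):
        value += step()
        if value < best:
            best = value
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= wait:
            return i + 1  # number of steps taken before equilibration
    return max_iter

random.seed(1)
# A random walk as a stand-in for per-sweep changes in description length.
iters = equilibrate(lambda: random.uniform(-1, 1), wait=50)
```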

bs = []
def collect_partitions(s):
    global bs
    bs.append(s.get_bs())

mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                 verbose=True, callback=collect_partitions)

pm = PartitionModeState(bs, nested=True, converge=True)
pv = pm.get_marginal(g)

bs = pm.get_max_nested()
state = state.copy(bs=bs)
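If I read the docs correctly, pm.get_marginal(g) returns per-node vectors of group-membership counts across the collected partitions. A minimal pure-Python sketch of how I turn such counts into marginal probabilities, with made-up counts for three hypothetical nodes:

```python
# Made-up counts: for each node, how often it was assigned to each block
# across the collected partitions (stand-in for the real marginal map).
pv = {0: [8, 2, 0], 1: [1, 9, 0], 2: [0, 3, 7]}

def marginal_probabilities(counts):
    """Normalize visit counts into a probability vector."""
    total = sum(counts)
    return [c / total for c in counts]

probs = {v: marginal_probabilities(c) for v, c in pv.items()}
```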

Coming back to my initial question about the layer-specific partitions, it 
seems I was too optimistic about being able to decompose this information 
from get_edge_blocks. I have already spent time reading the docs and studying 
how _be_ is used in the source code to convert the block membership of each 
half-edge into a layer-specific block membership for each node, so I need to 
ask for more detailed instructions.

There are a few previous posts about extracting this information, but none of 
the answers addresses this decomposition step beyond get_edge_blocks. I 
presume there is a relatively straightforward way to do this?
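To be concrete about the decomposition I am after, here it is sketched in pure Python on made-up data: the (node, layer, block) triples below stand in for what I hope to extract from g.ep.layer and the half-edge block pairs of get_edge_blocks, and a node's layer-specific membership is taken as the majority block among its half-edges within that layer.

```python
from collections import Counter, defaultdict

# Made-up half-edge data: (node, layer, block) triples.
half_edges = [
    (0, 0, 1), (0, 0, 1), (0, 1, 2),   # node 0: block 1 in layer 0, block 2 in layer 1
    (1, 0, 1), (1, 1, 2), (1, 1, 3),   # node 1: a tie in layer 1
]

def layer_specific_blocks(half_edges):
    """Majority block of each node within each layer."""
    counts = defaultdict(Counter)
    for node, layer, block in half_edges:
        counts[(node, layer)][block] += 1
    # most_common(1) breaks ties by first-encountered order
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

membership = layer_specific_blocks(half_edges)
```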

I also thought there might be a more convenient route via applying 
set_edge_filter to state.g and then using get_majority_blocks to extract this 
information, but that didn't work either.
_______________________________________________
graph-tool mailing list -- [email protected]
To unsubscribe send an email to [email protected]
