Dear Tiago,
Thank you for this suggestion.

I tried it, but I am not sure about the results I got; maybe my
computation is wrong?
I proceeded as follows:

|[...]
entropy = state.entropy()
e = g.add_edge(x, y)
g.ep.weights[e] = 42
new_state = state.copy(g=g, recs=[g.ep.weights],
                       rec_types=['discrete-poisson'])
new_entropy = new_state.entropy()

# Here is the kind of value that I obtain for the entropy in my working
# example, where the graph has N = 167, E = 5787, max weight = 1458,
# mean weight = 14 (this is the manufacturing email network from KONECT):

In [552]: entropy
Out[552]: 72938.4714059238

In [553]: new_entropy
Out[553]: 109646.67346672397
|


Thus, as far as I understand, to compute the (unnormalized) conditional
posterior probability of the weight I set, we take |np.exp(entropy -
new_entropy)|. But since the difference is large, the exponential always
underflows to zero.
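
In case it is useful, here is a minimal sketch of how I would do the
normalization over all weight values entirely in log space, so that the
tiny unnormalized probabilities never have to be exponentiated on their
own (here |g|, |state|, |x|, |y| and |g.ep.weights| are as in the snippet
above; the upper bound |w_max| on the candidate weights is just a
placeholder of mine, not anything from graph-tool):

|import numpy as np
from scipy.special import logsumexp

# Sketch: for each candidate weight w, add the edge (x, y) with that
# weight, copy the state, and record the entropy (the unnormalized
# negative log probability). The entropy of the original graph cancels
# in the normalization, so it is not needed here.
w_candidates = np.arange(1, w_max + 1)
neg_log_probs = []
for w in w_candidates:
    e = g.add_edge(x, y)
    g.ep.weights[e] = w
    new_state = state.copy(g=g, recs=[g.ep.weights],
                           rec_types=['discrete-poisson'])
    neg_log_probs.append(new_state.entropy())
    g.remove_edge(e)  # restore the graph before trying the next weight

neg_log_probs = np.array(neg_log_probs)

# Normalize in log space with logsumexp instead of exponentiating the
# raw differences; the result sums to 1 over w_candidates even though
# each individual np.exp(entropy - new_entropy) underflows to zero.
log_posterior = -neg_log_probs - logsumexp(-neg_log_probs)
posterior = np.exp(log_posterior)
|

If that is right, the fact that each individual |np.exp(entropy -
new_entropy)| is numerically zero should not matter, since only the
differences between the entropies for the different weights survive the
normalization.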

I tried with different nodes and weights but always obtain the same kind
of result.

I wonder whether there is an error in my approach to computing the
probability of a missing edge with a given covariate/weight?


Thanks,
adrien



On 9/26/18 3:21 PM, Tiago de Paula Peixoto wrote:

> On 26.09.18 at 14:43, Adrien Dulac wrote:
>> Dear all,
>>
>> I am a bit confused about the use of the weighted network models for a
>> weight prediction task.
>>
>> Suppose we have a weighted network whose edge weights are integers. We fit
>> an SBM with a Poisson kernel as follows:
>>
>> |data = gt.load_graph(...)
>> # The adjacency matrix has integer entries, and weights greater than
>> # zero are stored in data.ep.weights.
>> state = gt.inference.minimize_blockmodel_dl(
>>     data, B_min=10, B_max=10,
>>     state_args={'recs': [data.ep.weights],
>>                 'rec_types': ["discrete-poisson"]})
>> |
>>
>> My question is: how can we obtain, from |state|, a point estimate of the
>> Poisson parameters, in order to compute the distribution of the weights
>> between pairs of nodes?
> It's not this simple, since the model is microcanonical and contains
> hyperpriors, etc. The easiest thing you can do is compute the conditional
> posterior distribution of an edge and its weight. You get this by adding the
> missing edge with the desired weight to the graph, and computing the
> difference in the state.entropy(), which gives the (un-normalized) negative
> log probability (remember you have to copy the state with
> state.copy(g=g_new), after modifying the graph). By normalizing this over
> all weight values, you have the conditional posterior distribution of the
> weight.
>
> (This could be done faster by using BlockState.get_edges_prob(), but that
> does not support edge covariates yet.)
>
> Best,
> Tiago
>
