Thanks a lot for your comments, Abram...

> It seems to me this would be better-described as Gibbs sampling for PLN
> inference, rather than trying to compare it to MLN. :)

I adjusted the discussion a bit, based on this suggestion... though I
did leave in some of the comparisons to MLN...

> In steps 1-4, where does the sampling actually occur? As written, it looks
> like it's just going to pass around probabilities using the ordinary PLN
> rules.
>
> I think you meant:
...
> Is this what you intended?

Your suggestion is creative but not really what I meant...

However, while I was swimming in the ocean this afternoon I realized
that what I'd written before wasn't quite right...

I have revised the blog post now, in a way that I think makes more
sense -- now I make clear that the instantaneous truth value (formerly
called the sampled truth value) is a second-order distribution, and
that one samples first-order distributions from it...
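To make that concrete, here is a minimal sketch of the idea, under the assumption (mine, for illustration) that the second-order distribution is represented as a Beta distribution over probabilities; drawing from it yields a single first-order probability, i.e. a Bernoulli distribution over the underlying event. The function name and parameters are invented for this example:

```python
import random

def sample_first_order(alpha, beta_param, rng=random):
    """Draw one first-order distribution (a probability p, i.e. a
    Bernoulli(p)) from a second-order Beta(alpha, beta) -- a stand-in
    for the 'instantaneous truth value' described above."""
    return rng.betavariate(alpha, beta_param)

# Each Gibbs step would then work with one sampled probability,
# rather than with the full second-order distribution.
samples = [sample_first_order(4.0, 2.0) for _ in range(1000)]
mean_p = sum(samples) / len(samples)
# mean_p should be near the Beta mean, alpha/(alpha+beta) = 2/3
```

This is just a sketch of the sampling pattern, not PLN's actual truth-value representation.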

> If the system gets probabilistic
> knowledge input from outside, though, it's important that the sample values
> be affected by this, too.

yeah, that's true; I edited the post to note that...

> Since PLN inference constitutes both learning and inference, it *might* be a
> good idea to have far more information flow back to the samples from the
> full probability estimates (in analogy with contrastive divergence, where
> there is a tight intermingling of MCMC steps and weight updates). This might
> mean occasionally re-sampling A from its current PLN truth value, rather
> than using the procedure in steps 1-4.  This is quite speculative.

Sometimes that would be helpful, other times not, I guess... Hmmm ;)
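For what it's worth, Abram's speculative suggestion could be sketched roughly as follows -- occasionally re-drawing a node's sample from its current accumulated truth-value estimate, instead of via the usual steps 1-4. The function names and the `resample_prob` knob here are invented for illustration, not part of PLN:

```python
import random

def gibbs_step(sample_p, truth_value_p, propose_fn,
               resample_prob=0.1, rng=random):
    """One hybrid step: with small probability, feed the full PLN
    truth-value estimate back into the chain (the contrastive-
    divergence-flavored feedback); otherwise use the ordinary
    sampling procedure, represented here by propose_fn."""
    if rng.random() < resample_prob:
        return truth_value_p
    return propose_fn(sample_p)

# Toy usage: propose_fn just perturbs the current sample a little,
# clipped to [0, 1].
perturb = lambda p: min(1.0, max(0.0, p + random.uniform(-0.05, 0.05)))
new_p = gibbs_step(0.5, 0.7, perturb)
```

Whether this feedback helps would presumably depend on how much the full estimates and the samples have drifted apart.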

... ben


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424