On Sun, Jan 19, 2014 at 2:39 AM, Sergio Donal <[email protected]> wrote:

> It sounds somehow related to Statistical Relational Learning (see, e.g.,
> Pedro Domingos' book http://www.cs.umd.edu/srl-book/).
>
> Best!
> Sergio
>


Thanks, I've read that book before (somewhat briefly).  Of course,
[relational] inductive learning is one crucial aspect of logic-based AI.
That book focuses on making relational learning "statistical", i.e.,
allowing probabilities, fuzziness, etc.  Its algorithms are variations of
inductive learning in classical logic; in other words, they search for
hypotheses in the *discrete* space of logic formulas.

What "geometry of mind" wants to do, is to map logical formulas to a
geometric or continuous space.  If that can be done, we can apply very
different algorithms to the problem of inductive learning or inference
(deduction).

But the task of "mapping logic formulas to space" is not just to map
formulas to points in space.  One quintessential aspect of logic is the
"deduction operator".  That means that if logic formulas are points, then
old points can give rise to new points via deduction, i.e., new points can
appear "mysteriously" in the space.  That makes such a space very different
from the "ordinary" mathematical spaces that mathematicians work with.
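To make the idea concrete, here is a minimal, purely hypothetical sketch: formulas are represented as points in the plane, and a toy "deduction operator" (the midpoint function, standing in for something like modus ponens acting on embeddings) combines two existing points into a new one.  The names `deduce` and `deductive_closure` are my own illustrative inventions, not part of any existing system:

```python
import itertools

def deduce(p, q):
    """Hypothetical deduction operator: the midpoint of two formula-points.
    A stand-in for a real inference rule acting on embeddings."""
    return (round((p[0] + q[0]) / 2, 6), round((p[1] + q[1]) / 2, 6))

def deductive_closure(points, steps=2):
    """Iteratively apply the operator to all pairs; the point set grows,
    unlike an ordinary, static point set in geometry."""
    pts = set(points)
    for _ in range(steps):
        new = {deduce(p, q) for p, q in itertools.combinations(pts, 2)}
        pts |= new
    return pts

axioms = {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)}
closure = deductive_closure(axioms)
print(len(closure))  # more points than the three we started with
```

The point of the sketch is just the dynamics: the space is not given once and for all, but keeps "sprouting" new points as deduction runs.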

That's why I want to build a "bridge" from logic to mathematical space, so
as to "see" spatially the structure of logic, including deduction.

If we leave out deduction, then we simply have a "term algebra", and that
is relatively easy to visualize / spatialize...
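For contrast, a term algebra really is a static structure: the set of all terms freely generated from some constants and function symbols, with nothing ever "appearing" later.  A small sketch (again, the helper name and symbol choices are mine, purely for illustration):

```python
import itertools

def terms_up_to_depth(constants, functions, depth):
    """Enumerate all terms of a free term algebra up to a nesting depth.
    `functions` maps each symbol name to its arity; terms are tuples."""
    all_terms = set(constants)
    for _ in range(depth):
        new = set()
        for f, arity in functions.items():
            for args in itertools.product(all_terms, repeat=arity):
                new.add((f,) + args)
        all_terms |= new
    return all_terms

# Constants a, b; unary f; binary g.  At depth 1:
# a, b, f(a), f(b), g(a,a), g(a,b), g(b,a), g(b,b)
T = terms_up_to_depth({"a", "b"}, {"f": 1, "g": 2}, depth=1)
print(len(T))  # 8
```

Every term is there "from the start" (up to the chosen depth); there is no operator that makes new elements emerge from old ones, which is exactly what deduction adds.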



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
