Hi,

On 21 June 2013 23:17, Piaget Modeler <[email protected]> wrote:
> > Only difference is what the fundamental memory elements are
> > (Atoms versus Monads)

OpenCog Atoms might seem strange and artificial if you don't come from a mathematical background; they are, in fact, a probabilistic tweak to well-established formalisms for talking about structures of things.

So, for example: if I look at your own charts at http://piagetmodeler.tumblr.com/ I see boxes connected by lines -- i.e. graphs, in the sense of "graph theory". It's worth reviewing the Wikipedia article for that if you've never done so.

Certain kinds of graphs, drawn with arrows and having certain other properties, are categories, and are the subject of study in a rather abstract branch of mathematics called "category theory". If you ask what happens when you map one category to another (one graph to another), you find that they combine in only certain ways -- something called "internal logic". The best-known of these is "intuitionistic logic", which is a lot like classical logic but is missing the law of the excluded middle.

In short, "graphs" and "logic" are intertwined with one another: the ways of manipulating transformations of a graph correspond to a logic. And I really do mean "logic" -- concepts from classical logic like "there exists" and "for all" become pi-types and sigma-types, and so on. Unfortunately, the idea of "logic" also gets dizzying -- you get "Kripke semantics" and "Martin-Löf" type stuff...

OpenCog Atoms also have types; this is a nod to "type theory", which is another foundational theory of math. Types are used in (most) programming languages. They fix certain problems that set theory has... OK, wandering afield.

You might also want to read about "term rewriting", "model theory", and "universal algebra"; each of these uses words such as "atom", "predicate", etc. that correspond to OpenCog ideas.
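The boxes-connected-by-lines picture, plus types, can be sketched in a few lines of Python. This is only an illustrative toy, not the real OpenCog AtomSpace API: the class and field names here (Atom, atom_type, outgoing) are made up for the example.

```python
# Toy sketch (NOT the real OpenCog API): typed nodes and links, i.e. a
# directed graph whose vertices and edges carry type labels.
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    atom_type: str          # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""          # nodes have names
    outgoing: tuple = ()    # links have an "outgoing set" of other atoms

# Nodes: named, with an empty outgoing set.
cat = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")

# A link is an atom whose outgoing set is a tuple of other atoms --
# exactly a typed, directed edge in the graph-theory sense.
inherits = Atom("InheritanceLink", outgoing=(cat, animal))

assert inherits.outgoing[0].name == "cat"
```

Because every edge is itself a typed atom, the whole structure stays a graph that other code can walk and transform -- which is the bridge to the category-theoretic view above.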
In short, the general OpenCog idea of an "atom" is not just something random that Ben dreamed up, but is a common, consensus term widely used by people working in computer science, logic, and mathematics. What's different is that Ben added probability and uncertainty to it. That makes it look more Bayesian-ish or neural-net-ish. As a result, you can map concepts from those areas onto the atomspace, if you wish. Given the wide popularity of Bayesian and neural-net-ish stuff in AI, you should wish to do this.

The one thing I haven't wrapped my mind around yet is the notion of "truth value". In OpenCog, it's probabilistic; in these other branches, it's a certain object that comes from a "subobject classifier". (The subobject classifier for sets has truth values of 0/1, or true/false. Subobject classifiers for more general categories have a much wider range of truth values (sieves). Concepts from logic, such as "and", "or", "not", "for-each", "there-exists", likewise generalize to products, disjoint unions, etc. I haven't yet made the bridge between these and the Bayesian notions of the same.)

Anyway, while developing this new thing, a "monad", you may find it profitable to draw inspiration from all these different fields -- you may find more commonality than you'd think, or that what's old is new again.

-- Linas

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
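To make the crisp-vs-probabilistic truth-value gap concrete, here is a toy comparison. The probabilistic versions below assume independence, which is purely illustrative -- it is not what OpenCog's PLN actually computes.

```python
# Toy sketch: crisp boolean connectives vs. a probabilistic analogue.
# The probabilistic forms assume independence -- illustrative only,
# NOT the actual PLN truth-value formulas.

def and_crisp(p, q):
    return p and q

def and_prob(p, q):
    return p * q                 # P(A and B) under independence

def or_prob(p, q):
    return p + q - p * q         # inclusion-exclusion

# At the endpoints 0 and 1, the probabilistic connectives agree with
# the crisp two-valued (subobject-classifier-for-sets) ones:
assert and_prob(1.0, 1.0) == 1.0
assert or_prob(0.0, 0.0) == 0.0
```

The crisp values {0, 1} sit at the endpoints of the probabilistic scale [0, 1]; the open question in the paragraph above is how the categorical generalizations (sieves) relate to this interval.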
