Lukasz,

Which of the several issues that Mark listed is one of the two basic
directions you were referring to?

Ed Porter

-----Original Message-----
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 14, 2007 9:15 AM
To: [email protected]
Subject: Re: [agi] What best evidence for fast AI?


I think that there are two basic directions for improving the Novamente
architecture: (1) the one Mark talks about, and (2) more integration of
MOSES with PLN and RL theory.

On 11/13/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> Response to Mark Waser  Mon 11/12/2007 2:42 PM post.
>
>
>
> >MARK>>>>  Remember that the brain is *massively* parallel.  Novamente and
> any other linear (or minorly-parallel) system is *not* going to work
> in the same fashion as the brain.  Novamente can be parallelized to
> some degree but *not* to anywhere near the same degree as the brain.
> I love your speculation and agree with it -- but it doesn't match
> near-term reality.  We aren't going to have brain-equivalent
> parallelism anytime in the near future.
>
>
>
> ED>>>> I think in five to ten years there could be computers capable of
> providing every bit as much parallelism as the brain at prices that
> will allow thousands or hundreds of thousands of them to be sold.
>
>
>
> But it is not going to happen overnight.  Until then, the lack of
> brain-level hardware is going to limit AGI.  But there are still a lot
> of high-value systems that could be built on, say, $100K to $10M of
> hardware.
>
>
>
> You claim we really need experience with computing and controlling
> activation over large atom tables.  I would argue that obtaining such
> experience should be a top priority for government funders.
>
>
>
> >MARK>>>>  The node/link architecture is very generic and can be used for
> virtually anything.  There is no rational way to attack it.  It is, I
> believe, going to be the foundation for any system since any system
> can easily be translated into it.  Attacking the node/link
> architecture is like attacking assembly language or machine code.  Now
> -- are you going to write your AGI in assembly language?  If you're
> still at the level of arguing node/link, we're not communicating well.
>
>
>
> ED>>>>  Nodes and links are what patterns are made of, and each static
> pattern can have an identifying node associated with it, as well as the
> nodes and links representing its sub-patterns, elements, the
> compositions of which it is part, its associations, etc.  The system
> automatically organizes patterns into a gen/comp hierarchy.  So I am
> not dealing only at the node-and-link level, but nodes and links are
> the basic building blocks.
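The node/link store with a gen/comp (generalization/composition) hierarchy that Ed describes can be sketched as follows. This is a minimal illustration under assumed names ("part_of", "is_a", AtomSpace), not Novamente's actual data structures or API:

```python
# Minimal sketch of a node/link atom store supporting a gen/comp
# (generalization / composition) hierarchy.  Names are illustrative
# assumptions, not Novamente's real API.

class AtomSpace:
    def __init__(self):
        self.nodes = set()
        self.links = []            # (kind, src, dst) triples

    def add(self, kind, src, dst):
        """Add a typed link between two pattern nodes."""
        self.nodes.update((src, dst))
        self.links.append((kind, src, dst))

    def parts(self, pattern):
        """Sub-patterns composing `pattern` (composition hierarchy)."""
        return [s for k, s, d in self.links if k == "part_of" and d == pattern]

    def generalizations(self, pattern):
        """More general patterns (generalization hierarchy)."""
        return [d for k, s, d in self.links if k == "is_a" and s == pattern]

space = AtomSpace()
space.add("part_of", "wheel", "car")   # "car" is a composite pattern
space.add("part_of", "engine", "car")
space.add("is_a", "car", "vehicle")    # "vehicle" generalizes "car"

print(space.parts("car"))              # ['wheel', 'engine']
print(space.generalizations("car"))    # ['vehicle']
```

The point of the sketch is only that a pattern's identifying node, its sub-patterns, and its generalizations all live in one uniform link store, which is what makes nodes and links workable as basic building blocks.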
>
>
>
>
>
> >MARK>>>> ... I *AM* saying that the necessity of using probabilistic
> reasoning for day-to-day decision-making is vastly over-rated and has
> been a horrendous side-road for many/most projects because they are
> attempting to do it in situations where it is NOT appropriate.  The
> "increased, almost ubiquitous adaptation of probabilistic methods" is
> the herd mentality in action (not to mention the fact that it is
> directly orthogonal to work thirty years older).  Most of the time,
> most projects are using probabilistic methods to calculate a tenth
> place decimal of a truth value when their data isn't even sufficient
> for one.  If you've got a heavy-duty discovery system, probabilistic
> methods are ideal.  If you're trying to derive probabilities from a
> small number of English statements (like "this raven is white" and
> "most ravens are black"), you're seriously on the wrong track.  If you
> go on and on about how humans don't understand Bayesian reasoning,
> you're both correct and clueless in not recognizing that your very
> statement points out how little Bayesian reasoning has to do with most
> general intelligence.  Note, however, that I *do* believe that
> probabilistic methods *are* going to be critically important for
> activation for attention, etc.
>
>
>
> ED>>>>  I agree that many approaches accord too much importance to the
> numerical accuracy and Bayesian purity of their approach, and not
> enough importance to the justification for the Bayesian formulations
> they use.  I know of one case where I suggested using information that
> would almost certainly have improved a perception process, and the
> suggestion was refused because it would not fit within the system's
> probabilistic framework.  At an AAAI conference in 1997 I talked to a
> programmer for a big defense contractor who said he was a fan of fuzzy
> logic systems; that they were so much simpler to get up and running
> because you didn't have to worry about probabilistic purity.  He said
> his group that used fuzzy logic was getting things out the door that
> worked faster than the more probability-limited competition.  So
> obviously there is something to be said for not letting probabilistic
> purity get in the way of more reasonable approaches.
>
>
>
> But I still think probabilities are darn important.  Even your "this
> raven is white" and "most ravens are black" example involves notions
> of probability.  We attribute probabilities to such statements based
> on experience with the source of such statements or with similar
> sources of information, and the concept "most" is itself a
> probabilistic one.  The reason we humans are so good at reasoning from
> small data is our ability to estimate rough probabilities from similar
> or generic patterns.
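Ed's point that rough probabilities from "similar or generic patterns" let us reason from small data can be made concrete with standard Beta-Binomial updating. The prior and the numbers below are invented for illustration, not from any real data:

```python
# Hedged sketch: a prior drawn from generic patterns lets a system assign
# a usable probability from very little data (Beta-Binomial updating).
# The prior Beta(8, 2) and the observation counts are illustrative only.

def posterior_mean(successes, trials, prior_a, prior_b):
    """Mean of a Beta(prior_a, prior_b) prior updated with binomial evidence."""
    return (successes + prior_a) / (trials + prior_a + prior_b)

# Generic prior: bird species tend to have one strongly dominant plumage
# color, so start with a prior favoring "this raven species is black".
# Then observe only 3 ravens, all black:
p = posterior_mean(3, 3, 8.0, 2.0)
print(round(p, 3))  # 0.846 -- a confident estimate from just 3 observations
```

With no prior (Beta(1, 1), i.e. Laplace smoothing), the same 3 observations would give only (3+1)/(3+2) = 0.8 with much wider uncertainty; the generic prior is what carries most of the inference.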
>
>
>
> >MARK>>>>  ....The problem with probability-based conflict resolution is
> that it is a hack to get around insufficient knowledge rather than an
> attempt to figure out how to get more knowledge....
>
>
>
> ED>>>> This agrees with what I said above about not putting enough
> emphasis on selecting which probabilistic formulas are appropriate.
> But it doesn't argue against the importance of probabilities.  It
> argues against using them blindly.
>
>
>
>
> >>ED>>>>  So by "operating with small amounts of data," how small, very
> roughly, are you talking about?  And are you talking only about the
> active goals or sources of activation, which will be small, or are you
> saying that all the computation in the system will be dealing only
> with a small amount of data within, for example, one second of the
> processing of a human-level system operating at human-level speed?
>
>
>
> >MARK>>>>  I mean like the way humans reason, there is only concentration
> on a small number of objects -- which are only one link away from an
> almost inconceivable number of related things -- and then the brain
> can jump at least three of these links with lightning rapidity.
>
>
>
> ED>>>> So this implies you are not arguing against the idea that AGI
> will be dealing with massive data, just that its use will be focused
> by concentration on a relatively small number of sources of activation
> at once.
>
>
>
>
>
> >MARK>>>>  Ask Ben how much actual work has been done on activation
> control in very large, very sparse atom spaces in Novamente.  He'll
> tell you that it's a project for when he's further along.  I'll insist
> (as will
> Richard) that if it isn't baked in from the very beginning, you're
> probably going to have to go back to the beginning to repair the lack.
>
>
>
> ED>>>>  It is exactly such research I want to see funded.  It strikes
> me as one of the key things we must learn to do well to make powerful
> AGI.  But I think even with some fairly dumb activation control
> systems you could get useful results.  Such results would not be at
> all human-level in many ways, but in other ways they could be much
> more powerful, because such systems could deal with many more explicit
> facts and could input and output information at a much higher rate
> than humans.
>
>
>
> For example, consider the equivalent of the activation control (or
> search) algorithm in Google Sets.  They operate over huge data.  I bet
> the algorithm for calculating their search or activation is relatively
> simple (much, much, much less than a PhD thesis), and look what they
> can do.  So I think one path is to come up with applications that can
> use and reason with large data having roughly world-knowledge-like
> sparseness (such as NL data), start with relatively simple activation
> algorithms, and develop them from the ground up.
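A "relatively simple" activation algorithm of the kind Ed has in mind can be sketched as breadth-limited spreading activation over a sparse adjacency-list graph. The decay factor, threshold, hop count, and toy association data below are all illustrative assumptions, not any real system's parameters:

```python
# Hedged sketch of simple spreading activation over a sparse graph.
# decay, threshold, hops, and the toy graph are invented for illustration.

from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.05, hops=3):
    """Propagate activation outward from seed nodes for a few hops,
    splitting each node's energy among its neighbors and pruning
    contributions that fall below threshold."""
    activation = defaultdict(float)
    frontier = dict(seeds)                     # node -> incoming energy
    for _ in range(hops):
        next_frontier = defaultdict(float)
        for node, energy in frontier.items():
            activation[node] += energy
            out = graph.get(node, [])
            if not out:
                continue
            share = energy * decay / len(out)  # split decayed energy evenly
            if share >= threshold:             # prune negligible spread
                for nbr in out:
                    next_frontier[nbr] += share
        frontier = next_frontier
    return dict(activation)

# Toy sparse graph of associations:
graph = {
    "raven": ["bird", "black"],
    "bird":  ["wings", "feathers"],
    "black": ["color"],
}
result = spread_activation(graph, {"raven": 1.0})
print(sorted(result, key=result.get, reverse=True))  # "raven" ranks highest
```

The per-step cost is proportional only to the links leaving the currently active frontier, so world-knowledge-like sparseness is exactly what keeps an algorithm like this cheap over huge data.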
>
>
>
> >MARK>>>>  P.S.  Oh yeah -- if you were public enemy number one, I
> wouldn't bother answering you (and I probably should lay off of the
> fan-boy crap :-).
>
>
>
> ED>>>>  Thanks.
>
>
>
> I admit I am impressed with Novamente.  It is the best AGI
> architecture I currently know of; I am impressed with Ben; I believe
> there is a high probability that all the gaps you address could be
> largely fixed within five years with deep funding (which may never
> come); and since I want to get such deep funding for just the type of
> large atom-base work you say is so critical, I think it is important
> to focus on the potential for greatness that Novamente and somewhat
> similar systems have, rather than think only of its current gaps and
> potential problems.
>
>
>
> But of course, at the same time, we must look for and try to
> understand its gaps and potential problems so that we can remove them.
>
>
>
> Ed Porter
>
>
>
>
> -----Original Message-----
> From: Mark Waser [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 12, 2007 2:42 PM
> To: [email protected]
> Subject: Re: [agi] What best evidence for fast AI?
>
>
> >> It is NOT clear that Novamente documentation is NOT enabling, or
> >> could not be made enabling, with, say, one man-year of work.  Strong
> >> arguments could be made both ways.
>
>     I believe that Ben would argue that Novamente documentation is NOT
> enabling even with one man-year of work.  Ben?  There is still way too
> much *research* work to be done.
>
> >> But the standard for non-enablement is very arguably weaker than not
> >> requiring a miracle.  It would be more like "not requiring a leap of
> >> creativity that is outside the normal skill of talented PhDs trained
> >> in related fields".
>
> >> So although your position is reasonable, I hope you understand so is
> >> that on the other side.
>
>
>     My meant-to-be-humorous miracle phrasing is clearly throwing you.
> The phrase "not requiring a leap of creativity that is outside the
> normal skill of talented PhDs trained in related fields" works for me.
> Novamente is *definitely* not there yet.  I'm rather sure that Ben
> would agree -- as in, I'm not on the other side, *you* are on the
> other side from the system's designer.  Again, Ben please feel free to
> chime in.
>
> >> <much scaling stuff>
>
>     Remember that the brain is *massively* parallel.  Novamente and
> any other linear (or minorly-parallel) system is *not* going to work
> in the same fashion as the brain.  Novamente can be parallelized to
> some degree but *not* to anywhere near the same degree as the brain.
> I love your speculation and agree with it -- but it doesn't match
> near-term reality. We aren't going to have brain-equivalent
> parallelism anytime in the near future.
>
> >> "with regard to serious review of memory design" I don't know what
> >> you mean.  Are you attacking the node/link architecture, or what?
>
>     The node/link architecture is very generic and can be used for
> virtually anything.  There is no rational way to attack it.  It is, I
> believe, going to be the foundation for any system since any system
> can easily be translated into it.  Attacking the node/link
> architecture is like attacking assembly language or machine code.  Now
> -- are you going to write your AGI in assembly language?  If you're
> still at the level of arguing node/link, we're not communicating well.
>
> >> I don't understand this.  If there has been one major transformation
> >> in AI since the mid-80's, it is the increased, almost ubiquitous
> >> adaptation of probabilistic methods.  Are you claiming probabilistic
> >> reasoning is not important?
>
>     It depends upon what you mean by probabilistic reasoning.  I *AM*
> saying that the necessity of using probabilistic reasoning for
> day-to-day decision-making is vastly over-rated and has been a
> horrendous side-road for many/most projects because they are
> attempting to do it in situations where it is NOT appropriate.  The
> "increased, almost ubiquitous adaptation of probabilistic methods" is
> the herd mentality in action (not to mention the fact that it is
> directly orthogonal to work thirty years older).  Most of the time,
> most projects are using probabilistic methods to calculate a tenth
> place decimal of a truth value when their data isn't even sufficient
> for one.  If you've got a heavy-duty discovery system, probabilistic
> methods are ideal.  If you're trying to derive probabilities from a
> small number of English statements (like "this raven is white" and
> "most ravens are black"), you're seriously on the wrong track.  If you
> go on and on about how humans don't understand Bayesian reasoning,
> you're both correct and clueless in not recognizing that your very
> statement points out how little Bayesian reasoning has to do with most
> general intelligence.  Note, however, that I *do* believe that
> probabilistic methods *are* going to be critically important for
> activation for attention, etc.
>
> >> With regard to knowledge-conflict-resolution, Novamente's
> >> probabilistic reasoning is designed to deal with it.  Most of the
> >> other systems I know of that deal with
> >> knowledge-conflict-resolution, such as constraint relaxation
> >> techniques, are probability based.
>
>     This is where I believe that probabilistic reasoning is most often
> improperly used though I don't believe that "most"
> constraint-relaxation systems are probability-based (except,
> occasionally as an add-on to just why a given constraint was relaxed
> rather than another).  The problem with probability-based conflict
> resolution is that it is a hack to get around insufficient knowledge
> rather than an attempt to figure out how to get more knowledge.  It
> works because you always take the highest probability choice -- except
> when the system tells you that the sauna is hot because it doesn't
> know about the ice frozen over the top.  In data-rich constrained
> environments, probabilistic reasoning works (and neural networks are
> very successful).  In everyday life . . . . it still works because
> all your probabilities are near 100% . . . . except when they suddenly
> aren't.
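Mark's sauna example reduces to a one-line mechanism: probability-based conflict resolution just selects the highest-probability interpretation, which works until an unmodeled condition appears. A toy illustration, with all numbers invented:

```python
# Toy illustration of probability-based conflict resolution: always take
# the highest-probability choice.  The beliefs and numbers are invented.

def resolve(beliefs):
    """Pick the interpretation with the highest probability."""
    return max(beliefs, key=beliefs.get)

# Learned from data-rich experience: saunas are almost always hot.
beliefs = {"sauna is hot": 0.98, "sauna is cold": 0.02}
print(resolve(beliefs))  # "sauna is hot" -- even when ice is frozen over the top
```

The failure mode is not in the argmax itself but in the belief set: "ice frozen over the top" was never a hypothesis, so no amount of probability mass could select it.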
>
> >> So by "operating with small amounts of data," how small, very
> >> roughly, are you talking about?  And are you talking only about the
> >> active goals or sources of activation, which will be small, or are
> >> you saying that all the computation in the system will be dealing
> >> only with a small amount of data within, for example, one second of
> >> the processing of a human-level system operating at human-level
> >> speed?
>
>     I mean like the way humans reason, there is only concentration on
> a small number of objects -- which are only one link away from an
> almost inconceivable number of related things -- and then the brain
> can jump at least three of these links with lightning rapidity.  Once
> again, the brain is *massively* parallel and operates with a *huge*
> sparse matrix. Activation is *far* more important than truth
> probabilities and much of the focus is the other way (and activation
> is a really tough nut to solve as you rightly point out with your
> comments about activation control). Ask Ben how much actual work has
> been done on activation control in very large, very sparse atom spaces
> in Novamente.  He'll tell you that it's a project for when he's
> further along.  I'll insist (as will Richard) that if it isn't baked
> in from the very beginning, you're probably going to have to go back
> to the beginning to repair the lack.
>
>         Mark
>
> P.S.  Oh yeah -- if you were public enemy number one, I wouldn't
> bother answering you (and I probably should lay off of the fan-boy
> crap :-).
>
>   _____
>
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;

