So, do you or don't you model uncertainty, contradictory evidence, degree
of similarity, and all those good things?

And what is a "CA", or don't I want to know?



Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 10:39 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize



Hi Edward,



I don't see any problems dealing with either discrete or continuous. In
fact, in some ways it would be nice to eliminate discrete and just operate
in continuous mode, but discrete maps very well onto binary computers.
Continuous is really just a lot of discrete, with the density depending on
available resources, or defined as ranges in sets, other descriptors, and
so on, in different ways.
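
To illustrate what I mean (this is just a toy sketch with made-up names and
numbers, not code from my project), the same continuous value can be handled
at whatever discrete density resources allow:

    # Toy illustration only: a continuous value mapped into discrete bins,
    # with the bin count (density) set by whatever resources allow.
    def quantize(x, lo, hi, n_bins):
        """Return the index of the bin containing x in [lo, hi]."""
        x = min(max(x, lo), hi)              # clamp into range
        width = (hi - lo) / n_bins           # bins get narrower as n_bins grows
        return min(int((x - lo) / width), n_bins - 1)

    print(quantize(0.637, 0.0, 1.0, 8))      # 5   -- coarse, cheap
    print(quantize(0.637, 0.0, 1.0, 1024))   # 652 -- denser, costlier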



I'm not really well versed in NARS and Novamente, so I can't comment on
them, and they are light years down the road from what I'm doing: they are
basically at the implementation stage, closer to realized utility, more
than just theories.



Oh, those 55,000 (you said 80,000) lines of code are an AI product I am
building, so it is not AGI, but it basically has stubs for AGI, or could be
used by an AGI.



But the methodology I am talking about does seem workable with data from
the real world. It's hard for me to find things that it doesn't work with,
although real tests still need to be performed. BTW, I'm sure this type of
thinking has been well analyzed by abstract algebra mathematicians.
Computability issues exist, and these may limit how far the theory can be
taken in practice. I don't yet know enough of this math to work it through
deeply enough for a feasibility study, and much of it is still up in the
air…



John







What I found interesting is that, described at this very general level,
what this is saying is actually related to my view of AGI, except that it
appears to be based on a totally crisp, 1-or-0 view of the world.  If that
is correct, it may be very valuable in certain domains, which are
themselves totally or almost totally crisp, but it won't work for most
human-like thinking, because most human concepts, and what they describe in
the real world, are not crisp.



THAT IS, UNLESS YOU PLAN TO MODEL CONCEPTUAL FLUIDITY, AND UNCERTAINTY
ITSELF, IN A TOTALLY CRISP WAY, which is obviously doable at some level.  I
guess that is what you are referring to when you say our minds do crisp
thinking all the time.  Even most of us anti-crispies plan to implement our
fluid systems on digital machinery using binary representation, which we
hope will be crisp (though at the 22nm node it might be a little less than
totally crisp).
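
(To make the point concrete, here is a trivial, made-up sketch, not
anyone's actual system, of a graded, "fluid" concept computed entirely with
crisp binary arithmetic:)

    # Made-up sketch: a graded ("fluid") concept membership, implemented
    # with nothing but crisp floating-point arithmetic on binary hardware.
    def tallness(height_cm):
        """Degree of membership in 'tall': 0.0 at 160 cm, rising to 1.0 at 190 cm."""
        return max(0.0, min(1.0, (height_cm - 160.0) / 30.0))

    print(tallness(155))   # 0.0  -- clearly not tall
    print(tallness(175))   # 0.5  -- borderline
    print(tallness(195))   # 1.0  -- clearly tall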



But the issue is: do your crisp techniques efficiently learn and represent
the fluidity of mental concepts, the non-literal similarities, the many
apparent contradictions, and the uncertainty that dominate human thinking
and sensory information about the real world?



And if so, how does your approach differ from the Novamente- or Pei
Wang-like approaches?



And if so, how well are your (was it?) 80,000 lines of code working at
actually representing and making sense of the shadows projected on the
walls of your AGI's cave by sensations (or data) from the "real" world?



Ed Porter,



P.S. Re "CA": maybe I am well versed in them, but I don't know what the
acronym stands for.  If it wouldn't be too much trouble, could you please
educate me on the subject?


