> I see a singularity, if it occurs at all, to be at least a hundred years
> out.

To use Kurzweil's language, you're not thinking in "exponential time" ;-)

> The artificial intelligence problem is much more difficult
> than most people imagine it to be.

"Most people" have close to zero basis for even thinking about the topic
in a useful way.

And most professional, academic or industry AI folks are more
pessimistic than you are.

> But what is it about Novamente that will allow it in a few years' time
> to comprehend its own computer code and intelligently re-write it
> (especially a system as complex as Novamente)?

I'm not going to try to summarize the key ideas underlying Novamente in
an email.  I have been asked to write a nontechnical overview of the NM
approach to AGI for a popular website, and may find time for it later
this month... if so, I'll post a link to this list.

Obviously, I think I have solved some fundamental issues related to
implementing general cognition on contemporary computers.  I believe
the cognitive mechanisms designed for NM will be adequate to lead to
the emergence, within the system, of the key emergent structures of
mind (self, will, focused awareness) -- and from these key emergent
structures comes the capability for ever-increasing intelligence.

Specific timing estimates for NM are hard to come by -- especially
because of funding vagaries (currently, progress is steady but slow for
this reason), because of the general difficulty of estimating the rate
of progress of any large-scale software project, and not to mention
various research uncertainties.  But 100 years is way off.

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303