Ben and Mike,

WOW, two WONDERFUL in-your-face postings that CLEARLY delineate a central AGI
issue. Since my original posting ended with a question and Ben took a shot
at the question, I would like to know a little more...

On 6/8/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Those of us w/ experience in the field have heard the objections you
> and Tintner are making hundreds or thousands of times before.  We have
> already processed the arguments you're making and found them wanting.
> And we have already gotten tired of arguing those same points, back in
> our undergrad or grad school days (or analogous time periods for those
> who didn't get PhD's...).


I think that the underlying problem here is that Mike and I haven't yet
really heard the other side. Since you and others are presumably looking for
financing, you too will need these arguments encapsulated in some sort of
"read this" form you can throw at disbelievers.

If your statement above is indeed true (and I believe that it is), then you
ARE correct that we shouldn't be arguing this here. You should simply throw
an article at us to make your point. If this article doesn't yet exist, then
you MUST create it if you are ever to have ANY chance at funding. You might
want to invite Mike and me to "wring it out" before you publish it.

> The points you guys are making are not as original as you seem to
> think.


I don't think we made any claim of originality, except perhaps in
expression.

> And the reason we don't take time to argue against them in
> detail is that it's boring and we're busy.  These points have already
> been extensively argued by others in the published literature over the
> past few decades; but I also don't want to take the time to dig up
> citations for you....


You need just ONE GOOD citation on which to hang your future hopes of
funding. More than that, and your funding will disappear in a pile of paper.

> I'm not saying that I have an argument in favor of my approach, that
> would convince a skeptic.


I have actually gotten funding for a project where the "expert" was a
skeptic who advised against funding! My argument went something like "Note
the lack of any technical objections in his report. What he is REALLY saying
is that HE (the Director of an EE Department at a major university) cannot
do this, and I agree. However, my team has a fresh approach and the energy
to succeed that he simply does not have."

> I know I don't.  The only argument that
> will convince a skeptic is to complete a functional human-level AGI.


You are planning to first succeed, and then go for funding?! This sounds
suicidal.

> And even that won't be enough for some skeptics.  (Maybe a fully
> rigorous formal theory of how to create an AGI with a certain
> intelligence level given specific resource constraints would convince
> some skeptics, but not many I suppose -- discussions would devolve
> into quibbles over the definition of intelligence, and other
> particular mathematical assumptions of the sort that any formal
> analysis must make.)


I suspect that whatever you write will be good for something, even though it
may fall far short of AGI.

> OK.  Back to work on the OpenCog Prime documentation, which IMO is a
> better use of my time than endlessly repeating the arguments from
> philosophy-of-mind and cog-sci class on an email list ;-)


Again, please don't repeat anything here, just show us what you would
obviously have to show someone considering funding your efforts.

> Sorry if my tone seems obnoxious, but I didn't find your description
> of those of us working on actual AI systems as having a "herd
> mentality" very appealing.


Oops, sorry about that. I meant no disrespect.

> The truth is, one of the big problems in
> the field is that nearly everyone working on a concrete AI system has
> **their own** particular idea of how to do it, and wants to proceed
> independently rather than compromising with others on various design
> points.


YES. The lack of usable software interfaces does indeed cut deeply. A good
proposal here could go a LONG way toward propelling the AGI programming
field to success.
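
To make "usable software interface" concrete, here is a purely hypothetical
sketch (every name in it is invented, nobody's actual API) of the sort of
minimal C-level contract that would let independently developed reasoners,
NNs, and fuzzy modules plug into one another:

    /* Hypothetical plug-in contract for an AGI subsystem. Any
       module that can score a proposition could implement this. */
    typedef struct {
        const char *proposition;  /* e.g. "raining(Seattle)"      */
        double      strength;     /* degree of truth, 0..1        */
        double      confidence;   /* weight of evidence, 0..1     */
    } belief_t;

    typedef struct {
        belief_t (*query)(void *self, const char *proposition);
        void     (*assert_belief)(void *self, const belief_t *b);
        void       *self;         /* module's private state       */
    } agi_module_t;

Nothing magic there, but agreeing on even that much would let projects swap
components instead of each rebuilding everything from scratch.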

> It's hardly a herd mentality -- the different systems out
> there vary wildly in many respects.


While the details vary widely, Mike and I were addressing the very concept
of directly writing code to perform functions (e.g. "thinking") that
apparently develop on their own as emergent properties. Programming such
functions in by hand forecloses many opportunities, e.g. developing in
variant ways to address problems in new paradigms, so direct programming
would seem to lead to lesser rather than greater "intelligence". Am I
correct that this is indeed a central thread in all of the different
systems that you had in mind?

Note in passing that "simulations" can sometimes be "compiled" into
executable code. Now that the bidirectional equivalence of NN and "fuzzy
logic" approaches has been established, and given that people often program
fuzzy logic methods directly into C/C++ code (especially for economic
models), there is now a (contorted) path to compile NNs directly into code.
From that, it would presumably be possible to also compile brain maps into
code. Hence, I am NOT rejecting the idea of code being intelligent, just
the idea of us mere mortals ever writing and maintaining such code, at
least with our present tools.
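
To make that concrete, here is a minimal sketch of what the end of that
path looks like: a tiny trained 2-input, 2-hidden, 1-output feedforward net
flattened into straight-line C. The weights are made up purely for
illustration; a real NN-to-code compiler would emit the same shape of
thing, just millions of lines of it:

    #include <math.h>

    /* Logistic activation, the usual choice for small NNs. */
    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    /* A 2-2-1 net with its trained weights (invented here for
       illustration) frozen into constants -- no loops, no weight
       tables, just straight-line arithmetic.                      */
    double net(double x0, double x1)
    {
        double h0 = sigmoid( 0.7 * x0 - 1.3 * x1 + 0.2); /* hidden unit 0 */
        double h1 = sigmoid(-0.4 * x0 + 0.9 * x1 - 0.6); /* hidden unit 1 */
        return sigmoid(1.8 * h0 - 2.1 * h1 + 0.3);       /* output unit   */
    }

Each unit becomes one line of arithmetic, which is exactly why I trust the
compiler but not the mortal: nobody could write or maintain the
brain-map-sized version of this by hand.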

Steve Richfield
===============


> On Sun, Jun 8, 2008 at 3:28 PM, Steve Richfield
> <[EMAIL PROTECTED]> wrote:
> > Mike Tintner, et al,
> >
> > After failing to get ANY response to what I thought was an important
> > point
> > (Paradigm Shifting regarding Consciousness) I went back through my AGI
> > inbox
> > to see what other postings by others weren't getting any responses. Mike
> > Tintner was way ahead of me in no-response postings.
> >
> > A quick scan showed that these also tended to address high-level issues
> > that
> > challenge the contemporary herd mentality. In short, most people on this
> > list appear to be interested only in HOW to straight-line program an AGI
> > (with the implicit assumption that we operate anything at all like we
> > appear
> > to operate), but not in WHAT to program, and most especially not in any
> > apparent insurmountable barriers to successful open-ended capabilities,
> > where attention would seem to be crucial to ultimate success.
> >
> > Anyone who has been in high-tech for a few years KNOWS that success can
> > come
> > only after you fully understand what you must overcome to succeed. Hence,
> > based on my own past personal experiences and present observations here,
> > present efforts here would seem to be doomed to fail - for personal if
> > not
> > for technological reasons.
> >
> > Normally I would simply dismiss this as rookie error, but I know that at
> > least some of the people on this list have been around as long as I have
> > been, and hence they certainly should know better since they have
> > doubtless
> > seen many other exuberant rookies fall into similar swamps of programming
> > complex systems without adequate analysis.
> >
> > Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
> > THINKING?
> >
> > Steve Richfield
> >
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "If men cease to believe that they will one day become gods then they
> will surely become worms."
> -- Henry Miller
>
>



