Re: Cognitive Science 'unusable' for AGI [WAS Re: [agi] Pearls Before Swine...]

2008-06-12 Thread Steve Richfield
Richard,

On 6/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 I am using cognitive science as a basis for AGI development,


 If my fear of paradigm shifting proves to be unfounded, then you may well
be right. However, I would be surprised if there weren't a LOT of paradigm
shifting going on. It would sure be nice to know rather than taking such a
big gamble. Only time will tell for sure.


 and finding it not only appropriate, but IMO the only viable approach.


This really boils down to the meaning of "viable." I was asserting that the
cost of gathering more information (e.g. with a scanning UV fluorescence
microscope) was probably smaller than even a single AGI development project
- if you count the true value of your very talented efforts. Hence, this
boils down to what your particular skills are, which I presume are in AI
programming. On the other hand, I have worked in a major university's
neurological surgery lab, wrote programs that interacted with individual
neurons, etc., and hence probably feel warmer about working the lab side
of this problem.

Note that no one has funded neuroscience research to determine information
processing functionality - it has ALL been to support research targeting
various illnesses. The IP feedback that has come out of those efforts is a
byproduct, NOT the primary goal. It would take rather little
experimentation to make a BIG dent in the many unknowns relating to AGI if
that were the primary goal.

BTW, neuroscience researchers are in the SAME sort of employment warp as AI
people are. All of the research money is now going to genetic research,
leaving classical neuroscience research stalled. They aren't even working on
new operations that are needed to address various conditions that present
operations fail to address. A friend of mine now holds a dual post, as both
the chairman of a neurological surgery department and as the director of
research at a major university's health sciences complex. He is appalled at
where the research money is now being thrown, and how little will probably
ever come of it. He must administer this misdirected research, while also
administering a surgical team that still must often work in the dark due to
inadequate research. He feels helpless in this crazy situation.

The good news here is that even a few dollars put into IP-related research
would probably return a LOT of useful information for AGI folks. All I was
saying is that somehow, someone needs to do this work.

Steve Richfield





Cognitive Science 'unusable' for AGI [WAS Re: [agi] Pearls Before Swine...]

2008-06-11 Thread Richard Loosemore

Steve Richfield wrote:

Richard,

On 6/8/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:


You also failed to address my own previous response to you:  I
basically said that you make remarks as if the whole of cognitive
science does not exist.

 
Quite the contrary. My point is that not only does cognitive 
science fail to provide adequate guidance to develop anything like an 
AGI, but further, paradigm shifting obfuscates things to the point that 
this vast wealth of knowledge is unusable for _DEVELOPMENT_.
 
BTW, your comments here suggested that I may not have made my point 
about paradigm shifting where the external observed functionality may 
be translated to/from a very different internal 
representation/functionality. This of course leads efforts to observe
cognition astray, by derailing consideration of what might actually be
happening.
 
However, TESTING is quite another matter, as cognitive science provides
many capability touch points that show whether an AGI is working anything
at all like us.
 
So yes, cognitive science is alive and well, but probably unusable to 
provide a basis for AGI development.
 
Steve Richfield


What is this foolishness?

I am using cognitive science as a basis for AGI development, and finding 
it not only appropriate, but IMO the only viable approach.



Richard Loosemore




Re: [agi] Pearls Before Swine...

2008-06-10 Thread Jim Bromer
Ben wrote:

I think that AGI, right now, could also be analyzed as having four
main approaches

1-- logic-based ... including a host of different logic formalisms

2-- neural net/ brain simulation based ... including some biologically
quasi-realistic systems and some systems that are more formal and
abstract

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards

4-- miscellaneous ... evolutionary learning, etc. etc.

It's hardly a herd, it's more of a chaos ;-p

-- Ben 
---

I think you have to include complexity.  Although complexity problems can be / 
should be seen as an issue relevant to all AGI paradigms, the significance of 
the problem makes it a primary concern to me.  I would say that I am interested 
in the problems of complexity and integration of concepts.
Jim Bromer


Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Jim, Ben, et al,

On 6/10/08, Jim Bromer [EMAIL PROTECTED] wrote:

 Ben wrote:

 I think that AGI, right now,


The thing that stumped me when I first got here is understanding just
what is meant here by AGI. It is NOT the process that goes on behind our
eyeballs, as that is clearly an emergent property that can result in VERY
different functioning and brain mappings between individuals. Neither is it
simply "anything that works," as shown by the instant rejection of Dr.
Eliza's methods. No, it is something in between these two extremes, something
like "programs that learn to behave intelligently." Perhaps Ben or someone
else could propose a better brief definition that would be widely accepted here.

could also be analyzed as having four
 main approaches

 1-- logic-based ... including a host of different logic formalisms


Mike and I have been challenging the overall feasibility of these
approaches, which is what started this thread. Hence, let's avoid thread
recursion.

2-- neural net/ brain simulation based ... including some biologically
 quasi-realistic systems and some systems that are more formal and
 abstract

 3-- integrative ... which itself is a very broad category with a lot
 of heterogeneity ... including e.g. systems composed of wholly
 distinct black boxes versus systems that have intricate real-time
 feedbacks between different components' innards


Isn't this just #1 expanded to cover some obvious shortcomings?

4-- miscellaneous ... evolutionary learning, etc. etc.


5-- Carefully analyzed and simply programmed approaches to accomplish tasks
that would seem to require intelligence, but (by most definitions) are not
intelligent. Chess-playing programs and Dr. Eliza fall into this bin.
Apparently, Ben is intentionally excluding this bin from consideration. The
MAJOR importance of this particular bin is that other forms of AGI are as
worthless at doing this sort of work as people are at playing chess, because
simple programs can easily do this sort of work RIGHT NOW, without further
development. Hence, many of AGI's stated hopes and dreams need to be
retargeted to doing things that can NOT be done by simple programs.

It's hardly a herd, it's more of a chaos ;-p


As we are discovering here, herds can always be subdivided into clusters.
But then, we start arguing about what should be clustered together.

-- Ben
 ---

 I think you have to include complexity.  Although complexity problems can
 be / should be seen as an issue relevant to all AGI paradigms, the
 significance of the problem makes it a primary concern to me.  I would say
 that I am interested in the problems of complexity and integration of
 concepts.


It is unclear how Dr. Eliza's methods fail to do this, except that people
must code the machine knowledge rather than having the program learn it from
observation/experience. Note that Dr. Eliza appears to be able to handle the
hand-coded machine knowledge of the entire world. Note that the big
problems in the world are generally NOT intelligence-limited, but rather
appear to be approach-limited. To illustrate, one man, Saddam Hussein, did
something in Iraq that the entire US military backed by the nearly limitless
wealth of the US government can't even come close to doing - keep the peace,
albeit by leaving a few dead bodies in his wake. The limitation in
intelligence was in failing to see that his methods were *necessary* to keep
the peace in that particular heterogeneous society, so our only rational
choices were to either leave him alone to run Iraq, or invade and adopt his
methods. Having done neither, we can only watch things get worse, and Worse, and WORSE...
Now that we have killed him, we have no apparent way back out.

Alternatively, there are now programs (mostly hidden inside the CIA) to
recognize patterns in apparently random messages, used as the first step in
breaking secret codes.
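
For the curious, the classic first step in detecting nonrandom structure in
an apparently random message is the index of coincidence. Here is a minimal
sketch in Python - textbook cryptanalysis, and only an assumption about what
such programs might do, nothing specific to the programs alluded to above:

    from collections import Counter

    def index_of_coincidence(text: str) -> float:
        # Probability that two randomly chosen letters of the text match.
        # Uniformly random letters give ~0.038; English gives ~0.066, so a
        # high value flags hidden structure in "apparently random" data.
        letters = [c for c in text.upper() if c.isalpha()]
        n = len(letters)
        if n < 2:
            return 0.0
        counts = Counter(letters)
        return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

    print(index_of_coincidence("ATTACK AT DAWN" * 10))  # near English's ~0.066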

Perhaps you could better define what you mean by complexity to obviate my
questions?

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Bob,

On 6/8/08, Bob Mottram [EMAIL PROTECTED] wrote:

 2008/6/8 Ben Goertzel [EMAIL PROTECTED]:
  Those of us w/ experience in the field have heard the objections you
  and Tintner are making hundreds or thousands of times before.  We have
  already processed the arguments you're making and found them wanting.


 I entirely agree with this response.  To anyone who does believe that
 they're ahead of the game and being ignored, my advice would be to
 produce some working system which can be demonstrated - even if it's
 fairly minimalist.  It's much harder for people to ignore a working
 demo than mere philosophical debate or speculation.


Dr. Eliza does that rather well, showing how a really simple program can
deliver part of what AGI promises in the long distant future, with a good
user interface and no dangers of it taking over the world. Further, it
better delimits what an AGI must be able to do to be valuable, as
duplicating the function of a simple program should NOT be on the list of
hoped-for capabilities.

The BIG lesson of Dr. Eliza is that it hinges on one particular fragment of
machine knowledge that does NOT appear in Internet postings, casual
conversations, or even direct experience. That fragment is what people
typically say to demonstrate their ignorance of an issue. Every expert knows
these utterances, but they rarely if ever appear in text. Give authors
suitable blanks to fill in, and Dr. Eliza "comes to life." Without that
level of information, I seriously doubt the future of any AGI system.
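
To make that mechanism concrete, here is a minimal sketch of knowledge
fragments keyed by ignorance-demonstrating phrases. The cue phrases and
advice strings are invented placeholders, not Dr. Eliza's actual data:

    # Each fragment pairs a phrase people typically say when unaware of an
    # issue with the expert knowledge that the phrase should trigger.
    KNOWLEDGE = [
        {"cue": "no matter how much i sleep",
         "issue": "possible thyroid involvement",
         "advice": "Fatigue unrelieved by sleep is a classic prompt for testing."},
        {"cue": "tried everything for my headaches",
         "issue": "possible dietary trigger",
         "advice": "Elimination diets are often overlooked for chronic headaches."},
    ]

    def respond(utterance: str) -> list[str]:
        # Fire every fragment whose cue phrase appears in the utterance.
        text = utterance.lower()
        return [f"{k['issue']}: {k['advice']}"
                for k in KNOWLEDGE if k["cue"] in text]

    for line in respond("I'm exhausted no matter how much I sleep."):
        print(line)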

In short, I have produced my demo and presented it to international
audiences at AI conferences, and hereby return this particular ball to your
court.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Matthias,

On 6/8/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

  In short, most people on this
  list appear to be interested only in HOW to straight-line program an AGI
  (with the implicit assumption that we operate anything at all like we
 appear
  to operate), but not in WHAT to program, and most especially not in any
  apparent insurmountable barriers to successful open-ended capabilities,
  where attention would seem to be crucial to ultimate success.
 
  Anyone who has been in high-tech for a few years KNOWS that success can
 come
  only after you fully understand what you must overcome to succeed. Hence,
  based on my own past personal experiences and present observations here,
  present efforts here would seem to be doomed to fail - for personal if
 not
  for technological reasons.

 ---

 Philosophers, biologists, and cognitive scientists have worked for many,
 many years to model the algorithms in the brain, but only with success in
 some details. The overall model of human GI still does not exist.

 Should we really begin programming AGI only after fully understanding?


I was attempting to make two points that were apparently missed:
1.  A machine (e.g. a scanning UV fluorescence microscope) could be made
for about the cost of a single supercomputer and would provide enormous
clues, if not outright answers, to many of the presently outstanding
questions. The lack of funding for THAT shows a general lack of interest in
this field by anyone with money.
2.  Hence, with a lack of monetary interest and a lack of a good story as to
why this should succeed, there would seem to be little prospect for success,
because even a completely successful AGI program would then need money to
develop its marketing and distribution. That Dr. Eliza has achieved some of
the more valuable goals, but has yet to raise any money, shows that the
world is NOT looking to beat a path to this better mousetrap.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Ben,

On 6/8/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Nothing will ever be attempted if all possible objections must be
 first overcome   - Dr Samuel Johnson


... to whose satisfaction? Here on this forum, there are only two groups of
judges:
1.  The people who are actually writing the code, and
2.  People who might fund the above.

Note that *I* am NOT on this list. However, I believe that it is important
for you to be able to speak to objections, even though your words may not
dissuade the objectors, and to produce some sort of documentation of these
to throw at experts that future investors might bring in. As I have
mentioned on prior postings, it IS possible to overcome contrary opinions by
highly credentialed experts, but you absolutely MUST have your act
together to have a chance at this.

Note that when faced with two people, one of whom says that something is
impossible and the other of whom says that he can do it, I (having been in
this spot myself on several occasions) almost always bet on the guy who
says he can do it. That having been said, just what are my objections
here?! They are that you haven't adequately explained (to me) just how you
are going to blow past the obvious challenges that lie ahead, which strongly
suggests that you haven't adequately considered them. It is that careful
consideration of challenges that separates the angels from the fools who
rush in. Given significant evidence of that careful consideration, I would
be inclined to bet on your success, even though I might disagree with some
of your evaluations.

Yes, I heard you explain how experimentation is still needed to figure out
what approaches might work, and which approaches should be consigned to the
bit bucket. That of course is research, and the vast majority of research
leads nowhere. Planned experimental research is NOT a substitute for careful
consideration of stated challenges, unless coupled with some sort of
explanation as to how the research should provide a path past those
challenges (the scientific method that tests theories). Hence, I was just
looking for some hopeful words to describe a potential success path, and not
any sort of proof of future success.

I completely agree that words (e.g. mine) are no substitute for running
code, but neither is running code any substitute for explanatory words,
unless of course the code is to only exist on the author's computer.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Richard,

On 6/8/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 You also failed to address my own previous response to you:  I basically
 said that you make remarks as if the whole of cognitive science does not
 exist.


Quite the contrary. My point is that not only does cognitive science fail to
provide adequate guidance to develop anything like an AGI, but further,
paradigm shifting obfuscates things to the point that this vast wealth of
knowledge is unusable for *DEVELOPMENT*.

BTW, your comments here suggested that I may not have made my point about
paradigm shifting where the external observed functionality may be
translated to/from a very different internal representation/functionality.
This of course leads efforts to observe cognition astray, by derailing
consideration of what might actually be happening.

However, TESTING is quite another matter, as cognitive science provides
many capability touch points that show whether an AGI is working anything at
all like us.

So yes, cognitive science is alive and well, but probably unusable to
provide a basis for AGI development.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Jim Bromer
- Original Message 

From: Steve Richfield [EMAIL PROTECTED]

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards
 
Isn't this just #1 expanded to cover some obvious shortcomings?
---
No.


Re: [agi] Pearls Before Swine...

2008-06-08 Thread Stephen Reed
Hi Steve,
I'm thinking about the Texai bootstrap dialog system, and in particular about
adding grammar rules and vocabulary for the utterance "Compile a class."
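
For concreteness, a grammar rule plus vocabulary for that utterance might
look roughly like the sketch below. The rule and lexicon formats (and Python
itself) are illustrative inventions, not Texai's actual representation:

    # Toy lexicon and one imperative rule: Verb Det Noun -> (action, object).
    LEXICON = {
        "compile": {"pos": "verb", "action": "COMPILE"},
        "a":       {"pos": "det"},
        "class":   {"pos": "noun", "object": "JAVA_CLASS"},
    }

    def parse_imperative(utterance: str):
        words = utterance.lower().rstrip(".!").split()
        tags = [LEXICON.get(w, {}).get("pos") for w in words]
        if tags == ["verb", "det", "noun"]:
            return (LEXICON[words[0]]["action"], LEXICON[words[2]]["object"])
        return None  # utterance not covered by this rule

    print(parse_imperative("Compile a class."))  # ('COMPILE', 'JAVA_CLASS')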

Cheers.
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: Steve Richfield [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, June 8, 2008 2:28:07 AM
Subject: [agi] Pearls Before Swine...


Mike Tintner, et al,
 
After failing to get ANY response to what I thought was an important point 
(Paradigm Shifting regarding Consciousness) I went back through my AGI inbox to 
see what other postings by others weren't getting any responses. Mike Tintner 
was way ahead of me in no-response postings.
 
A quick scan showed that these also tended to address high-level issues that 
challenge the contemporary herd mentality. In short, most people on this list 
appear to be interested only in HOW to straight-line program an AGI (with the 
implicit assumption that we operate anything at all like we appear to operate), 
but not in WHAT to program, and most especially not in any apparent 
insurmountable barriers to successful open-ended capabilities, where attention 
would seem to be crucial to ultimate success.

Anyone who has been in high-tech for a few years KNOWS that success can come 
only after you fully understand what you must overcome to succeed. Hence, based 
on my own past personal experiences and present observations here, present 
efforts here would seem to be doomed to fail - for personal if not for 
technological reasons.
 
Normally I would simply dismiss this as rookie error, but I know that at least 
some of the people on this list have been around as long as I have been, and 
hence they certainly should know better since they have doubtless seen many 
other exuberant rookies fall into similar swamps of programming complex systems 
without adequate analysis.
 
Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU 
THINKING?
 
Steve Richfield
 


 


Re: [agi] Pearls Before Swine...

2008-06-08 Thread Mark Waser
Hi Steve,

I'm thinking about the solution to the Friendliness problem, and in 
particular desperately need to finish my paper on it for the AAAI Fall 
Symposium that is due by next Sunday.

What I would suggest, however, is that quickly formatted e-mail postings are 
exactly the wrong method for addressing high-level issues that challenge the 
contemporary herd mentality.  Part of the problem is that quick e-mails always 
(must) assume agreement on foundational issues and/or (must) assume that the 
reader will agree with (or take your word for) many points.  A much better way 
of getting your point across (and proving that it is a valid point) is to write 
yourself a nice six-to-twelve page publishable-quality scientific paper.  Doing 
so will be difficult and time-consuming but ultimately far more worthwhile than 
just throwing something out to be consumed and probably ultimately ignored by a 
mailing list of bigots.

Mark

P.S.  Mike Tintner is way ahead of everyone in no-response postings not
because he challenges the herd mentality but because he has no clue of what
he is talking about and endlessly repeats variations of the same point
*without* successfully proving its foundations, successfully answering
criticism, or even extending his point into something that is worthwhile and usable as 
opposed to just random speculation.  Also, bleating about the fact that you're 
not being answered because you're challenging the herd, even if true, is only 
counter-productive and whiny and more likely to get you ignored -- especially 
if you do it in all caps.

Crocker's rules as always (with the waste of my time exception :-)
  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Sunday, June 08, 2008 5:35 AM
  Subject: Re: [agi] Pearls Before Swine...


  Hi Steve,
  I'm thinking about the Texai bootstrap dialog system, and in particular about 
adding grammar rules and vocabulary for the utterance "Compile a class."

  Cheers.
  -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860



  - Original Message 
  From: Steve Richfield [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Sunday, June 8, 2008 2:28:07 AM
  Subject: [agi] Pearls Before Swine...


  Mike Tintner, et al,

  After failing to get ANY response to what I thought was an important point 
(Paradigm Shifting regarding Consciousness) I went back through my AGI inbox to 
see what other postings by others weren't getting any responses. Mike Tintner 
was way ahead of me in no-response postings.

  A quick scan showed that these also tended to address high-level issues that 
challenge the contemporary herd mentality. In short, most people on this list 
appear to be interested only in HOW to straight-line program an AGI (with the 
implicit assumption that we operate anything at all like we appear to operate), 
but not in WHAT to program, and most especially not in any apparent 
insurmountable barriers to successful open-ended capabilities, where attention 
would seem to be crucial to ultimate success.

  Anyone who has been in high-tech for a few years KNOWS that success can come 
only after you fully understand what you must overcome to succeed. Hence, based 
on my own past personal experiences and present observations here, present 
efforts here would seem to be doomed to fail - for personal if not for 
technological reasons.

  Normally I would simply dismiss this as rookie error, but I know that at 
least some of the people on this list have been around as long as I have been, 
and hence they certainly should know better since they have doubtless seen many 
other exuberant rookies fall into similar swamps of programming complex systems 
without adequate analysis.

  Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU 
THINKING?

  Steve Richfield




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
Steve,

Those of us w/ experience in the field have heard the objections you
and Tintner are making hundreds or thousands of times before.  We have
already processed the arguments you're making and found them wanting.
And we have already gotten tired of arguing those same points, back in
our undergrad or grad school days (or analogous time periods for those
who didn't get PhD's...).

The points you guys are making are not as original as you seem to
think.  And the reason we don't take time to argue against them in
detail is that it's boring and we're busy.  These points have already
been extensively argued by others in the published literature over the
past few decades; but I also don't want to take the time to dig up
citations for you

I'm not saying that I have an argument in favor of my approach, that
would convince a skeptic.  I know I don't.  The only argument that
will convince a skeptic is to complete a functional human-level AGI.
And even that won't be enough for some skeptics.  (Maybe a fully
rigorous formal theory of how to create an AGI with a certain
intelligence level given specific resource constraints would convince
some skeptics, but not many I suppose -- discussions would devolve
into quibbles over the definition of intelligence, and other
particular mathematical assumptions of the sort that any formal
analysis must make.)

OK.  Back to work on the OpenCog Prime documentation, which IMO is a
better use of my time than endlessly repeating the arguments from
philosophy-of-mind and cog-sci class on an email list ;-)

Sorry if my tone seems obnoxious, but I didn't find your description
of those of us working on actual AI systems as having a herd
mentality very appealing.  The truth is, one of the big problems in
the field is that nearly everyone working on a concrete AI system has
**their own** particular idea of how to do it, and wants to proceed
independently rather than compromising with others on various design
points.  It's hardly a herd mentality -- the different systems out
there vary wildly in many respects.

-- Ben G

On Sun, Jun 8, 2008 at 3:28 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Mike Tintner, et al,

 After failing to get ANY response to what I thought was an important point
 (Paradigm Shifting regarding Consciousness) I went back through my AGI inbox
 to see what other postings by others weren't getting any responses. Mike
 Tintner was way ahead of me in no-response postings.

 A quick scan showed that these also tended to address high-level issues that
 challenge the contemporary herd mentality. In short, most people on this
 list appear to be interested only in HOW to straight-line program an AGI
 (with the implicit assumption that we operate anything at all like we appear
 to operate), but not in WHAT to program, and most especially not in any
 apparent insurmountable barriers to successful open-ended capabilities,
 where attention would seem to be crucial to ultimate success.

 Anyone who has been in high-tech for a few years KNOWS that success can come
 only after you fully understand what you must overcome to succeed. Hence,
 based on my own past personal experiences and present observations here,
 present efforts here would seem to be doomed to fail - for personal if not
 for technological reasons.

 Normally I would simply dismiss this as rookie error, but I know that at
 least some of the people on this list have been around as long as I have
 been, and hence they certainly should know better since they have doubtless
 seen many other exuberant rookies fall into similar swamps of programming
 complex systems without adequate analysis.

 Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
 THINKING?

 Steve Richfield

 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Mike Tintner
Steve,

A quick response for now. I was going to reply to an earlier post of yours, in 
which you made the most important point for me:

The difficulties in proceeding in both neuroscience and AI/AGI are NOT a
lack of technology or clever people to apply it, but rather a lack of
understanding of the real world and how to effectively interact within it.

I had already had a go at expounding this, and I think I've got a better way
now. (It's actually v. important to philosophically conceptualise it precisely
- and you're not quite managing it any more than I was).

I think it's this:

everyone in AGI is almost exclusively interested in general intelligence as 
INFORMATION PROCESSING - as opposed to KNOWLEDGE (about the world).

IOW everyone is mainly interested in the problems of storing and manipulating
information via hardware and software, and what logic/maths/programs etc. to
use, which is, of course, what they know all about, and is essential.

People aren't interested, though, in what is also essential: the problems of
acquiring knowledge about the world. For them knowledge is all data.
Different kinds and forms of knowledge? Dude, they're just bandwidth.

To draw an analogy, it's like being interested only in developing a wonderfully 
powerful set of cameras, and not in photography. To be a photographer, you have 
to know about your subject as well as your machine and its s/ware. You have to 
know, say, human beings and how their faces change and express emotions, if you 
want to be a portrait photographer - or animals and their behaviour if you want 
to photograph them in the wild. You have to know the problems of acquiring 
knowledge re particular parts of the world. And the same is true of AGI.

This lack of interest in knowledge is at the basis of the fantasy of a superAGI 
taking off. That's an entirely mathematical fantasy derived from thinking 
purely about the information processing side of things. Computers are getting 
more and more powerful; as my computer starts to build a body of data, it will 
build faster and faster, get recursively better and better... and whoops...
it'll take over the world.  On an information processing basis, that seems
reasonable - for computers definitely will keep increasing amazingly in
processing power.

From a knowledge POV, though, it's an absurd fantasy. As soon as you think in 
terms of acquiring knowledge and solving problems about any particular area of 
the world, you realise that knowledge doesn't simply expand mathematically. 
Everywhere you look, you find messy problems and massive areas of ignorance, 
that can only be solved creatively. The brain - all this neuroscience and we 
still don't know the engram principle. The body - endless diseases we 
haven't solved. Women - what the heck *do* they want? And so on and on. And 
unfortunately the solution of these problems - creativity - doesn't run to 
mathematical timetables. If only...

And as soon as you think in knowledge as opposed to information terms, you 
realise that current AGI is based on an additional absurd fantasy - the 
bookroom fantasy. When you think just in terms of data, well, it seems 
reasonable that you can simply mine the texts of the world, esp. via the Net, 
and supplement that with instruction from human teachers, and become ever more 
superintelligent. You or your agent, says the fantasy, can just sit in a room 
with your books and net connection, and perhaps a few visitors, and learn all 
about the world.

Apparently, you don't actually have to go out in the world at all - you can 
learn all about Kazakhstan without ever having been there, or sex without ever 
having had sex, or sports without ever having played them, or diseases without 
ever having been in surgeries and hospitals and sickrooms etc. etc.

When you think in terms of knowledge, you quickly realise that to know and 
solve problems about the world or any part, you need not just information in 
texts, you need EXPERIENCE, OBSERVATION, INVESTIGATION, EXPERIMENT, and 
INTERACTION with the subject, and maybe a stiff drink. A computer sitting in a 
room, or a billion computers in a billion rooms, are not going to solve the 
problems of the world in magnificent isolation. (They'll help an awful lot, but 
they won't finally solve the problems).

Just thinking in terms of science as one branch of knowledge, and how science 
solves problems, would tell you this. Science without in-the-lab experiment and 
in-the-field observation is unthinkable.

The bookroom fantasy is truly absurd if you think about it in knowledge terms, 
but AGI-ers just aren't thinking in those terms.

You, Steve, it seems to me, are unusual here because you have had to think very 
extensively in terms of knowledge -  and a particular subject area, i.e. 
health, and so you're acutely and unusually aware of the problems of acquiring 
knowledge there rather than just data.

It has to be said, that it's v. hard to think about intelligence from 

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
 The truth is, one of the big problems in
 the field is that nearly everyone working on a concrete AI system has
 **their own** particular idea of how to do it, and wants to proceed
 independently rather than compromising with others on various design
 points.  It's hardly a herd mentality -- the different systems out
 there vary wildly in many respects.

 -- Ben G

To analogize to another field, in his book Three Roads to Quantum Gravity,
Lee Smolin identifies three current approaches to quantum gravity:

1-- string theory

2-- loop quantum gravity

3-- miscellaneous mathematical approaches based on various odd formalisms
and ideas

I think that AGI, right now, could also be analyzed as having four
main approaches

1-- logic-based ... including a host of different logic formalisms

2-- neural net/ brain simulation based ... including some biologically
quasi-realistic systems and some systems that are more formal and
abstract

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards

4-- miscellaneous ... evolutionary learning, etc. etc.

It's hardly a herd, it's more of a chaos ;-p

-- Ben G




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Bob Mottram
2008/6/8 Ben Goertzel [EMAIL PROTECTED]:
 Those of us w/ experience in the field have heard the objections you
 and Tintner are making hundreds or thousands of times before.  We have
 already processed the arguments you're making and found them wanting.


I entirely agree with this response.  To anyone who does believe that
they're ahead of the game and being ignored, my advice would be to
produce some working system which can be demonstrated - even if it's
fairly minimalist.  It's much harder for people to ignore a working
demo than mere philosophical debate or speculation.




Re: [agi] Pearls Before Swine...

2008-06-08 Thread A. T. Murray
The abnormalis sapiens Herr Doktor Steve Richfield wrote:


 Hey you guys with some gray hair and/or bald spots, 
 WHAT THE HECK ARE YOU THINKING?

prin Goertzel genesthai, ego eimi

http://www.scn.org/~mentifex/mentifex_faq.html

My hair is graying so much and such a Glatze is beginning,
that I went in last month and applied for US GOV AI Funding,
based on my forty+ quarters of work history for The Man.
In August of 2008 the US Government will start funding my AI.

ATM/Mentifex




RE: [agi] Pearls Before Swine...

2008-06-08 Thread Gary Miller
Steve Richfield asked:

 Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
THINKING?   
 
We're thinking "Don't feed the Trolls!"
 


Re: [agi] Pearls Before Swine...

2008-06-08 Thread Steve Richfield
Ben and Mike,

WOW, two WONDERFUL in-your-face postings that CLEARLY delimit a central AGI
issue. Since my original posting ended with a question and Ben took a shot
at the question, I would like to know a little more...

On 6/8/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Those of us w/ experience in the field have heard the objections you
 and Tintner are making hundreds or thousands of times before.  We have
 already processed the arguments you're making and found them wanting.
 And we have already gotten tired of arguing those same points, back in
 our undergrad or grad school days (or analogous time periods for those
 who didn't get PhD's...).


I think that the underlying problem here is that Mike and I haven't yet
really heard the other side. Since you and others are presumably looking for
financing, you too will need these arguments encapsulated in some sort of
"read this" form you can throw at disbelievers.

If your statement above is indeed true (and I believe that it is), then you
ARE correct that we shouldn't be arguing this here. You should simply throw
an article at us to make your point. If this article doesn't yet exist, then
you MUST create it if you are ever to have ANY chance at funding. You might
want to invite Mike and me to wring it out before you publish it.

The points you guys are making are not as original as you seem to
 think.


I don't think we made any claim of originality, except perhaps in
expression.

And the reason we don't take time to argue against them in
 detail is that it's boring and we're busy.  These points have already
 been extensively argued by others in the published literature over the
 past few decades; but I also don't want to take the time to dig up
 citations for you


You need just ONE GOOD citation on which to hang your future hopes at
funding. More than that and your funding will disappear in a pile of paper.

I'm not saying that I have an argument in favor of my approach, that
 would convince a skeptic.


I have actually gotten funding for a project where the expert was a
skeptic who advised against funding! My argument went something like "Note
the lack of any technical objections in his report. What he is REALLY saying
is that HE (the Director of an EE Department at a major university) cannot
do this, and I agree. However, my team has a fresh approach and the energy
to succeed that he simply does not have."

I know I don't.  The only argument that
 will convince a skeptic is to complete a functional human-level AGI.


You are planning to first succeed, and then go for funding?! This sounds
suicidal.

And even that won't be enough for some skeptics.  (Maybe a fully
 rigorous formal theory of how to create an AGI with a certain
 intelligence level given specific resource constraints would convince
 some skeptics, but not many I suppose -- discussions would devolve
 into quibbles over the definition of intelligence, and other
 particular mathematical assumptions of the sort that any formal
 analysis must make.)


I suspect that whatever you write will be good for something, even though it
may fall far short of AGI.

OK.  Back to work on the OpenCog Prime documentation, which IMO is a
 better use of my time than endlessly repeating the arguments from
 philosophy-of-mind and cog-sci class on an email list ;-)


Again, please don't repeat anything here, just show us what you would
obviously have to show someone considering funding your efforts.

Sorry if my tone seems obnoxious, but I didn't find your description
 of those of us working on actual AI systems as having a herd
 mentality very appealing.


Oops, sorry about that. I meant no disrespect.

The truth is, one of the big problems in
 the field is that nearly everyone working on a concrete AI system has
 **their own** particular idea of how to do it, and wants to proceed
 independently rather than compromising with others on various design
 points.


YES. The lack of usable software interfaces does indeed cut deeply. A good
proposal here could go a LONG way to propelling the AGI programming field to
success.

It's hardly a herd mentality -- the different systems out
 there vary wildly in many respects.


While the details vary widely, Mike and I were addressing the very concept
of writing code to perform functions (e.g. thinking) that apparently
develop on their own as emergent properties, and in the process foreclosing
on many opportunities, e.g. developing in variant ways to address problems
in new paradigms. Direct programming would seem to lead to lesser rather
than greater intelligence. Am I correct that this is indeed a central
thread in all of the different systems that you had in mind?

Note in passing that simulations can sometimes be "compiled" into
executable code. Now that the bidirectional equivalence of NN and fuzzy
logic approaches has been established, and people often program fuzzy logic
methods directly into C/C++ code (especially economic models), there is now
a (contorted) path to 
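
To illustrate that last point, hand-programming fuzzy logic methods directly
into ordinary procedural code looks roughly like this minimal sketch (Python
standing in for C/C++; the variables, thresholds, and the single rule are
invented examples):

    def tri(x: float, lo: float, peak: float, hi: float) -> float:
        # Triangular membership function: 0 outside [lo, hi], 1 at peak.
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

    def heating_demand(temp_c: float, draftiness: float) -> float:
        cold = tri(temp_c, -10.0, 5.0, 18.0)     # degree that "it is cold"
        drafty = tri(draftiness, 0.0, 0.5, 1.0)  # degree that "it is drafty"
        # Rule: IF cold AND drafty THEN high demand (AND = min, Mamdani style)
        return min(cold, drafty)

    print(heating_demand(2.0, 0.6))  # 0.8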

RE: [agi] Pearls Before Swine...

2008-06-08 Thread Derek Zahn
Gary Miller writes:
 
 We're thinking "Don't feed the Trolls!"
 
Yeah, typical trollish behavior -- upon failing to stir the pot with one 
approach, start adding blanket insults.  I put Steve Richfield in my killfile a 
week ago or so, but I went back to the archive to read the message in question. 
 The reason it got no response is that it is incoherent.  Seriously, I couldn't 
even understand the point of it.  Something about dreams and brains being wired 
completely different and some thumbnail calculations which are not included 
but apparently conclude that AGI will need the entire population of the earth 
for software maintenance... um, that's just weird rambling crackpottery.  It is 
so far away from any sort of AGI nuts and bolts that it cannot even be parsed.  
 
There are people who do not believe they are crackpots (but are certainly 
perceived that way) who then transform into trolls spouting vague blanket 
insults and whining about being ignored.  That type of unsupported fringe 
wackiness is tolerated because, frankly, the whole field is fringe to most 
people.  When it turns into vague attacks, blanket condemnation, and insults (a 
la Tintner and now Richfield) it simply isn't worth reading any more.
 
For others in danger of spiraling down the same drain, I recommend:
* Be cordial.   Note: condescending is not cordial.
* Be specific and concise.  Stick to one point.
* Do not refer to decades-old universally ignored papers about character 
recognition as if they are AI-shaping revolutions.
* Do not drop names from some hazy good old days
* Attempt to limit rambling off-topic insights into marginally related material
* If you are going to criticize instead of putting forward positive ideas (why 
you'd bother criticizing this field is beyond me, but if you must): criticize 
specific things, not the herd or all of you researchers or the field of 
AGI... as Ben pointed out earlier, no two people in this area agree on much of 
anything and they cannot be lumped together.  Criticizing specific things means 
actually reading and attempting to understand the published works of AGI 
researchers -- the test for whether you belong here is whether you are willing 
and able to actually do that.
 
Mr. Richfield may find a more receptive audience here:
 
http://www.kurzweilai.net/mindx/frame.html




RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
 From: A. T. Murray [mailto:[EMAIL PROTECTED]
 
 The abnormalis sapiens Herr Doktor Steve Richfield wrote:
 
 
  Hey you guys with some gray hair and/or bald spots,
  WHAT THE HECK ARE YOU THINKING?
 
 prin Goertzel genesthai, ego eimi
 
 http://www.scn.org/~mentifex/mentifex_faq.html
 
 My hair is graying so much and such a Glatze is beginning,
 that I went in last month and applied for US GOV AI Funding,
 based on my forty+ quarters of work history for The Man.
 In August of 2008 the US Government will start funding my AI.
 

Does this mean that now maybe you can afford to integrate some AJAX into
that JavaScript AI mind of yours?

John






Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
 While the details vary widely, Mike and I were addressing the very concept
 of writing code to perform functions (e.g. thinking) that apparently
 develop on their own as emergent properties, and in the process foreclosing
 on many opportunities, e.g. developing in variant ways to address problems
 in new paradigms. Direct programming would seem to lead to lesser rather
 than greater intelligence. Am I correct that this is indeed a central
 thread in all of the different systems that you had in mind?

Different AGI systems rely on emergence to varying extents ...

No one knows which brain functions rely on emergence to which extents ...
we're still puzzling this out even in relatively well-understood brain regions
like visual cortex.  (Feedforward connections in visual cortex are sorta
well understood, but feedback connections, which are where emergence might
come into play, are very poorly understood as yet.)

For instance, the presence of a hierarchy of progressively more abstract
feature detectors in visual cortex clearly does NOT emerge in a strong sense...
it may emerge during fetal and early-childhood neural self-organization, but in
a way that is carefully genetically preprogrammed.
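
As a toy illustration of what a hierarchy of progressively more abstract
feature detectors means computationally, here is a sketch (a 1-D signal
stands in for an image; real cortical detectors are learned and far richer):

    import numpy as np

    signal = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0], dtype=float)

    # Layer 1: local "edge" detectors (signed difference of neighbors)
    d = np.diff(signal)
    rising, falling = np.where(d > 0)[0], np.where(d < 0)[0]

    # Layer 2: "bar" detectors pair each onset edge with the next offset edge
    bars = [(int(a), int(b)) for a, b in zip(rising, falling)]

    # Layer 3: still more abstract - how many bars, and how wide?
    print("bars (onset, offset):", bars)            # [(1, 4), (6, 7)]
    print("bar widths:", [b - a for a, b in bars])  # [3, 1]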

But, the neural structures that carry out object-recognition may well emerge
as a result of complex nonlinear dynamics involving learning in both the
feedback and feedforward connections...

so my point is, the brain is a mix of wired-in and emergent stuff, and we
don't know where the boundary lies...

as with vision, similarly e.g. for language understanding.  Read Jackendoff's
book

Jackendoff, Ray (2002). Foundations of Language: Brain, Meaning,
Grammar, Evolution.

and the multi-author book

mitpress.mit.edu/book-home.tcl?isbn=0262050528

for thoughtful treatments of the subtle relations btw programmed-in
and learned aspects of human intelligence ... much of the discussion
pertains implicitly to emergence too, though they don't use that word
much ... because emergence is key to learning...

In the Novamente design we've made some particular choices about what
to build in versus what to allow to emerge.  But, for sure, the notion
of emergence
from complex self-organizing dynamics has been a key part of our thinking in
making the design...

Neural net AGI approaches tend to leave more to emerge, whereas logic based
approaches tend to leave less... but that's just a broad generalization

In short there is a huge spectrum of choices in the AGi field regarding what
to build in versus what to allow to emerge ... not a herd mentality at all...

-- Ben




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Mike Tintner
Ben: No one knows which brain functions rely on emergence to which extents 
...
we're still puzzling this out even in relatively well-understood brain 
regions
like visual cortex. ... But, the neural structures that carry out 
object-recognition may well emerge

as a result of complex nonlinear dynamics involving learning in both the
feedback and feedforward connections...



Ben,

Why, when you see this:

http://www.mediafire.com/imageview.php?quickkey=wtmjsxmmyhlthumb=4

do you also see something like this:

http://www.featurepics.com/FI/Thumb300V/20061110/Black-Swan-134875.jpg

Wtf is he on about? Well, you just effortlessly crossed domains - did some 
emergence. You solved the central problem of AGI - that underlies 
metaphor, analogy, creativity, conceptualisation/categorisation, and even, 
I'd argue, visual object recognition - how to cross domains.


How did you solve it?

We have a philosophical difference here - your approach is/was to consider 
ways of information processing - look at different kinds of logic, 
programming, neural networks and theories of neural processing, (as above) 
and set up your system on that basis, and hope the answer will emerge. (You 
also defined all 4 main approaches to AGI purely in terms of info. 
processing and not in any terms of how they propose to cross domains).


My approach is: first you look at the problem of crossing domains in its own 
terms - work out an ideal way to solve it - which will probably be close to 
the way the mind does solve it -  then think about how to implement your 
solution technically, because otherwise you're working blind. Isn't that (he 
asks from ignorance) what you guys do when called in to help design a 
company's IT system from scratch  - look first at the company's problems in 
their own terms, before making technical recommendations? (It's OK - I
know minds won't meet here :) ).







Re: [agi] Pearls Before Swine...

2008-06-08 Thread Jim Bromer
- Original Message 

From: Mike Tintner [EMAIL PROTECTED]

My approach is: first you look at the problem of crossing domains in its own 
terms - work out an ideal way to solve it - which will probably be close to 
the way the mind does solve it -  then think about how to implement your 
solution technically...
--
Instead of talking about what you would do,  do it. 

Jim Bromer


Re: [agi] Pearls Before Swine...

2008-06-08 Thread Jim Bromer
- Original Message 

From: Mike Tintner [EMAIL PROTECTED]

My approach is: first you look at the problem of crossing domains in its own 
terms - work out an ideal way to solve it - which will probably be close to 
the way the mind does solve it -  then think about how to implement your 
solution technically...
--
Instead of talking about what you would do,  do it. 

I mean, work out your ideal way to solve the questions of the mind and share it 
with us after you've found some interesting results.

Jim Bromer


Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
Nothing will ever be attempted if all possible objections must be
first overcome   - Dr Samuel Johnson


-- Ben G

On Mon, Jun 9, 2008 at 7:41 AM, Jim Bromer [EMAIL PROTECTED] wrote:
 - Original Message 
 From: Mike Tintner [EMAIL PROTECTED]

 My approach is: first you look at the problem of crossing domains in its own
 terms - work out an ideal way to solve it - which will probably be close to
 the way the mind does solve it -  then think about how to implement your
 solution technically...
 --
 Instead of talking about what you would do,  do it.

 I mean, work out your ideal way to solve the questions of the mind and share
 it with us after you've found some interesting results.

 Jim Bromer
 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Richard Loosemore

Steve Richfield wrote:

Mike Tintner, et al,
 
After failing to get ANY response to what I thought was an important 
point (*Paradigm Shifting regarding Consciousness*) I went back through 
my AGI inbox to see what other postings by others weren't getting any 
responses. Mike Tintner was way ahead of me in no-response postings.
 
A quick scan showed that these also tended to address high-level issues 
that challenge the contemporary herd mentality. In short, most people on 
this list appear to be interested only in HOW to straight-line program 
an AGI (with the implicit assumption that we operate anything at all 
like we appear to operate), but not in WHAT to program, and most 
especially not in any apparent insurmountable barriers to successful 
open-ended capabilities, where attention would seem to be crucial to 
ultimate success.


Anyone who has been in high-tech for a few years KNOWS that success can 
come only after you fully understand what you must overcome to succeed. 
Hence, based on my own past personal experiences and present 
observations here, present efforts here would seem to be doomed to fail 
- for personal if not for technological reasons.
 
Normally I would simply dismiss this as rookie error, but I know that at 
least some of the people on this list have been around as long as I have 
been, and hence they certainly should know better since they have 
doubtless seen many other exuberant rookies fall into similar swamps of 
programming complex systems without adequate analysis.
 
Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE 
YOU THINKING?


I was thinking that your previous commentary was a stream of 
consciousness jumble that made no sense.


You also failed to address my own previous response to you:  I basically 
said that you make remarks as if the whole of cognitive science does not 
exist.  That kind of position makes me want to not take any notice of 
your comments.




Richard Loosemore




RE: [agi] Pearls Before Swine...

2008-06-08 Thread A. T. Murray
John G. Rose wrote:
 [...]
  Hey you guys with some gray hair and/or bald spots,
  WHAT THE HECK ARE YOU THINKING?
 
 prin Goertzel genesthai, ego eimi

Before Goertzel came to be, I am. (a Biblical allusion in Greek :-)

 
 http://www.scn.org/~mentifex/mentifex_faq.html

The above link is an update on 8 June 2008 of 
http://www.advogato.org/article/769.html from 2004.

 
 My hair is graying so much and such a Glatze is beginning,
 that I went in last month and applied for US GOV AI Funding,
 based on my forty+ quarters of work history for The Man.
 In August of 2008 the US Government will start funding my AI.

In other words, Soc. Sec. will henceforth finance Mentifex AI. 

 Does this mean that now maybe you can afford to integrate
 some AJAX into that JavaScript AI mind of yours?

 John

No, because I remain largely ignorant of Ajax.

http://mind.sourceforge.net/Mind.html
and the JavaScript Mind User Manual (JMUM) at 
http://mentifex.virtualentity.com/userman.html 
will remain in JavaScript and not Ajax.

As I continue to re-write the User Manual, I 
will press hard for the adoption of Mentifex AI
in high-school classes on artificial intelligence.

Arthur T. Murray




RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
 John G. Rose wrote:

  Does this mean that now maybe you can afford to integrate
  some AJAX into that JavaScript AI mind of yours?
 
  John
 
 No, because I remain largely ignorant of Ajax.
 
 http://mind.sourceforge.net/Mind.html
 and the JavaScript Mind User Manual (JMUM) at
 http://mentifex.virtualentity.com/userman.html
 will remain in JavaScript and not Ajax.
 


Oh OK just checkin'. AJAX is JavaScript BTW, and quite powerful.

John


