Re: [agi] More brain scanning and language

2008-06-10 Thread J. Andrew Rogers


On Jun 3, 2008, at 8:44 AM, Mike Tintner wrote:
Thanks. I must confess to my usual confusion/ignorance here - but  
perhaps I should really have talked of "solid" rather than "3-D  
mapping."


When you sit in a familiar chair, you have, I presume, a solid 
mapping (or perhaps the word should be "moulding"), distributed 
over your body, of how it can and will fit into that chair. And I'm 
presuming that the maps in the brain may have a similar solid 
structure. And when you're in a familiar room, you may also have 
brain maps [or "moulds"] that tell you automatically what is likely 
to be in front of you, behind you, and on each side.


Does your sense of "3-D mapping" equate to this?



Humans are capable of constructing exquisite 3-dimensional models in  
their minds. See, for example, blind people.


Having that model and computing interactions with that model are two  
different things. Humans do not actually compute their relation to  
other objects with high precision; they approximate and iteratively  
make corrections later. It turns out this may not be such a bad idea:  
computational topology and geometry are thin on computable high-precision  
results, even though approximation goes against the grain of computer  
science.


It is not obvious that having that 3-dimensional model and being able  
to compute extremely complex relationships on the fly are the same  
problem.  We can do the former, both as humans and on computers, but  
the latter is beyond both humans and computer science.  We have a  
model, but our poorly calibrated interactions with it are constantly  
moderated by real-world feedback.
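The approximate-then-correct strategy described above can be sketched as a
simple feedback loop. This is a hypothetical illustration of the idea, not
anything from the post; the motion model and noise factors are invented:

```python
import random

def reach(target, tolerance=0.01, max_steps=100):
    """Approach a target position by approximating, then iteratively
    correcting from feedback -- rather than computing an exact
    trajectory up front."""
    position = 0.0
    for step in range(max_steps):
        error = target - position
        if abs(error) <= tolerance:
            return position, step
        # Coarse, imprecise move: roughly 80% of the remaining error,
        # perturbed by "motor noise".
        position += error * 0.8 * random.uniform(0.8, 1.2)
    return position, max_steps

final, steps = reach(3.0)
```

Each pass shrinks the error by a large but imprecise factor, so the loop
converges in a handful of iterations despite never computing a precise
answer at any single step.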


It is an open question whether mathematics will ever arrive at an  
elegant solution that out-performs the sub-optimal wetware algorithm.


J. Andrew Rogers



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Pearls Before Swine...

2008-06-10 Thread Jim Bromer
- Original Message 

From: Steve Richfield <[EMAIL PROTECTED]>

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards
 
Isn't this just #1 expanded to cover some obvious shortcomings?
---
No.



  




Re: [agi] Who here is located in the Seattle area?

2008-06-10 Thread A. T. Murray
Steve Richfield trolled like an Alaska fisherman:
>
> It has come to my attention that some of the mysterious 
> masked men here are located in the Seattle/Bellevue area, 
> as I now am. Perhaps we should get together face-to-face 
> and discuss rather than type our thoughts?
>
> Steve Richfield

Bellevue?! 'Fraid not, although I used to be a teacher 
of German and Latin at The Overlake School in Redmond.

Seattle?! Yes. If you ever go to Northgate or to Green 
Lake or to the University of Washington off-campus area, 
I can meet you there -- especially in a coffee shop, 
such as the University Book Store cafe, or the 
Solstice Cafe, or any of the coffee shops at Northgate.
To meet Mentifex at Green Lake in the summer, just
ask the Seattle lifeguards to point out "Arthur" 
a.k.a. Crawdad Man (my sobriquet).

Be carrying some kind of AI/neuroscience book, 
and the "qui vive?" challenge is "Dr. Eliza, I presume?" 
-- to be answered with "Tell me more about Dr. Eliza."

Arthur T. Murray/Mentifex
-- 
http://mentifex.virtualentity.com/mentifex_faq.html 




Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Richard,

On 6/8/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> You also failed to address my own previous response to you:  I basically
> said that you make remarks as if the whole of cognitive science does not
> exist.


Quite the contrary. My point is that not only does cognitive science fail
to provide adequate guidance for developing anything like an AGI, but,
further, paradigm shifting obfuscates things to the point that this vast
wealth of knowledge is unusable for *DEVELOPMENT*.

BTW, your comments here suggest that I may not have made my point about
"paradigm shifting," in which externally observed functionality may be
translated to/from a very different internal representation and
functionality. This, of course, leads observations of cognition astray by
derailing consideration of what might actually be happening.

However, TESTING is quite another matter, as cognitive science provides
many capability "touch points" that can show whether an AGI is working
anything at all like us.

So yes, cognitive science is alive and well, but it is probably unusable
as a basis for AGI development.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Ben,

On 6/8/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> "Nothing will ever be attempted if all possible objections must be
> first overcome."  - Dr. Samuel Johnson


... to whose satisfaction? Here on this forum, there are only two groups of
"judges":
1.  The people who are actually writing the code, and
2.  People who might fund the above.

Note that *I* am NOT on this list. However, I believe that it is important
for you to be able to speak to objections, even though your words may not
dissuade the objectors, and to produce some sort of documentation of these
to "throw at" "experts" that future investors might bring in. As I have
mentioned in prior postings, it IS possible to overcome contrary opinions by
highly credentialed "experts", but you absolutely MUST "have your act
together" to have a chance at this.

Note that when faced with two people, one saying that something is
impossible and the other saying that he can do it, I (having been in this
spot myself on several occasions) almost always bet on the guy who says
he can do it. That said, just what are my objections here?! They are that
you haven't adequately explained (to me) just how you are going to blow
past the obvious challenges that lie ahead, which strongly suggests that
you haven't adequately considered them. It is that careful consideration
of challenges that separates the "angels" from the "fools who rush in".
Given significant evidence of that careful consideration, I would be
inclined to bet on your success, even though I might disagree with some
of your evaluations.

Yes, I heard you explain how experimentation is still needed to figure out
which approaches might work and which should be consigned to the bit
bucket. That, of course, is "research," and the vast majority of research
leads nowhere. Planned experimental research is NOT a substitute for
careful consideration of stated challenges, unless coupled with some sort
of explanation as to how the research should provide a path past those
challenges (the "scientific method" that tests theories). Hence, I was
just looking for some hopeful words describing a potential success path,
not any sort of "proof of future success."

I completely agree that words (e.g. mine) are no substitute for running
code, but neither is running code any substitute for explanatory words,
unless of course the code is to only exist on the author's computer.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Matthias,

On 6/8/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
>
> > In short, most people on this
> > list appear to be interested only in HOW to straight-line program an AGI
> > (with the implicit assumption that we operate anything at all like we
> appear
> > to operate), but not in WHAT to program, and most especially not in any
> > apparent insurmountable barriers to successful open-ended capabilities,
> > where attention would seem to be crucial to ultimate success.
> >
> > Anyone who has been in high-tech for a few years KNOWS that success can
> come
> > only after you fully understand what you must overcome to succeed. Hence,
> > based on my own past personal experiences and present observations here,
> > present efforts here would seem to be doomed to fail - for personal if
> not
> > for technological reasons.
>
> ---
>
> Philosophers, biologists, and cognitive scientists have worked for many,
> many years to model the algorithms in the brain, but with success only
> in some details. An overall model of human general intelligence still
> does not exist.
>
> Should we really begin programming AGI only after fully understanding it?


I was attempting to make two points that were apparently missed:
1.  A machine (e.g. a scanning UV fluorescence microscope) could be built
for about the cost of a single supercomputer that would provide enormous
clues, if not outright answers, to many of the presently outstanding
questions. The lack of funding for THAT shows a general lack of interest
in this field by anyone with money.
2.  Hence, with a lack of monetary interest and the lack of a good story
as to why this should succeed, there would seem to be little prospect for
success, because even a completely successful AGI program would then need
money to develop its marketing and distribution. That Dr. Eliza has
achieved some of the more valuable goals but has yet to raise any money
shows that the world is NOT looking to beat a path to this "better
mousetrap".

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Bob,

On 6/8/08, Bob Mottram <[EMAIL PROTECTED]> wrote:
>
> 2008/6/8 Ben Goertzel <[EMAIL PROTECTED]>:
> > Those of us w/ experience in the field have heard the objections you
> > and Tintner are making hundreds or thousands of times before.  We have
> > already processed the arguments you're making and found them wanting.
>
>
> I entirely agree with this response.  To anyone who does believe that
> they're ahead of the game and being ignored my advice would be to
> produce some working system which can be demonstrated - even if it's
> fairly minimalist.  It's much harder for people to ignore a working
> demo than mere philosophical debate or speculation.


Dr. Eliza does that rather well, showing how a really simple program can
deliver part of what AGI promises for the distant future, with a good
user interface and no danger of it taking over the world. Further, it
better delimits what an AGI must be able to do to be valuable, as
duplicating the function of a simple program should NOT be on the list of
hoped-for capabilities.

The BIG lesson of Dr. Eliza is that it hinges on one particular fragment
of machine knowledge that does NOT appear in Internet postings, casual
conversations, or even direct experience. That fragment is what people
typically say to demonstrate their ignorance of an issue. Every expert
knows these utterances, but they rarely if ever appear in text. Give
authors suitable blanks to fill in, and Dr. Eliza "comes to life".
Without that level of information, I seriously doubt the future of any
AGI system.
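The mechanism described can be sketched as a keyword-triggered responder.
This is a hypothetical illustration only; the trigger phrases and canned
replies below are invented for the example and are not from Dr. Eliza
itself:

```python
# Invented examples of "ignorance utterances" an expert might recognize,
# each mapped to a canned expert response.
IGNORANCE_PATTERNS = {
    "i tried everything": "What, specifically, have you tried so far?",
    "it must be a virus": "What evidence points to a virus rather than "
                          "a hardware or configuration fault?",
}

def respond(utterance):
    """Return a canned reply if the utterance contains a known
    ignorance marker, else None."""
    text = utterance.lower()
    for phrase, reply in IGNORANCE_PATTERNS.items():
        if phrase in text:
            return reply
    return None  # no known ignorance marker matched

print(respond("I tried everything and it still crashes!"))
```

The point of the sketch is that the hard part is not the matching logic,
which is trivial, but collecting the trigger phrases, which rarely appear
in written text.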

In short, I have produced my demo and presented it to international
audiences at AI conferences, and hereby return this particular ball to
your court.

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Steve Richfield
Jim, Ben, et al,

On 6/10/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
> Ben wrote:
>
> I think that AGI, right now,


The thing that "stumbled" me when I first got here is understanding just
what is meant here by "AGI". It is NOT the process that goes on behind our
eyeballs, as that is clearly an emergent property that can result in VERY
different functioning and brain mappings between individuals. Neither is
it "anything that works," given the instant rejection of Dr. Eliza's
methods. No, it is something in between these two extremes, something like
"programs that learn to behave intelligently". Perhaps Ben or someone else
could propose a better brief definition that would be widely accepted here.

could also be analyzed as having four
> main approaches
>
> 1-- logic-based ... including a host of different logic formalisms


Mike and I have been challenging the overall feasibility of these
approaches, which is what started this thread. Hence, let's avoid thread
recursion.

2-- neural net/ brain simulation based ... including some biologically
> quasi-realistic systems and some systems that are more formal and
> abstract
>
> 3-- integrative ... which itself is a very broad category with a lot
> of heterogeneity ... including e.g. systems composed of wholly
> distinct black boxes versus systems that have intricate real-time
> feedbacks between different components' innards


Isn't this just #1 expanded to cover some obvious shortcomings?

4-- miscellaneous ... evolutionary learning, etc. etc.


5-- Carefully analyzed and simply programmed approaches to accomplishing
tasks that would seem to require intelligence but (by most definitions)
are not intelligent. Chess-playing programs and Dr. Eliza fall into this
bin. Apparently, Ben is intentionally excluding this bin from
consideration. The MAJOR importance of this particular bin is that other
forms of AGI are as worthless at doing this sort of work as people are at
playing Chess, because simple programs can easily do this sort of work
RIGHT NOW, without further development. Hence, many of AGI's stated hopes
and dreams need to be retargeted to doing things that can NOT be done by
simple programs.

It's hardly a herd, it's more of a chaos ;-p


As we are discovering here, herds can always be subdivided into clusters.
But then, we start arguing about what should be clustered together.

-- Ben
> ---
>
> I think you have to include complexity.  Although complexity problems can
> be / should be seen as an issue relevant to all AGI paradigms, the
> significance of the problem makes it a primary concern to me.  I would say
> that I am interested in the problems of complexity and integration of
> concepts.


It is unclear how Dr. Eliza's methods fail to do this, except that people
must code the machine knowledge rather than having the program learn it
from observation/experience. Note that Dr. Eliza appears able to handle
the hand-coded machine knowledge of the entire world. Note also that the
"big problems" in the world are generally NOT "intelligence limited," but
rather appear to be "approach limited". To illustrate, one man, Saddam
Hussein, did
something in Iraq that the entire US military backed by the nearly limitless
wealth of the US government can't even come close to doing - keep the peace,
albeit by leaving a few dead bodies in his wake. The limitation in
intelligence was in failing to see that his methods were *necessary* to keep
the peace in that particular heterogeneous society, so our only rational
choices were to either leave him alone to run Iraq, or invade and adopt his
methods. Doing neither, things can only get worse, and Worse, and WORSE...
Now that we have killed him, we have no apparent way back out.

Alternatively, there are now programs (mostly hidden inside the CIA) to
recognize patterns in apparently random messages, used as the first step in
breaking secret codes.
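Detecting structure in apparently random data is classically done with a
frequency (chi-squared) test against a uniform distribution. A toy
illustration of the idea, invented here and not any agency's actual
tooling:

```python
from collections import Counter

def chi_squared_uniformity(data):
    """Chi-squared statistic of byte frequencies against a uniform
    distribution over 256 byte values. Large values suggest structure
    (e.g. natural language) rather than random noise."""
    counts = Counter(data)
    expected = len(data) / 256
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

random_like = bytes(range(256)) * 4     # perfectly uniform byte stream
texty = b"the quick brown fox " * 50    # heavily skewed, text-like

assert chi_squared_uniformity(random_like) < chi_squared_uniformity(texty)
```

A genuinely random stream scores near zero, while text concentrates its
mass on a few dozen byte values and scores far higher -- which is why
frequency analysis is a natural first step against a cipher.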

Perhaps you could better define what you mean by "complexity" to obviate my
questions?

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-10 Thread Jim Bromer
Ben wrote:

I think that AGI, right now, could also be analyzed as having four
main approaches

1-- logic-based ... including a host of different logic formalisms

2-- neural net/ brain simulation based ... including some biologically
quasi-realistic systems and some systems that are more formal and
abstract

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards

4-- miscellaneous ... evolutionary learning, etc. etc.

It's hardly a herd, it's more of a chaos ;-p

-- Ben 
---

I think you have to include complexity.  Although complexity problems can be / 
should be seen as an issue relevant to all AGI paradigms, the significance of 
the problem makes it a primary concern to me.  I would say that I am interested 
in the problems of complexity and integration of concepts.
Jim Bromer



  




Re: [agi] Ideological Interactions Need to be Studied

2008-06-10 Thread Vladimir Nesov
On Mon, Jun 9, 2008 at 11:20 PM, Ricky Loynd <[EMAIL PROTECTED]> wrote:
> Vladimir, that's a nice, tight overview of a design.  What drives the
> creation/deletion of nodes?
>

In the current design, skills are extended through relearning and
fine-tuning of existing circuits. Roughly, new memories are expected
to form at the mutual boundaries of areas of the network where the
usual activation patterns are produced. At the boundaries, unusual
combinations of these different usual patterns are brought together;
this is captured in the concepts of boundary nodes and can subsequently
be imitated and generalized by them. In this way, new memories can form
anywhere, depending on the typicality of activation in that area.
Of course, something is rewritten by new memories, but mainly the
concepts that participate in them; inactive concepts are changed very
rarely. The same piece of knowledge forms in many places at the
boundary, so there is redundancy. And in general, the network mainly
imitates itself, so I expect more redundancy at other levels. Gradual
introduction of new nodes over the whole inference surface, or around
the activity areas, may be useful.

Node removal is tricky. Strictly speaking, it is unnecessary and can
provide only optimization. There are two kinds of nodes that are
candidates for removal: nodes that are inactive and will remain so
indefinitely, and nodes that provide unnecessary redundancy. Redundant
nodes can be limited by globally limiting the amount of concurrent
activation. If such a limit is always present, and only changes slightly
over time, the knowledge representation will adapt to keep the necessary
information within budget, and so won't produce too much redundancy.
Inactive nodes may be controlled by adding some kind of requirement on
recall dynamics for newly formed concepts: e.g., recall at least once
in x ticks, then at least once in 4x ticks, then 16x ticks, etc. I
plan to apply such a test to protecting nodes from rewriting, rather
than from removal, with unprotected concepts having a higher chance of
being adjusted dramatically, capturing episodic memories. Or maybe
experiments will show that it's unnecessary; for example, recalled
concepts may produce enough redundancy through secondary memories to
preserve the skill even in the face of a constant-rate risk of node
reset.
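The exponentially spaced recall requirement resembles a spaced-repetition
schedule. A minimal sketch of that rule as I read it (function and
variable names are my own, not from the design):

```python
def protection_schedule(x, levels=4):
    """Deadlines implied by 'recall within x ticks, then 4x, then
    16x, ...': each successful recall quadruples the allowed interval."""
    return [x * 4 ** k for k in range(levels)]

def still_protected(recall_ticks, x):
    """A node stays protected from rewriting while each successive
    recall arrives within its current deadline."""
    deadline = x
    last = 0
    for t in recall_ticks:
        if t - last > deadline:
            return False  # recall came too late; protection lapses
        last = t
        deadline *= 4
    return True
```

Under this rule a concept that keeps being recalled becomes
progressively cheaper to protect, while one that is never recalled loses
protection after at most x ticks.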

One of the reasons why I use maximum margin clustering is that
inference needs to be resilient to changes in the network structure:
when something changes, a concept can adapt to that change if the
change only brings its input a little bit out of the usual range. This
allows the skillset to be adjusted at any level *locally*, without
losing functionality in other dependent parts. The idea is to oppose
the brittleness of software while preserving some of its expressive
power. (This kind of automatic programming is not at the core of the
design, nor is it an extension of the design; rather, it's another
perspective from which to view it.)

-- 
Vladimir Nesov
[EMAIL PROTECTED]

