[agi] Pearls Before Swine...

2008-06-08 Thread Steve Richfield
Mike Tintner, et al,

After failing to get ANY response to what I thought was an important point
(*Paradigm Shifting regarding Consciousness*), I went back through my AGI
inbox to see what other postings by others weren't getting any responses.
Mike Tintner was way ahead of me in no-response postings.

A quick scan showed that these also tended to address high-level issues that
challenge the contemporary herd mentality. In short, most people on this
list appear to be interested only in HOW to straight-line program an AGI
(with the implicit assumption that we operate anything at all like we appear
to operate), but not in WHAT to program, and most especially not in any
apparent insurmountable barriers to successful open-ended capabilities,
where attention would seem to be crucial to ultimate success.

Anyone who has been in high-tech for a few years KNOWS that success can come
only after you fully understand what you must overcome to succeed. Hence,
based on my own past personal experiences and present observations here,
present efforts here would seem to be doomed to fail - for personal if not
for technological reasons.

Normally I would simply dismiss this as rookie error, but I know that at
least some of the people on this list have been around as long as I have
been, and hence they certainly should know better since they have doubtless
seen many other exuberant rookies fall into similar swamps of programming
complex systems without adequate analysis.

Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
THINKING?

Steve Richfield





Re: [agi] Pearls Before Swine...

2008-06-08 Thread Stephen Reed
Hi Steve,
I'm thinking about the Texai bootstrap dialog system, and in particular about 
adding grammar rules and vocabulary for the utterance "Compile a class."

Cheers.
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





AW: [agi] Consciousness vs. Intelligence

2008-06-08 Thread Dr. Matthias Heger

Mike Tintner [mailto:[EMAIL PROTECTED]] wrote:

And that's the same mistake people are making with AGI generally - no one
has a model of what general intelligence involves, or of the kind of
problems it must solve - what it actually DOES - and everyone has left that
till later, and is instead busy with all the technical programming that they
find exciting - with the "how it works" side - without knowing whether
anything they're doing is really necessary or relevant.

---

Some people do have models, but it is not clear whether they are right or what
their computational costs are.
In that case it is useful to write the code and see what it can do and where
the limits are.

Intelligence is a very special problem. There is no well-defined
input-output relation. For any problem which can be specified by a table
mapping inputs to outputs, there is a trivial program which solves it: the
program looks up the input in the table and returns the corresponding output.
In this sense, every well-defined problem can be solved by a program which is
not intelligent. 

If we accept that intelligence can never be specified by a complete,
well-defined input-output relation, then intelligence must be a PROPERTY of
the algorithm which behaves intelligently. GENERAL intelligence especially
cannot be defined by black-box behavior (= a complete input-output relation).
It is a white-box problem. The Turing test is a weak test: if I ask n
questions and obtain n answers which seem to be human-like, then a table of
these questions and answers would do the same.
After the Turing test, I can never be sure whether the human-like behavior
holds for question n+1, n+2, ... Therefore, we must know what is going on
inside the machine in order to be sure that it acts intelligently in many
different situations. The Turing test was invented because we still have no
complete model of necessary and sufficient conditions for intelligence.
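
To make the table-lookup point concrete, here is a minimal sketch (the
questions and answers are invented purely for illustration):

    # A "chatbot" backed purely by a finite question->answer table.
    # It reproduces every recorded answer perfectly, yet fails on the first
    # question outside the table -- the black-box-test point made above.
    qa_table = {
        "What is your name?": "I'm Alex.",
        "How are you today?": "Fine, thanks. And you?",
        "What is 2 + 2?": "Four, of course.",
    }

    def table_bot(question):
        # Pure lookup: no generalization, no understanding.
        return qa_table.get(question, "")

    # Passes on the n recorded questions...
    for q, expected in qa_table.items():
        assert table_bot(q) == expected

    # ...but question n+1 exposes it immediately.
    print(repr(table_bot("What is 3 + 3?")))   # '' -- nothing intelligent inside

Any finite black-box test can, in principle, be passed by such a table.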


If you define the universe as a set of objects with relations among each
other and dynamic laws, then an important condition for a generally intelligent
system is the ability to create representations of all kinds of objects, all
kinds of relations and all kinds of dynamic laws which can be inferred from
the sensory inputs the AGI system perceives. You see that we cannot give a
table of input-output pairs for this problem. We must define a general
mechanism which extracts the patterns from the input stream and creates
the representations. This is already a white-box problem, but it is a problem
which can be solved, and algorithms can be proven to solve it, I suppose.
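
A toy sketch of what "extracting a dynamic law from the input stream" could
mean in the simplest possible case (the object name, the constant-velocity
law, and the least-squares fit are illustrative assumptions, not a proposed
mechanism):

    # Toy world: one object whose position is observed over time; the
    # "dynamic law" to be inferred is constant velocity, x(t) = x0 + v*t.
    observations = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0), (4, 9.1)]   # (t, x)

    def infer_constant_velocity(obs):
        # Least-squares fit: recover the law's parameters from the sensory
        # stream itself rather than from a given input-output table.
        n = len(obs)
        mean_t = sum(t for t, _ in obs) / n
        mean_x = sum(x for _, x in obs) / n
        v = (sum((t - mean_t) * (x - mean_x) for t, x in obs)
             / sum((t - mean_t) ** 2 for t, _ in obs))
        x0 = mean_x - v * mean_t
        return {"object": "ball", "law": "x(t) = x0 + v*t", "x0": x0, "v": v}

    representation = infer_constant_velocity(observations)
    print(representation)                                    # the learned law
    print(representation["x0"] + representation["v"] * 10)   # predict unseen t=10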

The problem of consciousness is not only a hard problem because of unknown
mechanisms in the brain; it is also a problem of finding the DEFINITION of
necessary conditions for consciousness. 
I think consciousness without intelligence is not possible. Intelligence
without consciousness is possible. But I am not sure whether GENERAL
intelligence without consciousness is possible. In any case, consciousness
is even more of a white-box problem than intelligence.





Re: [agi] Pearls Before Swine...

2008-06-08 Thread Mark Waser
Hi Steve,

I'm thinking about the solution to the Friendliness problem, and in 
particular desperately need to finish my paper on it for the AAAI Fall 
Symposium that is due by next Sunday.

What I would suggest, however, is that quickly formatted e-mail postings are 
exactly the wrong method for addressing high-level issues that challenge the 
contemporary herd mentality.  Part of the problem is that quick e-mails always 
(must) assume agreement on foundational issues and/or (must) assume that the 
reader will agree with (or take your word for) many points.  A much better way 
of getting your point across (and proving that it is a valid point) is to write 
yourself a nice six-to-twelve page publishable-quality scientific paper.  Doing 
so will be difficult and time-consuming but ultimately far more worthwhile than 
just throwing something out to be consumed and probably ultimately ignored by a 
mailing list of bigots.

Mark

P.S.  Mike Tintner is way ahead of everyone in no-response postings not because 
he challenges the herd mentality but because he has no clue what he is 
talking about and endlessly repeats variations of the same point *without* 
successfully proving its foundations, successfully answering criticism, or 
even extending his point into something that is worthwhile and usable as 
opposed to just random speculation.  Also, bleating about the fact that you're 
not being answered because you're challenging the herd, even if true, is only 
counter-productive and whiny and more likely to get you ignored -- especially 
if you do it in all caps.

Crocker's rules as always (with the waste of my time exception :-)


Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
Steve,

Those of us w/ experience in the field have heard the objections you
and Tintner are making hundreds or thousands of times before.  We have
already processed the arguments you're making and found them wanting.
And we have already gotten tired of arguing those same points, back in
our undergrad or grad school days (or analogous time periods for those
who didn't get PhD's...).

The points you guys are making are not as original as you seem to
think.  And the reason we don't take time to argue against them in
detail is that it's boring and we're busy.  These points have already
been extensively argued by others in the published literature over the
past few decades; but I also don't want to take the time to dig up
citations for you

I'm not saying that I have an argument in favor of my approach that
would convince a skeptic.  I know I don't.  The only argument that
will convince a skeptic is to complete a functional human-level AGI.
And even that won't be enough for some skeptics.  (Maybe a fully
rigorous formal theory of how to create an AGI with a certain
intelligence level given specific resource constraints would convince
some skeptics, but not many I suppose -- discussions would devolve
into quibbles over the definition of intelligence, and other
particular mathematical assumptions of the sort that any formal
analysis must make.)

OK.  Back to work on the OpenCog Prime documentation, which IMO is a
better use of my time than endlessly repeating the arguments from
philosophy-of-mind and cog-sci class on an email list ;-)

Sorry if my tone seems obnoxious, but I didn't find your description
of those of us working on actual AI systems as having a herd
mentality very appealing.  The truth is, one of the big problems in
the field is that nearly everyone working on a concrete AI system has
**their own** particular idea of how to do it, and wants to proceed
independently rather than compromising with others on various design
points.  It's hardly a herd mentality -- the different systems out
there vary wildly in many respects.

-- Ben G




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Mike Tintner
Steve,

A quick response for now. I was going to reply to an earlier post of yours, in 
which you made the most important point for me:

"The difficulties in proceeding in both neuroscience and AI/AGI is NOT a lack 
of technology or clever people to apply it, but is rather a lack of 
understanding of the real world and how to effectively interact within it."

I had already had a go at expounding this, and I think I've got a better way 
now. (It's actually v. important to philosophically conceptualise it precisely 
- and you're not quite managing it any more than I was.)

I think it's this:

everyone in AGI is almost exclusively interested in general intelligence as 
INFORMATION PROCESSING - as opposed to KNOWLEDGE (about the world).

IOW everyone is mainly interested in the problems of storing and manipulating 
information via hardware and software, and what logic/maths/programs etc. to 
use, which is, of course, what they know all about, and is essential.

People aren't interested, though, in what is also essential: the problems of 
acquiring knowledge about the world. For them knowledge is all data. 
Different kinds and forms of knowledge? Dude, they're just bandwidth.

To draw an analogy, it's like being interested only in developing a wonderfully 
powerful set of cameras, and not in photography. To be a photographer, you have 
to know about your subject as well as your machine and its s/ware. You have to 
know, say, human beings and how their faces change and express emotions, if you 
want to be a portrait photographer - or animals and their behaviour if you want 
to photograph them in the wild. You have to know the problems of acquiring 
knowledge re particular parts of the world. And the same is true of AGI.

This lack of interest in knowledge is at the basis of the fantasy of a superAGI 
taking off. That's an entirely mathematical fantasy derived from thinking 
purely about the information processing side of things. Computers are getting 
more and more powerful; as my computer starts to build a body of data, it will 
build faster and faster, get recursively better and better... and whoops.. 
it'll take over the world.  On an information-processing basis, that seems 
reasonable - for computers definitely will keep increasing amazingly in 
processing power.

From a knowledge POV, though, it's an absurd fantasy. As soon as you think in 
terms of acquiring knowledge and solving problems about any particular area of 
the world, you realise that knowledge doesn't simply expand mathematically. 
Everywhere you look, you find messy problems and massive areas of ignorance, 
that can only be solved creatively. The brain - all this neuroscience and we 
still don't know the engram principle. The body - endless diseases we 
haven't solved. Women - what the heck *do* they want? And so on and on. And 
unfortunately the solution of these problems - creativity - doesn't run to 
mathematical timetables. If only..

And as soon as you think in knowledge as opposed to information terms, you 
realise that current AGI is based on an additional absurd fantasy - the 
bookroom fantasy. When you think just in terms of data, well, it seems 
reasonable that you can simply mine the texts of the world, esp. via the Net, 
and supplement that with instruction from human teachers, and become ever more 
superintelligent. You or your agent, says the fantasy, can just sit in a room 
with your books and net connection, and perhaps a few visitors, and learn all 
about the world.

Apparently, you don't actually have to go out in the world at all - you can 
learn all about Kazakhstan without ever having been there, or sex without ever 
having had sex, or sports without ever having played them, or diseases without 
ever having been in surgeries and hospitals and sickrooms etc. etc.

When you think in terms of knowledge, you quickly realise that to know and 
solve problems about the world or any part, you need not just information in 
texts, you need EXPERIENCE, OBSERVATION, INVESTIGATION, EXPERIMENT, and 
INTERACTION with the subject, and maybe a stiff drink. A computer sitting in a 
room, or a billion computers in a billion rooms, are not going to solve the 
problems of the world in magnificent isolation. (They'll help an awful lot, but 
they won't finally solve the problems).

Just thinking in terms of science as one branch of knowledge, and how science 
solves problems, would tell you this. Science without in-the-lab experiment and 
in-the-field observation is unthinkable.

The bookroom fantasy is truly absurd if you think about it in knowledge terms, 
but AGI-ers just aren't thinking in those terms.

You, Steve, it seems to me, are unusual here because you have had to think very 
extensively in terms of knowledge -  and a particular subject area, i.e. 
health, and so you're acutely and unusually aware of the problems of acquiring 
knowledge there rather than just data.

It has to be said, that it's v. hard to think about intelligence from 

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
 The truth is, one of the big problems in
 the field is that nearly everyone working on a concrete AI system has
 **their own** particular idea of how to do it, and wants to proceed
 independently rather than compromising with others on various design
 points.  It's hardly a herd mentality -- the different systems out
 there vary wildly in many respects.

 -- Ben G

To analogize to another field, in his book Three Roads to Quantum Gravity,
Lee Smolin identifies three current approaches to quantum gravity:

1-- string theory

2-- loop quantum gravity

3-- miscellaneous mathematical approaches based on various odd formalisms
and ideas

I think that AGI, right now, could also be analyzed as having four
main approaches:

1-- logic-based ... including a host of different logic formalisms

2-- neural net/ brain simulation based ... including some biologically
quasi-realistic systems and some systems that are more formal and
abstract

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards

4-- miscellaneous ... evolutionary learning, etc. etc.

It's hardly a herd, it's more of a chaos ;-p

-- Ben G




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Bob Mottram
2008/6/8 Ben Goertzel [EMAIL PROTECTED]:
 Those of us w/ experience in the field have heard the objections you
 and Tintner are making hundreds or thousands of times before.  We have
 already processed the arguments you're making and found them wanting.


I entirely agree with this response.  To anyone who does believe that
they're ahead of the game and being ignored, my advice would be to
produce some working system which can be demonstrated - even if it's
fairly minimalist.  It's much harder for people to ignore a working
demo than mere philosophical debate or speculation.




Re: [agi] Pearls Before Swine...

2008-06-08 Thread A. T. Murray
The abnormalis sapiens Herr Doktor Steve Richfield wrote:


 Hey you guys with some gray hair and/or bald spots, 
 WHAT THE HECK ARE YOU THINKING?

prin Goertzel genesthai, ego eimi ("Before Goertzel came to be, I am")

http://www.scn.org/~mentifex/mentifex_faq.html

My hair is graying so much and such a Glatze (bald spot) is beginning,
that I went in last month and applied for US GOV AI Funding,
based on my forty+ quarters of work history for The Man.
In August of 2008 the US Government will start funding my AI.

ATM/Mentifex




AW: [agi] Pearls Before Swine...

2008-06-08 Thread Dr. Matthias Heger
Steve Richfield wrote


 In short, most people on this
 list appear to be interested only in HOW to straight-line program an AGI
 (with the implicit assumption that we operate anything at all like we appear
 to operate), but not in WHAT to program, and most especially not in any
 apparent insurmountable barriers to successful open-ended capabilities,
 where attention would seem to be crucial to ultimate success.

 Anyone who has been in high-tech for a few years KNOWS that success can come
 only after you fully understand what you must overcome to succeed. Hence,
 based on my own past personal experiences and present observations here,
 present efforts here would seem to be doomed to fail - for personal if not
 for technological reasons.

---

Philosophers, biologists, and cognitive scientists have worked for many, many
years to model the algorithms in the brain, but with success only in some
details. An overall model of human general intelligence still does not exist. 

Should we really begin programming AGI only after fully understanding it?

High-tech success does not need full understanding of what you must overcome
to succeed.
Today's high-tech products most often have a long evolutionary history behind
them. Rodney Brooks suspects that this will also be the case with AGI.

It is a process of trial and error. We build systems, evaluate their limits,
and build better systems, and so on.
Theoretical models are useful. But the more complex the problem is, the more
important is experimental experience with the subject. And you can get this
experience only from running programs.







RE: [agi] Pearls Before Swine...

2008-06-08 Thread Gary Miller
Steve Richfield asked:

 Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
THINKING?   
 
We're thinking "Don't feed the Trolls!"
 





Re: [agi] Pearls Before Swine...

2008-06-08 Thread Steve Richfield
Ben and Mike,

WOW, two WONDERFUL in-your-face postings that CLEARLY delimit a central AGI
issue. Since my original posting ended with a question and Ben took a shot
at the question, I would like to know a little more...

On 6/8/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Those of us w/ experience in the field have heard the objections you
 and Tintner are making hundreds or thousands of times before.  We have
 already processed the arguments you're making and found them wanting.
 And we have already gotten tired of arguing those same points, back in
 our undergrad or grad school days (or analogous time periods for those
 who didn't get PhD's...).


I think that the underlying problem here is that Mike and I haven't yet
really heard the other side. Since you and others are presumably looking for
financing, you too will need these arguments encapsulated in some sort of
"read this" form you can throw at disbelievers.

If your statement above is indeed true (and I believe that it is), then you
ARE correct that we shouldn't be arguing this here. You should simply throw
an article at us to make your point. If this article doesn't yet exist, then
you MUST create it if you are ever to have ANY chance at funding. You might
want to invite Mike and me to wring it out before you publish it.

The points you guys are making are not as original as you seem to
 think.


I don't think we made any claim of originality, except perhaps in
expression.

And the reason we don't take time to argue against them in
 detail is that it's boring and we're busy.  These points have already
 been extensively argued by others in the published literature over the
 past few decades; but I also don't want to take the time to dig up
 citations for you


You need just ONE GOOD citation on which to hang your future hopes at
funding. More than that and your funding will disappear in a pile of paper.

I'm not saying that I have an argument in favor of my approach, that
 would convince a skeptic.


I have actually gotten funding for a project where the expert was a
skeptic who advised against funding! My argument went something like: "Note
the lack of any technical objections in his report. What he is REALLY saying
is that HE (the Director of an EE Department at a major university) cannot
do this, and I agree. However, my team has a fresh approach and the energy
to succeed that he simply does not have."

I know I don't.  The only argument that
 will convince a skeptic is to complete a functional human-level AGI.


You are planning to first succeed, and then go for funding?! This sounds
suicidal.

And even that won't be enough for some skeptics.  (Maybe a fully
 rigorous formal theory of how to create an AGI with a certain
 intelligence level given specific resource constraints would convince
 some skeptics, but not many I suppose -- discussions would devolve
 into quibbles over the definition of intelligence, and other
 particular mathematical assumptions of the sort that any formal
 analysis must make.)


I suspect that whatever you write will be good for something, even though it
may fall far short of AGI.

OK.  Back to work on the OpenCog Prime documentation, which IMO is a
 better use of my time than endlessly repeating the arguments from
 philosophy-of-mind and cog-sci class on an email list ;-)


Again, please don't repeat anything here, just show us what you would
obviously have to show someone considering funding your efforts.

Sorry if my tone seems obnoxious, but I didn't find your description
 of those of us working on actual AI systems as having a herd
 mentality very appealing.


Oops, sorry about that. I meant no disrespect.

The truth is, one of the big problems in
 the field is that nearly everyone working on a concrete AI system has
 **their own** particular idea of how to do it, and wants to proceed
 independently rather than compromising with others on various design
 points.


YES. The lack of usable software interfaces does indeed cut deeply. A good
proposal here could go a LONG way to propelling the AGI programming field to
success.

It's hardly a herd mentality -- the different systems out
 there vary wildly in many respects.


While the details vary widely, Mike and I were addressing the very concept
of writing code to perform functions (e.g. thinking) that apparently
develop on their own as emergent properties, and in the process foreclosing
on many opportunities, e.g. developing in variant ways to address problems
in new paradigms. Direct programming would seem to lead to lesser rather
than greater intelligence. Am I correct that this is indeed a central
thread in all of the different systems that you had in mind?

Note in passing that simulations can sometimes be compiled into
executable code. Now that the bidirectional equivalence of NN and fuzzy
logic approaches has been established, and people often program fuzzy logic
methods directly into C/C++ code (especially economic models), there is now
a (contorted) path to 
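
As a minimal sketch of what programming fuzzy-logic methods directly into
ordinary code can look like (the variable names, membership functions, and the
single rule below are invented for illustration, not taken from any economic
model or AGI system mentioned above):

    # Toy fuzzy controller coded directly, with no NN or fuzzy-logic library:
    # two triangular membership functions and one rule, "if demand is high
    # and supply is low then raise price", with min as the fuzzy AND.

    def tri(x, a, b, c):
        # Triangular membership function peaking at b, zero outside [a, c].
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def price_adjustment(demand, supply):
        demand_high = tri(demand, 50, 100, 150)   # degree to which demand is "high"
        supply_low = tri(supply, 0, 25, 50)       # degree to which supply is "low"
        rule_strength = min(demand_high, supply_low)  # fuzzy AND
        return 0.10 * rule_strength               # up to a 10% price increase

    print(price_adjustment(demand=120, supply=20))  # strong rule firing
    print(price_adjustment(demand=60, supply=45))   # weak rule firing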

RE: [agi] Pearls Before Swine...

2008-06-08 Thread Derek Zahn
Gary Miller writes:
 
 We're thinking Don't feed the Trolls!
 
Yeah, typical trollish behavior -- upon failing to stir the pot with one 
approach, start adding blanket insults.  I put Steve Richfield in my killfile a 
week ago or so, but I went back to the archive to read the message in question. 
 The reason it got no response is that it is incoherent.  Seriously, I couldn't 
even understand the point of it.  Something about dreams and brains being wired 
completely differently, and some thumbnail calculations which are not included 
but apparently conclude that AGI will need the entire population of the earth 
for software maintenance... um, that's just weird rambling crackpottery.  It is 
so far away from any sort of AGI nuts and bolts that it cannot even be parsed.  
 
There are people who do not believe they are crackpots (but are certainly 
perceived that way) who then transform into trolls spouting vague blanket 
insults and whining about being ignored.  That type of unsupported fringe 
wackiness is tolerated because, frankly, the whole field is fringe to most 
people.  When it turns into vague attacks, blanket condemnation, and insults (a 
la Tintner and now Richfield) it simply isn't worth reading any more.
 
For others in danger of spiraling down the same drain, I recommend:
* Be cordial.   Note: condescending is not cordial.
* Be specific and concise.  Stick to one point.
* Do not refer to decades-old universally ignored papers about character 
recognition as if they are AI-shaping revolutions.
* Do not drop names from some hazy good old days
* Attempt to limit rambling off-topic insights into marginally related material
* If you are going to criticize instead of putting forward positive ideas (why 
you'd bother criticizing this field is beyond me, but if you must): criticize 
specific things, not "the herd" or "all of you researchers" or "the field of 
AGI"... as Ben pointed out earlier, no two people in this area agree on much of 
anything and they cannot be lumped together.  Criticizing specific things means 
actually reading and attempting to understand the published works of AGI 
researchers -- the test for whether you belong here is whether you are willing 
and able to actually do that.
 
Mr. Richfield may find a more receptive audience here:
 
http://www.kurzweilai.net/mindx/frame.html




RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 The problem of consciousness is not only a hard problem because of unknown
 mechanisms in the brain but it is a problem of finding the DEFINITION of
 necessary conditions for consciousness.
 I think, consciousness without intelligence is not possible. Intelligence
 without consciousness is possible. But I am not sure whether GENERAL
 intelligence without consciousness is possible. In every case, consciousness
 is even more a white-box problem than intelligence.
 

For general intelligence some components and sub-components of consciousness
need to be there and some don't. And some could be replaced with a human
operator as in an augmentation-like system. Also some components could be
designed drastically differently from their human consciousness counterparts
in order to achieve more desirable effects in one area or another. ALSO there
may be consciousness components integrated into AGI that humans don't have
or that are almost non-detectable in humans. And I think that the different
consciousness components and sub-components could be more dynamically
resource-allocated in the AGI software than in the human mind.

John





RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
 From: A. T. Murray [mailto:[EMAIL PROTECTED]
 
 The abnormalis sapiens Herr Doktor Steve Richfield wrote:
 
 
  Hey you guys with some gray hair and/or bald spots,
  WHAT THE HECK ARE YOU THINKING?
 
 prin Goertzel genesthai, ego eimi
 
 http://www.scn.org/~mentifex/mentifex_faq.html
 
 My hair is graying so much and such a Glatze is beginning,
 that I went in last month and applied for US GOV AI Funding,
 based on my forty+ quarters of work history for The Man.
 In August of 2008 the US Government will start funding my AI.
 

Does this mean that now maybe you can afford to integrate some AJAX into
that JavaScript AI mind of yours?

John






AW: [agi] Consciousness vs. Intelligence

2008-06-08 Thread Dr. Matthias Heger


John G. Rose [mailto:[EMAIL PROTECTED] wrote


For general intelligence some components and sub-components of consciousness
need to be there and some don't. And some could be replaced with a human
operator as in an augmentation-like system. Also some components could be
designed drastically different from their human consciousness counterparts
in order to achieve more desirous effects in one area or another. ALSO there
may be consciousness components integrated into AGI that humans don't have
or that are almost non-detectable in humans. And I think that the different
consciousness components and sub-components could be more dynamically
resource allocated in the AGI software than in the human mind.



I can say neither 'yes' nor 'no'. It depends on how we DEFINE consciousness as
a physical or algorithmic phenomenon. Until now we each have only an idea of
consciousness via the intrinsic phenomena of our own mind. We cannot prove the
existence of consciousness in any other individual because of the lack of a
better definition.
I do not believe that consciousness is located in a small sub-component.
It seems to me that it is an emergent behavior of a special kind of huge
network of many systems. But without any proper definition this can only be
a philosophical thought.








Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
 While the details vary widely, Mike and I were addressing the very concept
 of writing code to perform functions (e.g. thinking) that apparently
 develop on their own as emergent properties, and in the process foreclosing
 on many opportunities, e.g. developing in variant ways to address problems
 in new paradigms. Direct programming would seem to lead to lesser rather
 than greater intelligence. Am I correct that this is indeed a central
 thread in all of the different systems that you had in mind?

Different AGI systems rely on emergence to varying extents ...

No one knows which brain functions rely on emergence to which extents ...
we're still puzzling this out even in relatively well-understood brain regions
like visual cortex.  (Feedforward connections in visual cortex are sorta
well understood, but feedback connections, which is where emergence might
play in, are very poorly understood as yet.)

For instance, the presence of a hierarchy of progressively more abstract
feature detectors in visual cortex clearly does NOT emerge in a strong sense...
it may emerge during fetal and early-childhood neural self-organization, but in
a way that is carefully genetically preprogrammed.

But, the neural structures that carry out object-recognition may well emerge
as a result of complex nonlinear dynamics involving learning in both the
feedback and feedforward connections...

so my point is, the brain is a mix of wired-in and emergent stuff, and we
don't know where the boundary lies...
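
A toy sketch of that wired-in versus emergent split (the layer sizes and the
random weights below are purely illustrative assumptions, not a claim about
cortex or about any particular AGI design): the hierarchy's structure is fixed
in advance, while its weights are left to be shaped by learning.

    import random

    # Wired-in part: a fixed three-level hierarchy of feature detectors
    # (edge-like -> part-like -> object-like); the layer structure itself
    # is preprogrammed, not learned.
    LAYER_SIZES = [8, 4, 2]

    # Learned/emergent part: connection weights start random and would be
    # shaped by experience (no learning rule is shown here).
    weights = [
        [[random.uniform(-1, 1) for _ in range(LAYER_SIZES[i])]
         for _ in range(LAYER_SIZES[i + 1])]
        for i in range(len(LAYER_SIZES) - 1)
    ]

    def forward(features):
        # Feedforward pass through the fixed hierarchy.
        activity = features
        for layer in weights:
            activity = [max(0.0, sum(w * a for w, a in zip(unit, activity)))
                        for unit in layer]
        return activity

    print(forward([random.random() for _ in range(LAYER_SIZES[0])]))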

as with vision, similarly e.g. for language understanding.  Read Jackendoff's
book

Jackendoff, Ray (2002). Foundations of Language: Brain, Meaning,
Grammar, Evolution.

and the multi-author book

mitpress.mit.edu/book-home.tcl?isbn=0262050528

for thoughtful treatments of the subtle relations btw programmed-in
and learned aspects of human intelligence ... much of the discussion
pertains implicitly to emergence too, though they don't use that word
much ... because emergence is key to learning...

In the Novamente design we've made some particular choices about what
to build in versus what to allow to emerge.  But, for sure, the notion
of emergence
from complex self-organizing dynamics has been a key part of our thinking in
making the design...

Neural net AGI approaches tend to leave more to emerge, whereas logic based
approaches tend to leave less... but that's just a broad generalization

In short there is a huge spectrum of choices in the AGi field regarding what
to build in versus what to allow to emerge ... not a herd mentality at all...

-- Ben




RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 For general intelligence some components and sub-components of consciousness
 need to be there and some don't. And some could be replaced with a human
 operator as in an augmentation-like system. Also some components could be
 designed drastically different from their human consciousness counterparts
 in order to achieve more desirous effects in one area or another. ALSO there
 may be consciousness components integrated into AGI that humans don't have
 or that are almost non-detectable in humans. And I think that the different
 consciousness components and sub-components could be more dynamically
 resource allocated in the AGI software than in the human mind.
 
 Can neither say 'yes' nor 'no'. Depends on how we DEFINE consciousness as a
 physical or algorithm-phenomenon. Until now we each have only an idea of
 consciousness by intrinsic phenomena of our own mind. We cannot prove the
 existence of consciousness in any other individual because of the lack of a
 better definition.
 I do not believe, that consciousness is located in a small sub-component.
 It seems to me, that it is an emergent behavior of a special kind of huge
 network of many systems. But without any proper definition this can only be
 a philosophical thought.
 
 

Given that other humans have similar DNA it is fair to assume that they are
conscious like us. Not 100% proof but probably good enough. Sure the whole
universe may still be rendered for the purpose of one conscious being, and
in a way that is true, and potentially that is something to take into
account.

Consciousness has multiple definitions by multiple different people. But
even without an exact definition you can still extract properties and
behaviors from it and from those, extrapolations can be made and the
beginnings of a model can be established.

Even if it is an emergent behavior of a huge network of many systems, that
doesn't preclude it from being described in a non-emergent way. And if it is
only uniquely describable through emergent behavior, it still has some
generally accepted components or properties.

John








Re: [agi] Pearls Before Swine...

2008-06-08 Thread Mike Tintner
Ben: No one knows which brain functions rely on emergence to which extents ...
we're still puzzling this out even in relatively well-understood brain regions
like visual cortex. ... But, the neural structures that carry out
object-recognition may well emerge as a result of complex nonlinear dynamics
involving learning in both the feedback and feedforward connections...



Ben,

Why, when you see this:

http://www.mediafire.com/imageview.php?quickkey=wtmjsxmmyhlthumb=4

do you also see something like this:

http://www.featurepics.com/FI/Thumb300V/20061110/Black-Swan-134875.jpg

Wtf is he on about? Well, you just effortlessly crossed domains - did some 
emergence. You solved the central problem of AGI - that underlies 
metaphor, analogy, creativity, conceptualisation/categorisation, and even, 
I'd argue, visual object recognition - how to cross domains.


How did you solve it?

We have a philosophical difference here - your approach is/was to consider 
ways of information processing - look at different kinds of logic, 
programming, neural networks and theories of neural processing (as above) - 
and set up your system on that basis, and hope the answer will emerge. (You 
also defined all 4 main approaches to AGI purely in terms of info. processing 
and not in any terms of how they propose to cross domains.)


My approach is: first you look at the problem of crossing domains in its own 
terms - work out an ideal way to solve it - which will probably be close to 
the way the mind does solve it - then think about how to implement your 
solution technically, because otherwise you're working blind. Isn't that (he 
asks from ignorance) what you guys do when called in to help design a 
company's IT system from scratch - look first at the company's problems in 
their own terms, before making technical recommendations? (It's OK - I 
know minds won't meet here :) )







Re: [agi] Pearls Before Swine...

2008-06-08 Thread Jim Bromer
- Original Message 

From: Mike Tintner [EMAIL PROTECTED]

My approach is: first you look at the problem of crossing domains in its own 
terms - work out an ideal way to solve it - which will probably be close to 
the way the mind does solve it -  then think about how to implement your 
solution technically...
--
Instead of talking about what you would do,  do it. 

Jim Bromer


  




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Jim Bromer
- Original Message 

From: Mike Tintner [EMAIL PROTECTED]

My approach is: first you look at the problem of crossing domains in its own 
terms - work out an ideal way to solve it - which will probably be close to 
the way the mind does solve it -  then think about how to implement your 
solution technically...
--
Instead of talking about what you would do,  do it. 

I mean, work out your ideal way to solve the questions of the mind and share it 
with us after you've found some interesting results.

Jim Bromer


  




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
Nothing will ever be attempted if all possible objections must be
first overcome   - Dr Samuel Johnson


-- Ben G

On Mon, Jun 9, 2008 at 7:41 AM, Jim Bromer [EMAIL PROTECTED] wrote:
 - Original Message 
 From: Mike Tintner [EMAIL PROTECTED]

 My approach is: first you look at the problem of crossing domains in its own
 terms - work out an ideal way to solve it - which will probably be close to
 the way the mind does solve it -  then think about how to implement your
 solution technically...
 --
 Instead of talking about what you would do,  do it.

 I mean, work out your ideal way to solve the questions of the mind and share
 it with us after you've have found some interesting results.

 Jim Bromer
 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Jun 7, 2008, at 5:06 PM, Richard Loosemore wrote:
But that is a world away from the idea that neurons, as they are, are 
as simple as transistors.  I do not believe this was a simple 
misunderstanding on my part:  the claim that neurons are as simple as 
transistors is an unsupportable one.



Richard, you reliably ignore what I actually write, selectively parsing 
it in some bizarre context that I don't recognize. There is a reading 
comprehension issue, or at the very least you don't follow what I 
consider to be the dead obvious theoretical implications. 
Metaphorically, you are arguing that the  latex sheet model of 
gravitational curvature is stupid because astronomers have never seen 
latex in space, and then wonder why the physicists are giving you funny 
looks.


Are you arguing that the function that is a neuron is *not* an 
elementary operator for whatever computational model it is that 
describes the brain?


I directly and exactly *quoted* several passages that you wrote.

But you don't call that quoting, you call it "reliably ignor[ing] what 
I actually write, selectively parsing it in some bizarre context that I 
don't recognize."


H.



Richard Loosemore




Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread J. Andrew Rogers


On Jun 8, 2008, at 7:27 PM, Richard Loosemore wrote:


I directly and exactly *quoted* several passages that you wrote.



And completely ignored both the context and intended semantics.  Hence  
why I might be under the impression that there is a reading  
comprehension issue.


But enough of that, let's get to the meat of it:  Are you arguing that  
the function that is a neuron is not an elementary operator for  
whatever computational model describes the brain?


J. Andrew Rogers





Re: [agi] Pearls Before Swine...

2008-06-08 Thread Richard Loosemore

Steve Richfield wrote:

Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE 
YOU THINKING?


I was thinking that your previous commentary was a stream-of-consciousness 
jumble that made no sense.


You also failed to address my own previous response to you:  I basically 
said that you make remarks as if the whole of cognitive science does not 
exist.  That kind of position makes me want to not take any notice of 
your comments.




Richard Loosemore




Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Ben Goertzel
Regarding how much of the complexity of real neurons we would need to
put into a computational neural net model in order to make a model
displaying a realistic  emulation of neural behavior -- the truth is
we JUST DON'T KNOW

Izhikevich for instance

http://vesicle.nsi.edu/users/izhikevich/human_brain_simulation/Blue_Brain.htm

gets more detailed than standard formal neural net models, but is it
detailed enough?  We really don't know.  I like his work for its use
of nonlinear dynamics and emergence though.
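
For readers who haven't seen it, one well-known example of going beyond
standard formal neural-net units while staying computationally cheap is
Izhikevich's "simple model" neuron; a minimal sketch (the a/b/c/d values are
the standard regular-spiking parameters from the 2003 paper, and the constant
input current is an arbitrary choice for the demo):

    # Izhikevich's "simple model" neuron (Izhikevich 2003):
    #   dv/dt = 0.04*v^2 + 5*v + 140 - u + I
    #   du/dt = a*(b*v - u)
    #   spike when v >= 30 mV, then v <- c and u <- u + d
    a, b, c, d = 0.02, 0.2, -65.0, 8.0     # regular-spiking parameters
    v, u = -65.0, b * -65.0                # resting state
    dt, I = 0.5, 10.0                      # ms time step; demo input current

    spike_times = []
    for step in range(2000):               # simulate 1000 ms
        if v >= 30.0:
            spike_times.append(step * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)

    print(len(spike_times), "spikes in 1000 ms; first few at", spike_times[:5])

Whether even that level of detail captures what matters in real neurons is,
of course, exactly the open question.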

Until we understand the brain better, we can only speculate about what
level of detail is needed...

This is part of the reason why I'm not working on a closely
brain-based AGI approach...

I find neuroscience important and fascinating, and I try to keep up
with the most relevant parts of the field, but I don't think it's
mature enough to really usefully guide AGI development yet.

-- Ben G



On Mon, Jun 2, 2008 at 6:15 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 2, 2008 at 2:03 AM, Mark Waser [EMAIL PROTECTED] wrote:
 No, this is not a variant of the analog is fundamentally different from
 digital category.

 Each of the things that I mentioned could be implemented digitally --
  however, they are entirely new classes of things to consider and require a
 lot more data and processing.

 I find it very interesting that you can't even answer a straight yes-or-no
 question without resorting to obscuring BS and inventing strawmen.

 Are you actually claiming that neurotransmitter levels are irrelevant or are
 you implementing them?

 Are you claiming that leakage along the axons and dendrites is irrelevant or
 are you modeling it?


 Mark, I think the point is that there should be a simple model that
 produces the same capabilities as a neuron (or brain). Most of these
 biological particulars are important for biological brain, but it
 should be possible to engineer them away on computational substrate
 when we have a high-level model of what they are actually for.

 --
 Vladimir Nesov
 [EMAIL PROTECTED]






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be
first overcome."  - Dr. Samuel Johnson




Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Ben Goertzel

 But enough of that, let's get to the meat of it:  Are you arguing that the
 function that is a neuron is not an elementary operator for whatever
 computational model describes the brain?


We don't know which neuron-describing function we need to use --
are Izhikevich's nonlinear-dynamics models of ion channels good
enough, or do we need to go deeper?

Also, we don't know the importance of extracellular charge
diffusion... computation/memory happening in the glial network ...
etc. ... phenomena which suggest that neuron-functions are not the
only elementary operators at play in brain dynamics...

Lots of fun stuff still to be learned ;-)

ben




RE: [agi] Pearls Before Swine...

2008-06-08 Thread A. T. Murray
John G. Rose wrote:
 [...]
  Hey you guys with some gray hair and/or bald spots,
  WHAT THE HECK ARE YOU THINKING?
 
 prin Goertzel genesthai, ego eimi

Before Goertzel came to be, I am. (a Biblical allusion in Greek :-)

 
 http://www.scn.org/~mentifex/mentifex_faq.html

The above link is an update on 8 June 2008 of 
http://www.advogato.org/article/769.html from 2004.

 
 My hair is graying so much and such a Glatze (bald pate) is beginning,
 that I went in last month and applied for US GOV AI Funding,
 based on my forty+ quarters of work history for The Man.
 In August of 2008 the US Government will start funding my AI.

In other words, Soc. Sec. will henceforth finance Mentifex AI.

 Does this mean that now maybe you can afford to integrate
 some AJAX into that JavaScript AI mind of yours?

 John

No, because I remain largely ignorant of Ajax.

http://mind.sourceforge.net/Mind.html
and the JavaScript Mind User Manual (JMUM) at 
http://mentifex.virtualentity.com/userman.html 
will remain in JavaScript and not Ajax.

As I continue to re-write the User Manual, I 
will press hard for the adoption of Mentifex AI
in high-school classes on artificial intelligence.

Arthur T. Murray




RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
 John G. Rose wrote:

  Does this mean that now maybe you can afford to integrate
  some AJAX into that JavaScript AI mind of yours?
 
  John
 
 No, because I remain largely ignorant of Ajax.
 
 http://mind.sourceforge.net/Mind.html
 and the JavaScript Mind User Manual (JMUM) at
 http://mentifex.virtualentity.com/userman.html
 will remain in JavaScript and not Ajax.
 


Oh OK just checkin'. AJAX is just ordinary JavaScript making asynchronous requests BTW, and quite powerful.
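
For readers of the archive, a minimal sketch of the idea -- plain browser-side
TypeScript/JavaScript using the standard XMLHttpRequest object; the URL and
element id below are invented placeholders, not anything from the Mentifex code:

// Fetch a text fragment asynchronously and patch it into the page
// without a reload -- which is all that "AJAX" really means.
function loadFragment(url: string, targetId: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);          // true = asynchronous
  xhr.onload = () => {
    const target = document.getElementById(targetId);
    if (target && xhr.status === 200) {
      target.textContent = xhr.responseText;
    }
  };
  xhr.send();
}

// Hypothetical usage: pull fresh AI output into a div with id "mind-output".
loadFragment("/mind/output.txt", "mind-output");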

John





RE: [agi] Paradigm Shifting regarding Consciousness

2008-06-08 Thread John G. Rose
I don't think anyone anywhere on this list ever suggested that time-sequential
processing was required for consciousness. Now, as data streams in from sensory
receptors, it is initially time-sequential. But as it is processed, that
temporal ordering changes. And time is sort of like an index, eh? Or is time
just an illusion? For consciousness, though, there is this non-synchronous
concurrent processing of components that gives it, at least for me, some of its
characteristic behavior. Different things happening at the same time but all
slightly off or lagging. If everything were happening at the same instant, that
might negate some of the self-detectability of consciousness.

 

John

 

 

From: Steve Richfield [mailto:[EMAIL PROTECTED] 



To all,

 

In response to the many postings regarding consciousness, I would like to
make some observations:

 

1.  Computation is often done best in a shifted paradigm, where the
internals are NOT one-to-one associated with external entities. A good
example is modern chess-playing programs, which usually play chess on an
80-square-long linear strip with 2 out of every 10 squares being
unoccupiable. Knights can move +21, +19, +12, +8, -8, -12, -19, and -21. The
player sees a 2-D space, but the computer works entirely in a 1-D space (see
the sketch just after this list). I suspect (and can show neuronal
characteristics that strongly suggest) that much the same is happening with
the time dimension. There appears to be little that is different about this
4th dimension, except how it is interfaced with the outside world.

2.  Paradigm mapping is commonplace in computing, e.g. the common practice
of providing stream-of-consciousness explanations of an AI program's
operation to aid in debugging. Are such programs NOT conscious because the
logic they followed was NOT time-sequential?! When asked why I made a
particular move in a chess game, it often takes me a half hour to explain a
decision that I made in seconds. Clearly, my own thought processes are NOT
the time-sequential consciousness that others here on this forum apparently
have. I believe that designing for time-sequential conscious operation is
starting from a VERY questionable premise.

3.  Note that dreams can span years of seemingly real experience in the
space of seconds/minutes. Clearly this process is NOT time-sequential.

4.  Note that individual brains can be organized COMPLETELY differently,
especially in multilingual people. Hence, our wiring almost certainly
comes from experience and not from genetics. This would seem to throw a
monkey wrench into AGI efforts to manually program such systems.

5.  I have done some thumbnail calculations as to what it would take to
maintain a human-scale AI/AGI system. These come out on the order of needing
the entire population of the earth just for software maintenance, with no
idea what might be needed to initially create such a working system. Without
poisoning a discussion with my own pessimistic estimates, I would like to
see some optimistic estimates for such maintenance, to see if a case could
be made that such systems might actually be maintainable.
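
To make point 1 concrete, here is a minimal sketch (in TypeScript, invented
for illustration -- it is not any particular engine's code) of the 1-D board
layout described above: eight 10-square ranks, of which only files 0-7 are
playable, with files 8-9 acting as permanent sentinels that absorb knight
moves falling off the side of the board:

// 1-D "mailbox" board: 8 ranks x 10 squares, files 8-9 are sentinels.
const WIDTH = 10;
const SIZE = 80;
const KNIGHT_OFFSETS = [21, 19, 12, 8, -8, -12, -19, -21];

// A square is playable if it lies on the strip and is not a sentinel file.
function isPlayable(sq: number): boolean {
  return sq >= 0 && sq < SIZE && sq % WIDTH < 8;
}

// Knight moves are plain 1-D additions; the sentinels catch horizontal wrap.
function knightMoves(from: number): number[] {
  return KNIGHT_OFFSETS.map(d => from + d).filter(isPlayable);
}

// Translation between the player's 2-D view and the program's 1-D view.
const toIndex = (file: number, rank: number) => rank * WIDTH + file;
const toSquare = (sq: number) => ({ file: sq % WIDTH, rank: Math.floor(sq / WIDTH) });

// Example: a knight on b1 (file 1, rank 0) reaches a3, c3, and d2.
console.log(knightMoves(toIndex(1, 0)).map(toSquare));

The program never "sees" the 2-D board the player sees; the 2-D view exists
only at the interface, which is exactly the paradigm shift being described.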

Reinforcing my thoughts on other threads, observation of our operation is
probably NOT enough to design a human-scale AGI from, ESPECIALLY when
paradigm shifting is being done that effectively hides our actual operation.
I believe that more information is necessary, though hopefully not an entire
readout of a brain.

Steve Richfield



