[agi] Re: Solomonoff Machines – up close and personal

2007-11-11 Thread Shane Legg
Hi Ed,

So is the real significance of the universal prior not its probability
 value in a given probability space (which seems relatively unimportant,
 provided it is not one or close to zero), but rather the fact that it
 can model almost any kind of probability space?


It just takes a binary string as input.  If you can express your problem
as one in which a binary string represents what has been observed so far,
and a continuation of this string represents what happens next, then
Solomonoff induction can deal with it.  So you don't have to pick the
space.  You do, however, have to take your problem, represent it as
binary data, and feed it in, just as you do when you put any kind of
data into a computer.

The power of the universal prior comes from the fact that it takes all
computable distributions into account.  In a sense it contains all
well-defined hypotheses about what the structure in the string could be.
This is a point that is worth contemplating for a while.  If there is
any structure in there, and this structure can be described by a program
on a computer, even a probabilistic one, then it's already factored into
the universal prior and the Solomonoff predictor is already taking it
into account.
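
To make this concrete, here is a toy sketch in Python.  It is not real
Solomonoff induction (which is incomputable): the universal machine is
replaced by a trivial stand-in of my own that just repeats its program
forever.  But it shows how prior mass 2^-|p| piles up on strings that
have short descriptions:

    from itertools import product

    def toy_run(program: str, n: int) -> str:
        # stand-in "machine": interpret the program as a pattern
        # repeated out to length n (a real version would run p on a
        # universal Turing machine)
        return (program * n)[:n]

    def prior_mass(x: str, max_len: int = 12) -> float:
        # sum 2^-|p| over all programs p whose output begins with x
        total = 0.0
        for length in range(1, max_len + 1):
            for bits in product("01", repeat=length):
                p = "".join(bits)
                if toy_run(p, len(x)) == x:
                    total += 2.0 ** -length
        return total

    # strings with simple structure get far more prior mass:
    print(prior_mass("01010101"))  # many short programs produce this
    print(prior_mass("01101000"))  # essentially only verbatim copies do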

How does the Kolmogorov complexity help deal with this problem?


The key thing that Kolmogorov complexity provides is that it assigns
each hypothesis in the universal prior a weighting that decreases
exponentially with the complexity of the hypothesis.  This means that
the Solomonoff predictor respects, in some sense, the principle of
Occam's razor.  That is, a priori, simpler things are considered more
likely than complex ones.
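
In standard notation (as in the Li and Vitanyi book mentioned later in
this thread), the prior and its Occam weighting look like this:

    % Universal prior: sum over all (prefix-free) programs p that make
    % a fixed universal machine U output a string beginning with x:
    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
    % A hypothesis coded by a program of length \ell(p) gets weight
    % 2^{-\ell(p)}, so the dominant term for x comes from its shortest
    % description, whose length is the Kolmogorov complexity K(x).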

ED## ??Shane??, what are the major ways programs are used in a
 Solomonoff machine?  Are they used for generating and matching patterns? Are
 they used for generating and creating context specific instantiations of
 behavioral patterns?

Keep in mind that Solomonoff induction is not computable.  It is not an
algorithm.  The role that programs play is that they are used to
construct the universal prior.  Once this is done, the Solomonoff
predictor just takes the prior and conditions on the observed string so
far to work out the distribution over the next bit.  That's all.
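
In symbols, prediction is nothing more than conditioning the prior
(standard notation, given here for reference):

    % probability that the next bit is b, given the observed prefix:
    M(x_{t+1} = b \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\, b)}{M(x_{1:t})}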


Lukasz## The programs are generally required to exactly match in AIXI
 (but not in AIXItl I think).
 ED## ??Shane??, could you please give us an assist on this one? Is
 exact matching required?  And if so, is this something that could be
 loosened in a real machine?

Exact pattern matching is required in the sense that if a hypothesis
says that something cannot happen, and it does, then that hypothesis is
effectively discarded.
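
A minimal Python sketch of that "effectively discarded" point, using an
ordinary two-hypothesis Bayesian update (illustrative only):

    def update(weights, likelihoods):
        # weights: prior over hypotheses; likelihoods: P(obs | h)
        posterior = [w * l for w, l in zip(weights, likelihoods)]
        z = sum(posterior)
        return [p / z for p in posterior]

    w = [0.5, 0.5]
    w = update(w, [0.9, 0.0])  # second hypothesis said "impossible"
    print(w)                   # [1.0, 0.0] -- zero weight, forever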

A real machine might have to loosen this, and many other things.  Note
that nobody I know is trying to build a real AGI machine based on
Solomonoff's model.


Isn't there a large similarity between a Solomonoff machine that could
 learn a hierarchy of pattern-representing programs and Jeff Hawkins's
 hierarchical learning (as represented in the Serre paper)?  One could
 consider the patterns at each level of the hierarchy as subroutines.
 The system is designed to increase its representational efficiency by
 having representational subroutines available for use by multiple
 different patterns at higher compositional levels.  To the extent that
 a MOSES-type evolutionary system could be set to work making such
 representations more compact, it would become clear how semi-Solomonoff
 machines could be made to work in the practical world.


I think the point is that if you can do really, really good general
sequence prediction (via something impractical like Solomonoff
induction, or practical like the cortex) then you're a long way towards
being able to build a pretty impressive AGI.  Some of Hutter's students
are interested in the latter.



 The definition of Solomonoff induction on the web, and even in Shane
 Legg's paper on Solomonoff induction, makes it sound like it is merely
 Bayesian induction with priors picked on the basis of Kolmogorov
 complexity.

Yes, that's all it is.

But statements made by Shane and Lukasz appear to imply that a
 Solomonoff machine uses programs and program size as a tool for pattern
 representation, generalization, learning, inference, and more.

All these programs are weighted into that universal prior.


 So I think (but I could well be wrong) I know what that means.
 Unfortunately I am a little fuzzy about whether NCD would take "what"
 information, what-with-what (binding) information, or frequency
 information sufficiently into account to be an optimal measure of
 similarity.  Is this correct?

NCD is just a computable approximation.  The universal similarity
metric (in the Li and Vitanyi book that I cited) gives the pure
incomputable version.  The pure version basically takes all effective
similarity metrics into account when working out how similar two things
are.  So if you have some concept of similarity that you're using, it
is already factored in.
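
For the computable side, NCD itself can be sketched in a few lines of
Python, with an off-the-shelf compressor standing in for Kolmogorov
complexity (zlib here; any real compressor gives only a rough
approximation of the ideal quantity):

    import zlib

    def C(x: bytes) -> int:
        # compressed length as a computable stand-in for K(x)
        return len(zlib.compress(x, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
        cx, cy, cxy = C(x), C(y), C(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog" * 20
    b = b"the quick brown fox jumps over a lazy dog!" * 20
    c = bytes(range(256)) * 4
    print(ncd(a, b))  # small: the strings share most of their structure
    print(ncd(a, c))  # near 1: little shared structure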

Re: [agi] Re: What best evidence for fast AI?

2007-11-11 Thread Bryan Bishop
Excellent post, and I hope I can find enough time to give it a more
thorough reading.

Is it possible that at the moment our working with 'intelligence' is 
just like flapping in an attempt to fly? It seems like the concept of 
intelligence is a good way to preserve the nonabsurdity of the future.

- Bryan



RE: [agi] What best evidence for fast AI?

2007-11-11 Thread Edward W. Porter
Ben said: "the possibility of dramatic, rapid, shocking success in
robotics is LOWER than in cognition."

That's why I tell people the value of manual labor will not be impacted as
soon by the AGI revolution as the value of mind labor.

Ed Porter



-----Original Message-----
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]]
Sent: Saturday, November 10, 2007 5:29 PM
To: agi@v2.listbox.com
Subject: Re: [agi] What best evidence for fast AI?

I'm impressed with the certainty of some of the views expressed here,
nothing like I get talking to people actually building robots.

- Jef




Robotics involves a lot of difficulties regarding sensor and actuator
mechanics and data-processing.  Whether these need to be solved to
create AGI is a matter of much contention.  Some, like Rodney Brooks,
think so.  Others, like me, doubt it -- though I think embodiment does
have a lot to offer an AGI system, hence my current focus on virtual
embodiment...

Still, in spite of the hurdles, the solvability of the problems facing
humanoid robotics within the next few decades seems pretty clear to me
--- if sufficient resources are devoted to the problem (and it's not
clear they will be).

I think that, compared to fundamental progress in AGI cognition,

-- our certitude in dramatic robotics progress can be greater, under
assumptions of adequate funding

-- the possibility of dramatic, rapid, shocking success in robotics is
LOWER than in cognition

-- Ben G


Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Jef Allbright
On 11/11/07, Edward W. Porter [EMAIL PROTECTED] wrote:

 Ben said -- the possibility of dramatic, rapid, shocking success in
 robotics is LOWER than in cognition

 That's why I tell people the value of manual labor will not be impacted as
 soon by the AGI revolution as the value of mind labor.

Both valid points -- emphasizing possibility leading to dramatic,
shocking success -- but this does not invalidate the (in my opinion)
greater near-term *probability* of accelerating development and
practical deployment of robotics and its broad impact on society.

Robotics (like all physical technologies) will hit a ceiling defined
by intelligence.

Machine intelligence surpassing human capabilities in general will be
far more dramatic, rapid, and shocking than any previous technology.

But we do not yet have a complete, verifiable theory, let alone a
practical design.

- Jef



Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel


 But we do not yet have a complete, verifiable theory, let alone a
 practical design.

 - Jef


To be more accurate, we don't have a practical design that is commonly
accepted in the AGI research community.

I believe that I *do* have a practical design for AGI and I am working hard
toward getting it implemented.

This practical design is based on a theory that is fairly complete, but
not easily verifiable using current technology.  The verification, it
seems, will come via actually getting the AGI built!

-- Ben G


Re: [agi] question about algorithmic search

2007-11-11 Thread Charles D Hixson

YKY (Yan King Yin) wrote:
 
I have the intuition that Levin search may not be the most efficient 
way to search programs, because it operates very differently from 
human programming.  I guess better ways to generate programs can be 
achieved by imitating human programming -- using techniques such as 
deductive reasoning and planning.  This second method may be faster 
than Levin-style searching, especially for complex 
programming problems, yet technically it is still a search algorithm.
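
To be concrete about what Levin search does: it runs all candidate
programs "in parallel", giving a program p a time share proportional to
2^-l(p) and doubling the total budget each phase.  A toy Python sketch,
with programs faked as generator factories carrying assumed bit-lengths
(a real version would enumerate binary programs for a universal
machine):

    def levin_search(programs, is_solution, max_phase=32):
        # programs: iterable of (length_in_bits, generator_factory)
        for phase in range(1, max_phase + 1):
            budget = 2 ** phase                # total time this phase
            for length, make_prog in programs:
                steps = budget >> length       # ~ budget * 2^-length
                if steps == 0:
                    continue
                gen = make_prog()              # fresh run each phase
                try:
                    for _ in range(steps):
                        out = next(gen)        # one "step" at a time
                        if is_solution(out):
                            return out
                except StopIteration:
                    pass                       # halted without success
        return None

    # toy usage: the longer "program" eventually counts up to 42
    progs = [(3, lambda: iter([1, 2, 3])),
             (5, lambda: iter(range(100)))]
    print(levin_search(progs, lambda o: o == 42))  # -> 42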
 
My questions are:
 
Is deductive-style programming more efficient than Levin-search?
 
If so, why is it faster?
 
YKY


Deduction can only be used in very constrained circumstances.  In such
circumstances, it's exponentially (or super-exponentially?) slow in the
number of cases to be handled.


I don't know anything about Levin searches, but heuristic searches are 
much faster at finding things in large search spaces than is deduction, 
even if deduction can be brought to bear (which is unusual). 
OTOH, if deduction can be brought to bear, then it is guaranteed to find 
the most correct solution.  Heuristic searches stop with something 
that's good enough, and rarely will do an exhaustive search.


That said, why do you think that people generally operate deductively?
This is something that some people have been trained to do, with
inferior accuracy.  I still don't know anything about Levin searches,
but people don't search for things deductively except in unusual
circumstances, so the fact that it's not deductive doesn't mean it
doesn't do things the way that people do.  (I think that people do a
kind of pattern matching... possibly several different kinds.  Actually,
I think that even when people are doing something that can be mapped
onto the rules of deduction, what they're actually doing is matching
against learned patterns.)  One reason that computers are so much better
than people at logic is that that's what they were built to do.  People
weren't and aren't.  But whenever one steps outside the bounds of logic
and math, computers really start showing how little capability they
actually have compared to people.  But computers will do what they are
told to do with incredible fidelity.  (Another part of how they were
designed.  So they can even emulate heuristic algorithms... slowly.)
You just don't notice most of what you are doing and thinking -- only a
small fraction that can easily be serialized, plus a few random
snapshots with low fidelity (lossy compression?).




Re: [agi] Connecting Compatible Mindsets

2007-11-11 Thread Charles D Hixson

Bryan Bishop wrote:

On Saturday 10 November 2007 14:10, Charles D Hixson wrote:

Bryan Bishop wrote:

On Saturday 10 November 2007 13:40, Charles D Hixson wrote:

OTOH, to make a go of this would require several people willing to
dedicate a lot of time consistently over a long duration.

A good start might be a few bibliographies.
http://liinwww.ira.uka.de/bibliography/

- Bryan

Perhaps you could elaborate?  I can see how those contributing to the
proposed wiki who also had access to a comprehensive math & comp-sci
library might find that useful, but I don't see it as a good way to
start.

Bibliography + paper archive, then.
http://arxiv.org/ (perhaps we need one for AGI)

It seems to me that a better way would be to put up a few pages with

(snip) Yes - that too would be useful.

create. For this kind of a wiki reliability is probably crucial, so

Or deadly, considering the majority of AI reputation comes from "I
*think* that guy over there, the one in the corner, might be doing
something interesting."

- Bryan

Reputation in *this* context means a numeric score that is attached to
the user account at the wiki.  How it gets modified is crucial, but it
must be seen as fair by the user community.  Everybody (except the
founders and sysAdmins) should start equal.  A decent system is to
start everyone at 0.1 and have all scores range over (0, 1) -- a doubly
open interval.  At discrete steps along the way, new moderation
capabilities should become available.  If your score drops much below
0.1, your account becomes deactivated.

It seems to me that a good system would increase the score for every
article posted and accepted... but it seems dubious that all postings
should be considered equal.  Perhaps individual pages could be voted
on, and that vote used to weight the delta to the account.  There
should also be a bonus for continued participation, at even the reader
level.  Etc.  LOTS of details.
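
A toy Python sketch of the above (every constant and rule here is an
illustrative assumption, not a worked-out design): scores live in the
doubly open interval (0, 1), everyone starts at 0.1, weighted page
votes nudge the score, and accounts well below the starting score are
deactivated:

    START = 0.1
    DEACTIVATE_BELOW = 0.05

    def apply_vote(score: float, vote: float,
                   weight: float = 0.05) -> float:
        # vote in [-1, +1]; nudge toward 1.0 on upvotes, toward 0.0
        # on downvotes, so the score can never leave (0, 1)
        if vote >= 0:
            score += weight * vote * (1.0 - score)
        else:
            score += weight * vote * score
        return min(max(score, 1e-9), 1.0 - 1e-9)

    def is_active(score: float) -> bool:
        # "drops much below 0.1 -> deactivated"
        return score >= DEACTIVATE_BELOW

    s = START
    for v in (+1.0, +1.0, -1.0):       # two upvotes, one downvote
        s = apply_vote(s, v)
    print(s, is_active(s))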


Also, some systems have proven vulnerable to manipulation via the 
creation of large numbers of throwaway accounts.  This would need to 
be guarded against.  (This is part of the rationale for increased 
weight for continued *active* participation, at even the reader level.  
Dormant accounts should not accrue status, and neither should 
hyperactive accounts.)


OTOH, considering the purpose of this wiki, perhaps there should be a
section which is open for bots, and in this section "hyperactive" might
well have a very different meaning.

If you're planning on implementing this, these are just some ideas to 
think about.  Personally I've never administered a wiki, and don't have 
access to a reasonable host if I wanted to.  Also, I don't know Perl 
(though I understand that some are written in Python or Ruby).





[agi] Re: What best evidence for fast AI?

2007-11-11 Thread Robin Hanson


At 05:48 PM 11/10/2007, Eliezer S. Yudkowsky wrote:

The anchor that I start with is my rough estimate of how long whole
brain emulation will take, and so I'm most interested in comparing AGI
to that anchor.  The fact that people are prone to take these estimate
questions as attitude surveys is all the more reason to seek concrete
arguments, rather than yet more attitudes.

If you want to compare AGI *relative* to whole brain emulation -
unanchoring the actual time and hence tossing any pretense of
futuristic prophecy out the window - then that's a whole separate
story.

Well, to the extent that I do think we have grounds for rough estimates
of emulation dates, comparative estimates for AGI would allow date
estimates for AGI as well.
I would begin by asking if there was ever, in the whole history of
technology, a single case where someone *first* duplicated a desirable
effect by emulating biology at a lower level of organization, without
understanding the principles of that effect's production from that low
level of organization.

I know of no important cases, but we do often emulate non-biological
systems this way, when they are complex and we mainly care about
computed I/O behavior.  We record music and movies, and we port
software.  We also often reverse-engineer physical devices by copying
complex designs we don't fully understand.  Organizations also often
copy procedures from other organizations they don't understand.  I
agree that a lack of biological examples should give us pause, but have
we ever really wanted to reproduce the I/O behavior of complex
biological software before?
Looking at history, we find two lessons:

1) Extremely mysterious-seeming desirable natural phenomena are
eventually understood and duplicated by engineering;

2) Because they have ceased to be mysterious by the time they are
duplicated, humans design them by engineering backward from the desired
results, rather than by exactly emulating the lower levels of
organization of a black box in Nature whose mysteriousness remains
intact even as it is emulated.

Cars don't emulate horse biochemistry, sonar doesn't emulate bat
biochemistry, compasses don't emulate pigeon biochemistry, suspension
bridges don't emulate spider biochemistry, dams don't emulate beaver
building techniques, and *certainly* none of these things emulate
biology *without understanding why the resulting product works*.

But again, these aren't examples of trying to reproduce complex
computed I/O behavior.

Robin Hanson [EMAIL PROTECTED]
http://hanson.gmu.edu 
Research Associate, Future of Humanity Institute at Oxford
University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-
703-993-2326 FAX: 703-993-2323
 




Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
Richard,




 Even Ben Goertzel, in a recent comment, said something to the effect
 that the only good reason to believe that his model is going to function
 as advertised is that *when* it is working we will be able to see that
 it really does work:


The above paragraph is a distortion of what I said, and misrepresents
my own thoughts and beliefs.

I think that, after the Novamente design and the ideas underlying it
are carefully studied by a suitably trained individual, the hypothesis
that it will lead to a human-level AI comes to seem plausible.  But
there is no solid proof; it's in part a matter of educated intuition.


The following quote which you gave is accurate:


 Ben Goertzel wrote:
  This practical design is based on a theory that is fairly complete,
  but not easily verifiable using current technology.  The verification,
  it seems, will come via actually getting the AGI built!

 This is a million miles short of a declaration that there are no hard
 problems left in AI.



Whether there are hard problems left in AI, conditional on the
assumption that the Novamente design is workable, comes down to a
question of semantic interpretation.

In the completion of the detailed design and implementation of the
Novamente system, there are around a half-dozen research problems on
the PhD-thesis level to be solved.

This means there is some hard thinking left, yet if the Novamente
design is correct, it pertains to some well-defined and well-delimited
technical questions, which seem very likely to be solvable.

As an example, there is the task of generalizing the MOSES algorithm
(see metacog.org) to handle general programmatic constructs at the
nodes of its internal program trees.  Of course this is a hard problem,
yet it's a well-defined computer science problem which (after a lot of
thought) doesn't seem likely to be hiding any deep gotchas.
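
Purely as a hypothetical illustration of "general programmatic
constructs at the nodes" (my own toy encoding, not the actual MOSES
representation), here is a program tree whose nodes carry conditionals
and arithmetic rather than a fixed function set:

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass
    class Node:
        op: Any                      # 'if', '+', 'var', 'const', ...
        children: list = field(default_factory=list)

    def run(node: Node, env: dict) -> Any:
        if node.op == 'const':
            return node.children[0]
        if node.op == 'var':
            return env[node.children[0]]
        if node.op == '+':
            return run(node.children[0], env) + run(node.children[1], env)
        if node.op == 'if':
            cond, then_b, else_b = node.children
            return run(then_b, env) if run(cond, env) else run(else_b, env)
        raise ValueError(f"unknown op {node.op}")

    # (x + 1) if x else 0
    tree = Node('if', [Node('var', ['x']),
                       Node('+', [Node('var', ['x']), Node('const', [1])]),
                       Node('const', [0])])
    print(run(tree, {'x': 3}))       # -> 4

An evolutionary search would then mutate and recombine such trees.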

But this is research and development -- not pure development -- so one never
knows for sure...

-- Ben


Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Richard Loosemore

Benjamin Goertzel wrote:

Richard,

 Even Ben Goertzel, in a recent comment, said something to the effect
 that the only good reason to believe that his model is going to
 function as advertised is that *when* it is working we will be able to
 see that it really does work:

The above paragraph is a distortion of what I said, and misrepresents
my own thoughts and beliefs.


When pressed, you always resort to a phrase equivalent to this one: I
think that, after the Novamente design and the ideas underlying it are
carefully studied by a suitably trained individual, the hypothesis that
it will lead to a human-level AI comes to seem plausible


When you look carefully at this phrasing, its core is a statement that 
the best reason to believe that it will work is the *intuition* of 
someone who studies the design ... and you state that you believe that 
anyone who is suitably trained, who studies it, will have the same 
intuition that you do.  This is all well and good, but it contains no 
metric, no new analysis of the outstanding problems that we can all 
scrutinize and assess.


I would consider an appeal to the intuition of suitably trained 
individuals to be very much less than a good reason to believe that 
the model is going to function as advertised.


Thus: if someone wanted volunteers to fly in their brand-new aircraft 
design, but all they could do to reassure people that it was going to 
work were the intuitions of suitably trained individuals, then most 
rational people would refuse to fly - they would want more than intuitions.


In this light, my summary would not be a distortion of your position at 
all, but only a statement about whether an appeal to intuition counts as 
a good reason to believe.


And, of course, there are some suitably trained individuals who do not 
share your intuitions, even given the limited access they have to your 
detailed design.


I respect your optimism, and applaud your single-minded commitment to 
the project:  if it is going to work, that is the way to get it done.  I 
certainly wish you luck with it.





Richard Loosemore






Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Richard Loosemore

Edward W. Porter wrote:

Richard,

Goertzel claims his planning indicates it is roughly 6 years x 15
excellent, hard-working programmers, or 90 man-years, to get his
architecture up and running.  I assume that will involve a lot of
“hard” mental work.

By “hard problem” I mean a problem for which we don’t have what seems --
within the Novamente model -- to be a way of handling it at, at least,
a roughly human level.  We won’t have proof that the problem is not
hard until we actually get the part of the system that deals with that
problem up and running successfully.

Until then, you have every right to be skeptical.  But you also have
the right, should you so choose, to open your mind up to the tremendous
potential of the Novamente approach.




RICHARD What would be the solution of the grounding problem?
ED Not hard.  As one linguist said, “Words are defined by the company
they keep.”  Kinda like I am guessing Google sets work, but at more
different levels in the gen/comp pattern hierarchy and with more
cross-inferencing between different Google-set seeds.  The same goes
not only for words, but for almost all concepts and sub-concepts.
Grounding is made out of a lifetime of experience recording such
associations, and the dynamic reactivation of those associations, both
in the subconscious and the conscious, in response to current
activations.


RICHARD What would be the solution of the problem of autonomous,
unsupervised learning of concepts?
ED Not hard!  Read Novamente (or, for a starter, my prior summaries of
it).  That’s one of its main focuses.


RICHARD Can you find proofs that inference control engines will not
show divergent behavior under heavy load (i.e., will they degrade
gracefully when forced to provide answers in real time)?

ED Not totally clear.  Brain-level hardware will really help here,
but what is six orders of magnitude against the potential for
combinatorial explosion in dynamic activations of something as large
and high-dimensional as world knowledge?

This issue falls under the
getting-it-all-to-work-together-well-automatically heading, which I
said is non-trivial.  But Novamente directs a lot of attention to these
problems by, among other approaches, (a) using long- and short-term
importance metrics to guide computational resource allocation, (b)
having a deep memory of which computational patterns have proven
appropriate in prior similar circumstances, (c) having a gen/comp
hierarchy of such prior computational patterns which allows them to be
instantiated in a given case in a context-appropriate way, and (d)
providing powerful inferencing mechanisms that go way beyond those
commonly used in most current AIs.
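
To illustrate just point (a), and only as a toy sketch under my own
assumptions (not Novamente's actual mechanism): a scheduler that spends
a fixed cycle budget on the inference tasks with the highest blended
short/long-term importance, decaying short-term importance as it goes
to damp runaway activation:

    import heapq

    def priority(sti: float, lti: float, w_short: float = 0.7) -> float:
        # blend short-term (sti) and long-term (lti) importance
        return w_short * sti + (1.0 - w_short) * lti

    def run_cycle(tasks, budget: int, decay: float = 0.9):
        # tasks: list of dicts with 'sti', 'lti', and a 'step' callable
        heap = [(-priority(t['sti'], t['lti']), i)
                for i, t in enumerate(tasks)]
        heapq.heapify(heap)
        while budget > 0 and heap:
            _, i = heapq.heappop(heap)
            t = tasks[i]
            t['step']()            # one unit of inference work
            t['sti'] *= decay      # damp activation: curbs explosions
            heapq.heappush(heap, (-priority(t['sti'], t['lti']), i))
            budget -= 1

    tasks = [{'sti': 0.9, 'lti': 0.2, 'step': lambda: None},
             {'sti': 0.1, 'lti': 0.8, 'step': lambda: None}]
    run_cycle(tasks, budget=10)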


I am totally confident we could get something very useful out of the
system even if it was not as well tuned as a human brain.  There are
all sorts of ways you could dampen the potential not only for
combinatorial explosion, but also for instability.  We probably would
start it out with a lot of such damping, but over time give it more
freedom to control its own parameters.


RICHARD Are there solutions to the problems of flexible, abstract
analogy building?  Language learning?
ED Not hard!  A Novamente-class machine would be like Hofstadter’s
CopyCat on steroids when it comes to making analogies.

The gen/comp hierarchy of patterns would not only apply to all the 
concepts that fall directly within what we think of as NL, but also to 
the system’s world-knowledge, itself, of which such NL concepts and 
their contexts would be a part.  This includes knowledge about its own 
life-history, behavior, and the feedback it has received.  Thus, it 
would be fully capable of representing and matching concepts at the 
level humans do when understanding and communicating with NL.  The deep 
contextual grounding contained within such world knowledge and the 
ability to make inferences from it in real time would largely solve the 
hard disambiguation problems in natural language recognition, and allow 
language generation to be performed rapidly in a way that is appropriate 
to all the levels of context that humans use when speaking.



RICHARD Pragmatics?
ED Not hard!  Follows from the above answer.  Understanding of
pragmatics would result from the ability to dynamically generalize,
from prior similar statements in prior similar contexts, what those
prior contexts contained.





RICHARD Ben Goertzel wrote:
Goertzel This practical design is based on a theory that is 
fairly complete, but not easily verifiable using current technology.  
The verification, it seems, will come via actually getting the AGI built!


ED  You and Ben are totally correct.  None of this will be proven 
until it has actually been shown to work.  But significant pieces of it 
have already been shown to work. 

I think Ben believes it will work, as do I, but we both agree it will 
not be “verifiable” until it actually does.



Re: [agi] What best evidence for fast AI?

2007-11-11 Thread Benjamin Goertzel
Richard,



 Thus: if someone wanted volunteers to fly in their brand-new aircraft
 design, but all they could do to reassure people that it was going to
 work were the intuitions of suitably trained individuals, then most
 rational people would refuse to fly - they would want more than
 intuitions.


Yeah, sure.  I wouldn't trust the Novamente design's AGI potential, at
this stage, nearly enough to allow the life of one of my kids to depend on
it.

But I trust cars and airplanes in this manner every day.

Novamente is a promising-looking R&D project, not a proven technology;
that's obvious.




 In this light, my summary would not be a distortion of your position at
 all, but only a statement about whether an appeal to intuition counts as
 a good reason to believe.



Just to be clear: the whole design doesn't have to be taken in one big
gulp of mysterious intuition.  There are plenty of well-substantiated
aspects, substantiated by math or by prototype experiments or by
functionalities of various system components.  But there are some
aspects whose ability to deliver the desired functionality is not yet
well substantiated, also.



 And, of course, there are some suitably trained individuals who do not
 share your intuitions, even given the limited access they have to your
 detailed design.


So far, no one who has taken the time to carefully study the detailed
design has come forward and told me "I think that ain't gonna work."
Varying levels of confidence have been expressed; and most of all, the
opinion has been expressed that the design is complicated, and even
though the whole thing seems to make a lot of sense, there are a heck
of a lot of details to be resolved.



 I respect your optimism, and applaud your single-minded commitment to
 the project:  if it is going to work, that is the way to get it done.  I
 certainly wish you luck with it.


Thanks!
Ben
