Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
   We just need to control the AGI's goal system.
 
  You can only control the goal system of the first iteration.
 
 
 ...and you can add rules for its creations (e.g. stick with the same
 goals/rules unless authorized otherwise)

You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.

But if consciousness does not exist...
  
   obviously, it does exist.
 
  Belief in consciousness exists.  There is no test for the truth of this
  belief.
 
 Consciousness is basically an awareness of certain data and there are
 tests for that.

autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?
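
For concreteness, the kind of program at issue can be sketched as follows.  This
is my own toy illustration, NOT the actual autobliss code: a few lines that
adjust their responses to numeric "pleasure" and "pain" signals, and so pass
simple behavioural tests for awareness of their inputs.

# Toy sketch (my own illustration, NOT the actual autobliss code) of a program
# that adapts its responses to numeric "pleasure" and "pain" signals.
import random

class RewardLearner:
    def __init__(self):
        # one adjustable response strength per possible input pair
        self.weights = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}

    def respond(self, a, b):
        # respond 1 if the learned strength for this input pair is positive
        return 1 if self.weights[(a, b)] > 0 else 0

    def reinforce(self, a, b, reward):
        # positive reward strengthens the last response, negative weakens it
        delta = reward if self.respond(a, b) == 1 else -reward
        self.weights[(a, b)] += delta

learner = RewardLearner()
for _ in range(1000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = learner.respond(a, b)
    target = a and b                      # train it toward AND, say
    learner.reinforce(a, b, 1.0 if out == target else -1.0)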


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64515425-65dd64


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Richard Loosemore

Bryan Bishop wrote:

On Monday 12 November 2007 22:16, Richard Loosemore wrote:

If anyone were to throw that quantity of resources at the AGI problem
(recruiting all of the planet), heck, I could get it done in about 3
years. ;-)


I have done some research on this topic in the last hour and have found 
that a Connectome Project is in fact in the very early stages out 
there on the internet:


http://iic.harvard.edu/projects/connectome.html
http://acenetica.blogspot.com/2005/11/human-connectome.html
http://acenetica.blogspot.com/2005/10/mission-to-build-simulated-brain.html
http://www.indiana.edu/~cortex/connectome_plos.pdf


This is the whole brain emulation approach, I guess (my previous 
comments were about evolution of brains rather than neural level 
duplication).


But (switching topics to whole brain emulation) there are serious 
problems with this.


It seems quite possible that what we need is a detailed map of every 
synapse, exact layout of dendritic tree structures, detailed knowledge 
of the dynamics of these things (they change rapidly) AND wiring between 
every single neuron.


When I say "it seems possible" I mean that the chance of this 
information being absolutely necessary in order to understand what the 
neural system is doing is so high that we would not want to gamble on 
it NOT being necessary.


So are the researchers working at that level of detail?

Egads, no!  Here's a quote from the PLOS Computational Biology paper you 
referenced (above):


"Attempting to assemble the human connectome at the level
of single neurons is unrealistic and will remain infeasible at
least in the near future."

They are not even going to do it at the resolution needed to see 
individual neurons?!


I think that if they did the whole project at that level of detail it 
would amount to a possibly interesting hint at some of the wiring, of 
peripheral interest to people doing work at the cognitive system level. 
 But that is all.


I think it would be roughly equivalent to the following:  You say to me 
"I want to understand how computers work, in enough detail to build my 
own" and I reply with "I can get you a photo of a motherboard and a 
500 by 500 pixel image of the inside of an Intel chip..."




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64523531-24742d


Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Richard Loosemore

Matt Mahoney wrote:

--- Jiri Jelinek [EMAIL PROTECTED] wrote:


On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

We just need to control the AGI's goal system.

You can only control the goal system of the first iteration.


...and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)


You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.


This statement has been challenged many times.  It is based on 
assumptions that are, at the very least, extremely questionable, and 
according to some analyses, extremely unlikely.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64528236-2fa800


[agi] Human uploading

2007-11-13 Thread Benjamin Goertzel
Richard,

I recently saw a talk by Todd Huffman at the Foresight Unconference on the
topic of mind uploading technology, and he was specifically showing off
techniques for imaging brain slices that *do* give the level of biological
detail you're thinking of.  One topic of discussion, for example, was
inferring synaptic strength indirectly from mitochondrial activity.

So, the Connectome people may not be taking a sufficiently fine-grained
approach to support mind-uploading, but others are trying...

Obviously, a detailed map of the brain at the level Todd is thinking of,
would be of more than peripheral interest to cognitive scientists.  It would
not resolve cognitive questions in itself, but would be a wonderful trove
of data to use to help validate or refute cognitive theories.

-- Ben G



On Nov 13, 2007 10:11 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Bryan Bishop wrote:
  On Monday 12 November 2007 22:16, Richard Loosemore wrote:
  If anyone were to throw that quantity of resources at the AGI problem
  (recruiting all of the planet), heck, I could get it done in about 3
  years. ;-)
 
  I have done some research on this topic in the last hour and have found
  that a Connectome Project is in fact in the very early stages out
  there on the internet:
 
  http://iic.harvard.edu/projects/connectome.html
  http://acenetica.blogspot.com/2005/11/human-connectome.html
 
 http://acenetica.blogspot.com/2005/10/mission-to-build-simulated-brain.html
  http://www.indiana.edu/~cortex/connectome_plos.pdf

 This is the whole brain emulation approach, I guess (my previous
 comments were about evolution of brains rather than neural level
 duplication).

 But (switching topics to whole brain emulation) there are serious
 problems with this.

 It seems quite possible that what we need is a detailed map of every
 synapse, exact layout of dendritic tree structures, detailed knowledge
 of the dynamics of these things (they change rapidly) AND wiring between
 every single neuron.

 When I say it seems possible I mean that the chance of this
 information being absolutely necessary in order to understand what the
 neural system is doing, is so high that we would not want to gamble on
 them NOT being necessary.

 So are the researchers working at that level of detail?

 Egads, no!  Here's a quote from the PLOS Computational Biology paper you
 referenced (above):

 Attempting to assemble the human connectome at the level
 of single neurons is unrealistic and will remain infeasible at
 least in the near future.

 They are not even going to do it at the resolution needed to see
 individual neurons?!

 I think that if they did the whole project at that level of detail it
 would amount to a possibly interesting hint at some of the wiring, of
 peripheral interest to people doing work at the cognitive system level.
  But that is all.

 I think it would be roughly equivalent to the following:  You say to me
 "I want to understand how computers work, in enough detail to build my
 own" and I reply with "I can get you a photo of a motherboard and a
 500 by 500 pixel image of the inside of an Intel chip..."



 Richard Loosemore

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64558273-86797b

Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore

Mark Waser wrote:
I'm going to try to put some words into Richard's mouth here since 
I'm curious to see how close I am . . . . (while radically changing the 
words).
 
I think that Richard is not arguing about the possibility of 
Novamente-type solutions as much as he is arguing about the 
predictability of *very* flexible Novamente-type solutions as they grow 
larger and more complex (and the difficulty in getting it to not 
instantaneously crash-and-burn).  Indeed, I have heard a very faint 
shadow of Richard's concerns in your statements about the tuning 
problems that you had with BioMind.


This is true, but not precise enough to capture the true nature of my worry.

Let me focus on one aspect of the problem.  My goal here is to describe 
in a little detail how the Complex Systems Problem actually bites in a 
particular case.


Suppose that in some significant part of Novamente there is a 
representation system that uses probability or likelihood numbers to 
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
is supposed to express the idea that the statement [I like cats] is in 
some sense 75% true.


[Quick qualifier:  I know that this oversimplifies the real situation in 
Novamente, but I need to do this simplification in order to get my point 
across, and I am pretty sure this will not affect my argument, so bear 
with me].


We all know that this p value is not quite a "probability" or 
"likelihood" or "confidence factor".  It plays a very ambiguous role in 
the system, because on the one hand we want it to be very much like a 
probability in the sense that we want to do calculations with it:  we 
NEED a calculus of such values in order to combine facts in the system 
to make inferences.  But we also do not want to lock ourselves into a 
particular interpretation of what it means, because we know full well 
that we do not really have a clear semantics for these numbers.


Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
ungrounded because we have to interpret it.  Does it mean that I like 
cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
Are the cats that I like always the same ones, or is the chance of an 
individual cat being liked by me something that changes?  Does it mean 
that I like all cats, but only 75% as much as I like my human family, 
which I like(p=1.0)?  And so on and so on.
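
To see the ambiguity numerically, here is a toy sketch (mine, with invented
data) in which one and the same experience log yields different p values for
[I like cats] under two of the readings above:

# Toy sketch (invented data) of how one experience log supports several
# different "p" values for [I like cats], depending on interpretation.
encounters = [                      # (cat, liked this time?)
    ("tom", True), ("tom", True), ("tom", False),
    ("felix", True), ("felix", True),
    ("whiskers", False), ("whiskers", False), ("whiskers", True),
]

# Reading 1: the fraction of encounters in which I liked the cat
p_per_encounter = sum(liked for _, liked in encounters) / len(encounters)

# Reading 2: the fraction of individual cats I like on balance
cats = {c for c, _ in encounters}
def liked_on_balance(cat):
    votes = [liked for c, liked in encounters if c == cat]
    return sum(votes) > len(votes) / 2
p_per_cat = sum(liked_on_balance(c) for c in cats) / len(cats)

print(p_per_encounter, p_per_cat)   # 0.625 vs. 0.667 -- the readings disagree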


Digging down to the root of this problem (and this is the point where I 
am skipping from baby stuff to hard core AI) we want these numbers to be 
semantically compositional and interpretable, but in order to make sure 
they are grounded, the system itself is going to have to build and 
interpret them without our help ... and it is not clear that this 
grounding can be completely implemented.  Why is it not clear?  Because 
when you try to build the entire grounding mechanism(s) you are forced 
to become explicit about what these numbers mean, during the process of 
building a grounding system that you can trust to be doing its job:  you 
cannot create a mechanism that you *know* is constructing sensible p 
numbers and facts during all of its development *unless* you finally 
bite the bullet and say what the p numbers really mean, in fully cashed 
out terms.


[Suppose you did not do this.  Suppose you built the grounding mechanism 
but remained ambiguous about the meaning of the p numbers.  What would 
the resulting system be computing?  From end to end it would be building 
facts with p numbers, but you the human observer would still be imposing 
an interpretation on the facts.  And if you are still doing anything to 
interpret, it cannot be grounded].


Now, as far as I understand it, the standard approach to this conundrum 
is that researchers (in Novamente and elsewhere) do indeed make an 
attempt to disambiguate the p numbers, but they do it by developing more 
sophisticated logical systems.  First, perhaps, error-value bands of p 
values instead of sharp values.  And temporal logic mechanisms to deal 
with time.  Perhaps clusters of p and q and r and s values, each with 
some slightly different zones of applicability.  More generally, people 
try to give structure to the qualifiers that are appended to the facts: 
[I like cats](qualifier=value) instead of [I like cats](p=0.75).


The question is, does this process of refinement have an end?  Does it 
really lead to a situation where the qualifier is disambiguated and the 
semantics is clear enough to build a trustworthy grounding system?  Is 
there a closed-form solution to the problem of building a logic that 
disambiguates the qualifiers?


Here is what I think will happen if this process is continued.  In order 
to make the semantics unambiguous enough to let the system ground its 
own knowledge without the interpretation of p values, researchers will 
develop more and more sophisticated logics (with more and more 
structured replacements for that simple p value), until they are 

RE: [agi] What best evidence for fast AI?

2007-11-13 Thread Edward W. Porter
Response to Mark Waser's Mon 11/12/2007 2:42 PM post.



MARK  Remember that the brain is *massively* parallel.  Novamente and
any other linear (or minorly-parallel) system is *not* going to work in
the same fashion as the brain.  Novamente can be parallelized to some
degree but *not* to anywhere near the same degree as the brain.  I love
your speculation and agree with it -- but it doesn't match near-term
reality.  We aren't going to have brain-equivalent parallelism anytime in
the near future.



ED I think in five to ten years there could be computers capable of
providing every bit as much parallelism as the brain at prices that will
allow thousands or hundreds of thousands of them to be sold.



But it is not going to happen overnight.  Until then the lack of brain-level
hardware is going to limit AGI.  But there are still a lot of high-value
systems that could be built on, say, $100K to $10M of hardware.
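
For what it is worth, the degree of parallelism being talked about can be put
in rough numbers.  Every figure in the sketch below is an order-of-magnitude
assumption, not a measurement:

# Back-of-the-envelope estimate; all figures are order-of-magnitude assumptions.
neurons = 1e11              # ~10^11 neurons in the human brain
synapses_per_neuron = 1e4   # ~10^4 synapses per neuron
firing_rate_hz = 100        # generous bound on sustained firing rate

synapses = neurons * synapses_per_neuron           # ~10^15 synapses
synapse_events_per_sec = synapses * firing_rate_hz # ~10^17 events per second

print("synapses: %.0e" % synapses)
print("synapse events per second: %.0e" % synapse_events_per_sec)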



You claim we really need experience with computing and controlling
activation over large atom tables.  I would argue that obtaining such
experience should be a top priority for government funders.



MARK  The node/link architecture is very generic and can be used for
virtually anything.  There is no rational way to attack it.  It is, I
believe, going to be the foundation for any system since any system can
easily be translated into it.  Attacking the node/link architecture is
like attacking assembly language or machine code.  Now -- are you going to
write your AGI in assembly language?  If you're still at the level of
arguing node/link, we're not communicating well.



ED  nodes and links are what patterns are made of, and each static
pattern can have an identifying node associated with it as well as the
nodes and links representing its sub-patterns, elements, the compositions
of which it is part, its associations, etc.  The system automatically
organizes patterns into a gen/comp hierarchy.  So, I am not just dealing at
a node and link level, but they are the basic building blocks.





MARK ... I *AM* saying that the necessity of using probabilistic
reasoning for day-to-day decision-making is vastly over-rated and has been
a horrendous side-road for many/most projects because they are attempting
to do it in situations where it is NOT appropriate.  The increased,
almost ubiquitous adaptation of probabilistic methods is the herd
mentality in action (not to mention the fact that it is directly
orthogonal to work thirty years older).  Most of the time, most projects
are using probabilistic methods to calculate a tenth place decimal of a
truth value when their data isn't even sufficient for one.  If you've got
a heavy-duty discovery system, probabilistic methods are ideal.  If you're
trying to derive probabilities from a small number of English statements
(like "this raven is white" and "most ravens are black"), you're seriously
on the wrong track.  If you go on and on about how humans don't understand
Bayesian reasoning, you're both correct and clueless in not recognizing
that your very statement points out how little Bayesian reasoning has to
do with most general intelligence.  Note, however, that I *do* believe
that probabilistic methods *are* going to be critically important for
activation for attention, etc.



ED  I agree that many approaches accord too much importance to the
numerical accuracy and Bayesian purity of their approach, and not enough
importance on the justification for the Bayesian formulations they use.
I know of one case where I suggested using information that would almost
certainly have improved a perception process and the suggestion was
refused because it would not fit within the system’s probabilistic
framework.   At an AAAI conference in 1997 I talked to a programmer for a
big defense contractor who said he was a fan of fuzzy logic systems; that
they were so much simpler to get up and running because you didn't have
to worry about probabilistic purity.  He said his group that used fuzzy
logic was getting things out the door that worked faster than the more
probability-limited competition.  So obviously there is something to be said
for not letting probabilistic purity get in the way of more reasonable
approaches.



But I still think probabilities are darn important. Even your “this raven
is white” and “most ravens are black” example involves notions of
probability.  We attribute probabilities to such statements based on
experience with the source of such statements or similar sources of
information, and the concept “most” is a probabilistic one.  The reason we
humans are so good at reasoning from small data is based on our ability to
estimate rough probabilities from similar or generic patterns.



MARK  The problem with probability-based conflict resolution is
that it is a hack to get around insufficient knowledge rather than an
attempt to figure out how to get more knowledge



ED This agrees with what I said above about not putting enough
emphasis on selecting what 

[agi] advice-level dev collaboration

2007-11-13 Thread Jiri Jelinek
I'm looking for a skilled coder from the AGI community who is well
familiar with Java/JEE, SWT/JFace, JWS, PHP, Ajax, MySQL, PostgreSQL -
under Windows & Linux platforms + familiar with Eclipse as well as
NetBeans IDE + who has a good sense of application security (e.g. the
Acegi stuff and/or other alternatives for handling authentication,
authorization, instance-based access control, RBAC, channel security,
human user detection capabilities etc). Having also some linguistics
related skills would be awesome. I'm NOT offering a paid job and I'm
[currently] not planning to ask the developer to write any code for
me. I'm a relatively skilled developer myself, familiar enough with the
above mentioned technology to use it. But using it is one thing and
making important architecture decisions is another. I have done lots
of coding and some architecture in the M$ world (=significant part of
my tech-background). When it comes to the vast open source dev world,
I have done coding (I'm a Java/Oracle pro now) but not much of the
architecture yet (even though I'm not really clueless). So the help
I'm looking for would be mostly architecture-advice level,
occasionally slipping into specific coding details. Nothing terribly
time-demanding (I'm also busy with lots of other stuff). Just
occasional email exchange about highly technical topics. Results of
the online research are sometimes too ambiguous and not that easy to
evaluate. If you are the all-knowing guru I'm looking for and willing
to help, please get in touch through my private gmail account.

Thanks,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64603883-e8db13


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
Richard,

The idea of the PLN semantics underlying Novamente's probabilistic
truth values is that we can have **both**

-- simple probabilistic truth values without highly specific interpretation

-- more complex, logically refined truth values, when this level of
precision is necessary

To make the discussion more concrete, I'll use a specific example
to do with virtual animals in Second Life.  Our first version of the
virtual pets won't use PLN in this sort of way, it'll be focused on MOSES
evolutionary learning; but, this is planned for the second version and
is within the scope of what Novamente can feasibly be expected to
do with modest effort.

Consider an avatar identified as Bob_Yifu

And, consider the concept of friend, which is a ConceptNode

-- associated to the WordNode friend via a learned ReferenceLink
-- defined operationally via a number of links such as

ImplicationLink
    AND
        InheritanceLink X friend
        EvaluationLink near (I, X)
    Pleasure

(this one just says that being near a friend confers pleasure.  Other
links about friendship may contain knowledge such as that friends
often give one food, friends help one find things, etc.)

 The concept of friend may be learned, via mining of the animal's
experience-base --
basically, this is a matter of learning that there are certain predicates
whose SatisfyingSets (the set of Atoms that fulfill the predicate)
have significant intersection, and creating a ConceptNode to denote
that intersection.
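
A minimal sketch of that mining step (my own simplification, not actual
Novamente code): find predicates whose SatisfyingSets overlap heavily, and
create a concept whose extension is the intersection.

# Minimal sketch (my simplification, not actual Novamente code) of forming a
# concept from predicates whose SatisfyingSets intersect significantly.
satisfying_sets = {
    "is_near_when_I_feel_pleasure": {"Bob_Yifu", "Jane_Avatar", "Fido"},
    "gives_me_food":                {"Bob_Yifu", "Jane_Avatar"},
    "helps_me_find_things":         {"Bob_Yifu", "Jane_Avatar", "Rex"},
}

def form_concept(sets, threshold=0.5):
    # intersect all the satisfying sets; keep the concept only if the overlap
    # is a large enough fraction of the smallest set
    members = set.intersection(*sets.values())
    smallest = min(len(s) for s in sets.values())
    return members if len(members) / smallest >= threshold else None

print(form_concept(satisfying_sets))   # {'Bob_Yifu', 'Jane_Avatar'} -> "friend"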

Then, once the concept of friend has been formed, more links pertaining
to it may be learned via mining the experience base and via inference rules.

Then, we may find that

InheritanceLink Bob_Yifu friend <.9,1>

(where the <.9,1> is an interval probability, interpreted according to
the indefinite probabilities framework) and this link mixes intensional
and extensional inheritance, and thus is only useful for heuristic
reasoning (which however is a very important kind).

What this link means is basically that Bob_Yifu's node in the memory
has a lot of the same links as the friend node -- or rather, that it
**would**, if all its links were allowed to exist rather than being
pruned to save memory.  So, note that the semantics are actually
tied to the mind itself.

Or we can make more specialized logical constructs if we really
want to, denoting stuff like

-- at certain times Bob_Yifu is a friend
-- Bob displays some characteristics of friendship very strongly,
and others not at all
-- etc.

We can also do crude, heuristic contextualization like

ContextLink <.7,.8>
 home
 InheritanceLink Bob_Yifu friend

which suggests that Bob is less friendly at home than
in general.
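
One crude way to picture where that contextual strength comes from (invented
numbers, not the exact PLN formula): re-evaluate the inheritance using only the
relationships observed while the context held.

# Sketch (invented numbers, not the exact PLN formula): re-evaluate the
# inheritance strength using only relationships observed in the "home" context.
friend_links_at_home = {"gives_food", "stays_close", "plays", "shares_toys", "guards"}
bob_links_at_home    = {"gives_food", "stays_close", "ignores_me", "sleeps"}

strength_at_home = len(bob_links_at_home & friend_links_at_home) / len(bob_links_at_home)
print(strength_at_home)   # 0.5 -- lower than Bob's overall friend-strength of ~0.9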

Again this doesn't capture all the subtleties of Bob's friendship in
relation to being at home -- and one could do so if one wanted to, but it
would require introducing a larger complex of nodes and links, which is
not always the most appropriate thing to do.

The PLN inference rules are designed to give heuristically
correct conclusions based on heuristically interpreted links;
or more precise conclusions based on more precisely interpreted
links.

Finally, the semantics of PLN relationships is explicitly an
**experiential** semantics.  (One of the early chapters in the PLN
book, to appear via Springer next year, is titled Experiential
Semantics.)  So, all node and link truth values in PLN are
intended to be settable and adjustable via experience, rather than
via programming or importation from databases or something like
that.

Now, the above example is of course a quite simple one.
Discussing a more complex example would go beyond the scope
of what I'm willing to do in an email conversation, but the mechanisms
I've described are not limited to such simple examples.

I am aware that identifying Bob_Yifu as a coherent, distinct entity is a
problem
faced by humans and robots, and eliminated via the simplicity of the SL
environment.  However, there is detailed discussion in the (proprietary) NM
book of
how these same mechanisms may be used to do object recognition and
classification, as well.

You may of course argue that these mechanisms won't scale up
to large knowledge bases and rich experience streams.  I believe that
they will, and have arguments but not rigorous proofs that they will.

-- Ben G



On Nov 13, 2007 12:34 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Mark Waser wrote:
  I'm going to try to put some words into Richard's mouth here since
  I'm curious to see how close I am . . . . (while radically changing the
  words).
 
  I think that Richard is not arguing about the possibility of
  Novamente-type solutions as much as he is arguing about the
  predictability of *very* flexible Novamente-type solutions as they grow
  larger and more complex (and the difficulty in getting it to not
  instantaneously crash-and-burn).  Indeed, I have heard a very faint
  shadow of Richard's concerns in your statements about the tuning
  problems that you had 

Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel



 For example, what is the equivalent of the activation control (or search)
 algorithm in Google Sets.  They operate over huge data.  I bet the
 algorithm for calculating their search or activation is relatively simple
 (much, much, much less than a PhD thesis) and look what they can do.  So I
 think one path is to come up with applications that can use and reason with
 large data, having roughly world knowledge-like sparseness, (such as NL
 data) and start with relatively simple activation algorithms and develop
 them from the ground up.



Google, I believe, does reasoning about word and phrase co-occurrence using
a combination of Bayes net learning with EM clustering (this is based on
personal conversations with folks who have worked on related software
there).

The use of EM helps the Bayes net approach scale.

Bayes nets are good for domains like word co-occurrence probabilities, in
which the relevant data is relatively static.  They are not much good for
real-time learning.

Unlike Bayes nets, the approach taken in PLN and NARS allows efficient
uncertain reasoning in dynamic environments based on large knowledge bases
(at least in principle, based on the math, algorithms and structures; we
haven't proved it yet).
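
For concreteness, here is a toy sketch of EM-style soft clustering over word
co-occurrence vectors.  This illustrates the general technique only -- it is
not Google's actual algorithm, and all the counts are invented:

# Toy sketch of EM-style soft clustering of words by co-occurrence vectors.
# General technique only -- NOT Google's actual algorithm; counts are invented.
import math

cooc = {                       # co-occurrence counts with four context words
    "cat":   [9, 8, 1, 0],
    "dog":   [8, 9, 0, 1],
    "stock": [0, 1, 9, 8],
    "bond":  [1, 0, 8, 9],
}

def normalize(v):
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

vectors = {w: normalize(v) for w, v in cooc.items()}
centers = [vectors["cat"], vectors["stock"]]        # crude initialization

for _ in range(10):
    # E-step: soft-assign each word to the clusters by similarity to each center
    resp = {}
    for w, v in vectors.items():
        sims = [math.exp(5 * sum(a * b for a, b in zip(v, c))) for c in centers]
        total = sum(sims)
        resp[w] = [s / total for s in sims]
    # M-step: recompute each center as the responsibility-weighted average
    for k in range(2):
        acc = [0.0, 0.0, 0.0, 0.0]
        for w, v in vectors.items():
            for i in range(4):
                acc[i] += resp[w][k] * v[i]
        centers[k] = normalize(acc)

print({w: round(r[0], 2) for w, r in resp.items()})  # cluster-0 responsibilities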

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64609544-b69ea5

Re: [agi] Human uploading

2007-11-13 Thread Benjamin Goertzel



 Yes, I thought I had heard of people trying more ambitious techniques,
 but in the cases I heard of (can't remember where now) the tradeoffs
 always left the approach hanging on one of the issues:  for example, was
 he talking about scanning mitochondrial activity in vivo, in real time,
 across the whole brain?!!  The mind boggles.  [Uh, and it probably
 would, if you were the subject].  Some people think they can do very
 thin slices, but they are in defuncto, not in vivo.



Yes, Todd believes (like most mind uploading experts) that the most practical
approach to mind uploading in the near term is to slice a dead brain and scan
it in.  Doing uploading on live brains is bound to be far more technologically
demanding, so it makes sense to focus on uploading fresh-killed brains first.





 Couldn't see any good references to this.



It was a talk, not a publication.  Not sure if it was videotaped or not.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64610913-6e5f3d

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore

Mike Tintner wrote:

RL:Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75)
is supposed to express the idea that the statement [I like cats] is in
some sense 75% true.

This essay seems to be a v.g. demonstration of why the human system 
almost certainly does not use numbers or anything like,  as stores of 
value - but raw, crude emotions.  How much do you like cats [or 
marshmallow ice cream]? Miaow//[or yummy] [those being an expression 
of internal nervous and muscular impulses] And black cats [or 
strawberry marshmallow] ? Miaow-miaoww![or yummy yummy] . It's crude 
but it's practical.


It is all a question of what role the numbers play.  Conventional AI 
wants them at the surface, and transparently interpretable.


I am not saying that there are no numbers, but only that they are below 
the surface, and not directly interpretable.  That might or might not 
gibe with what you are saying ... although I would not go so far as to 
put it in the way you do.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64636829-14d428


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Richard Loosemore


Ben,

Unfortunately what you say below is tangential to my point, which is 
what happens when you reach the stage where you cannot allow any more 
vagueness or subjective interpretation of the qualifiers, because you 
have to force the system to do its own grounding, and hence its own 
interpretation.


What you gave below was a sketch of some more elaborate 'qualifier' 
mechanisms.  But I described the process of generating more and more 
elaborate qualifier mechanisms in the body of the essay, and said why 
this process was of no help in resolving the issue.




Richard Loosemore





Benjamin Goertzel wrote:


Richard,

The idea of the PLN semantics underlying Novamente's probabilistic
truth values is that we can have **both**

-- simple probabilistic truth values without highly specific interpretation

-- more complex, logically refined truth values, when this level of
precision is necessary

To make the discussion more concrete, I'll use a specific example
to do with virtual animals in Second Life.  Our first version of the
virtual pets won't use PLN in this sort of way, it'll be focused on MOSES
evolutionary learning; but, this is planned for the second version and
is within the scope of what Novamente can feasibly be expected to
do with modest effort.

Consider an avatar identified as Bob_Yifu

And, consider the concept of friend, which is a ConceptNode

-- associated to the WordNode friend via a learned ReferenceLink
-- defined operationally via a number of links such as

ImplicationLink
    AND
        InheritanceLink X friend
        EvaluationLink near (I, X)
    Pleasure

(this one just says that being near a friend confers pleasure.  Other
links about friendship may contain knowledge such as that friends
often give one food, friends help one find things, etc.)

 The concept of friend may be learned, via mining of the animal's 
experience-base --

basically, this is a matter of learning that there are certain predicates
whose SatisfyingSets (the set of Atoms that fulfill the predicate)
have significant intersection, and creating a ConceptNode to denote
that intersection. 


Then, once the concept of friend has been formed, more links pertaining
to it may be learned via mining the experience base and via inference rules.

Then, we may find that

InheritanceLink Bob_Yifu friend <.9,1>

(where the <.9,1> is an interval probability, interpreted according to
the indefinite probabilities framework) and this link mixes intensional
and extensional inheritance, and thus is only useful for heuristic
reasoning (which however is a very important kind).

What this link means is basically that Bob_Yifu's node in the memory
has a lot of the same links as the friend node -- or rather, that it
**would**, if all its links were allowed to exist rather than being
pruned to save memory.  So, note that the semantics are actually
tied to the mind itself.

Or we can make more specialized logical constructs if we really
want to, denoting stuff like

-- at certain times Bob_Yifu is a friend
-- Bob displays some characteristics of friendship very strongly,
and others not at all
-- etc.

We can also do crude, heuristic contextualization like

ContextLink <.7,.8>
 home
 InheritanceLink Bob_Yifu friend

which suggests that Bob is less friendly at home than
in general.

Again this doesn't capture all the subtleties of Bob's friendship in
relation to being at home -- and one could do so if one wanted to, but 
it would

require introducing a larger complex of nodes and links, which is
not always the most appropriate
thing to do.

The PLN inference rules are designed to give heuristically
correct conclusions based on heuristically interpreted links;
or more precise conclusions based on more precisely interpreted
links. 


Finally, the semantics of PLN relationships is explicitly an
**experiential** semantics.  (One of the early chapters in the PLN
book, to appear via Springer next year, is titled Experiential
Semantics.)  So, all node and link truth values in PLN are
intended to be settable and adjustable via experience, rather than
via programming or importation from databases or something like
that.

Now, the above example is of course a quite simple one.
Discussing a more complex example would go beyond the scope
of what I'm willing to do in an email conversation, but the mechanisms
I've described are not limited to such simple examples.

I am aware that identifying Bob_Yifu as a coherent, distinct entity is a 
problem

faced by humans and robots, and eliminated via the simplicity of the SL
environment.  However, there is detailed discussion in the (proprietary) 
NM book of

how these same mechanisms may be used to do object recognition and
classification, as well.

You may of course argue that these mechanisms won't scale up
to large knowledge bases and rich experience streams.  I believe that
they will, and have arguments but not rigorous proofs that they will.

-- Ben G



On Nov 13, 2007 12:34 PM, Richard Loosemore 

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


 Ben,

 Unfortunately what you say below is tangential to my point, which is
 what happens when you reach the stage where you cannot allow any more
 vagueness or subjective interpretation of the qualifiers, because you
 have to force the system to do its own grounding, and hence its own
 interpretation.



I don't see why you talk about forcing the system to do its own grounding --
the probabilities in the system are grounded in the first place, as they
are calculated based on experience.

The system observes, records what it sees, abstracts from it, and chooses
actions that it guesses will fulfill its goals.  Its goals are ultimately
grounded in in-built feeling-evaluation routines, measuring stuff like the
amount of novelty observed, the amount of food in the system, etc.

So, the system sees and then acts ... and the concepts it forms and uses
are created/used based on their utility in deriving appropriate actions.

There is no symbol-grounding problem except in the minds of people who
are trying to interpret what the system does, and get confused.  Any symbol
used within the system, and any probability calculated by the system, are
directly grounded in the system's experience.

There is nothing vague about an observation like "Bob_Yifu was observed
at time-stamp 599933322", or a fact "Command 'wiggle ear' was sent
at time-stamp 54".  These perceptions and actions are the root of the
probabilities the system calculated, and need no further grounding.
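
In other words, something like the following toy illustration (mine, not
Novamente code): a probability is just a summary computed over a time-stamped
observation log, with no further interpretation required.

# Toy illustration (mine, not Novamente code) of a probability grounded
# directly in time-stamped observations.
observations = [
    (599933322, "Bob_Yifu observed"),
    (599933390, "wiggle ear command sent"),
    (599933401, "Bob_Yifu observed"),
    (599933555, "food received"),
    (599933600, "Bob_Yifu observed"),
]

# "How often is food received within 200 ticks of observing Bob_Yifu?"
def p_food_near_bob(log, window=200):
    bob_times = [t for t, e in log if e == "Bob_Yifu observed"]
    food_times = [t for t, e in log if e == "food received"]
    hits = sum(any(0 <= f - t <= window for f in food_times) for t in bob_times)
    return hits / len(bob_times)

print(p_food_near_bob(observations))   # 0.33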



 What you gave below was a sketch of some more elaborate 'qualifier'
 mechanisms.  But I described the process of generating more and more
 elaborate qualifier mechanisms in the body of the essay, and said why
 this process was of no help in resolving the issue.


So, if a system can achieve its goals based on choosing procedures that
it thinks are likely to achieve its goals, based on the knowledge it
gathered
via its perceived experience -- why do you think it has a problem?

I don't really understand your point, I guess.  I thought I did -- I thought
your point was that precisely specifying the nature of a conditional
probability is a rat's nest of complexity.  And my response was basically
that in Novamente we don't need to do that, because we define conditional
probabilities based on the system's own knowledge-base, i.e.

Inheritance A B .8

means

If A and B were reasoned about a lot, then A would (as measured by a
weighted average) have 80% of the relationships that B does

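
Stated as a computation, that definition looks roughly like the following (my
own toy version, with invented relationships and weights):

# Rough sketch (mine, invented data) of the stated semantics: the strength of
# "Inheritance A B" as the weighted fraction of B's relationships that A shares.
relations_of = {
    "A": {"eats_fish", "has_fur", "purrs", "chases_mice"},
    "B": {"eats_fish", "has_fur", "purrs", "chases_mice", "barks"},
}
weight = {"eats_fish": 2.0, "has_fur": 1.0, "purrs": 1.0,
          "chases_mice": 1.0, "barks": 1.0}   # importance weights (invented)

shared = relations_of["A"] & relations_of["B"]
strength = sum(weight[r] for r in shared) / sum(weight[r] for r in relations_of["B"])
print(round(strength, 2))   # 0.83 -- "A has roughly 80% of B's relationships"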
But apparently you were making some other point, which I did not grok,
sorry...

Anyway, though, Novamente does NOT require logical relations of escalating
precision and complexity to carry out reasoning, which is one thing you
seemed
to be assuming in your post.

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64644318-8bbdee

Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Linas Vepstas
On Mon, Nov 12, 2007 at 08:44:58PM -0500, Mark Waser wrote:
 
 So perhaps the AGI question is, what is the difference between
 a know-it-all mechano-librarian, and a sentient being?
 
 I wasn't assuming a mechano-librarian.  I was assuming a human that could 
 (and might be trained to) do some initial translation of the question and 
 some final rephrasing of the answer.

I'm surprised by your answer. 

I don't see that the hardest part of agi is NLP i/o. To put it into
perspective: one can fake up some trivial NLP i/o now, and with a bit of
effort, one can improve significantly on that.  Sure, it would be
child-like conversation, and the system would be incapable of learning
new idioms, expressions, etc., but I don't see that you'd need a human
to translate the question into some formal reasoning-engine language.

The hard part of NLP is being able to read complex texts, whether
Alexander Pope or Karl Marx; but a basic NLP i/o interface stapled to
a reasoning engine doesn't need to really do that, or at least not well.
Yet, these two stapled together would qualify as a mechano-librarian
for me.

To me, the hard part is still the reasoning engine itself, and the 
pruning, and the tailoring of responses to the topic at hand. 

So let me rephrase the question: If one had
1) A reasoning engine that could provide short yet appropriate responses
   to questions,
2) A simple NLP interface to the reasoning engine

would that be AGI?  I imagine most folks would say no, so let me throw
in: 

3) System can learn new NLP idioms, so that it can eventually come to
understand those sentences and paragraphs that make Karl Marx so hard to
read.

With this enhanced reading ability, it could then presumably become a
know-it-all ultra-question-answerer. 
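
To be concrete about what 1+2 amounts to, here is a toy sketch of a trivial
NLP front end stapled to a trivial reasoning engine.  Every fact and question
pattern is invented purely for illustration; it is exactly the kind of shallow
stapling described above, not a proposal:

# Toy sketch of points 1+2: a trivial NLP front end stapled to a trivial
# reasoning engine.  All facts and question patterns are invented.
facts = {
    ("horse", "is_a"): "mammal",
    ("mammal", "is_a"): "animal",
    ("horse", "eat"): "grass",
}

def reason(subject, relation, depth=3):
    # direct lookup, plus one inference rule: inherit along is_a links
    if (subject, relation) in facts:
        return facts[(subject, relation)]
    parent = facts.get((subject, "is_a"))
    if parent is not None and depth > 0:
        return reason(parent, relation, depth - 1)
    return "I don't know"

def answer(question):
    # the "simple NLP interface": pattern-match two canned question forms
    words = question.lower().rstrip("?").split()
    if words[:3] == ["what", "does", "a"] and len(words) >= 5:
        return reason(words[3], words[4])     # "What does a horse eat?"
    if words[:3] == ["what", "is", "a"]:
        return reason(words[3], "is_a")       # "What is a horse?"
    return "I can't parse that"

print(answer("What is a horse?"))        # mammal
print(answer("What does a horse eat?"))  # grass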

Would that be AGI? Or is there yet more? Well, of course there's more:
one expects creativity, aesthetics, ethics. But we know just about nothing
about that.

This is the thing that I think is relevant to Robin Hanson's original
question.  I think we can build 1+2 in short order, and maybe 3 in a
while longer. But the result of 1+2+3 will almost surely be an
idiot-savant: knows everything about horses, and can talk about them
at length, but, like a pedantic lecturer, the droning will put you
asleep.  So is there more to AGI, and exactly how do we start laying
hands on that?

--linas






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64661358-af169f


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Benjamin Goertzel


 This is the thing that I think is relevant to Robin Hanson's original
 question.  I think we can build 1+2 in short order, and maybe 3 in a
 while longer. But the result of 1+2+3 will almost surely be an
 idiot-savant: knows everything about horses, and can talk about them
 at length, but, like a pedantic lecturer, the droning will put you
 asleep.  So is there more to AGI, and exactly how do we start laying
 hands on that?

 --linas



I think that evolutionary-learning-type methods play a big role in
creativity.

I elaborated on this quite a bit toward the end of my 1997 book From
Complexity to Creativity.

Put simply, inference is ultimately a local search method -- inference
rules, even heuristic and speculative ones, always lead you step by step
from what you know into the unknown.  This makes you, as you say, like
a pedantic lecturer.

OTOH, evolutionary algorithms can take big creative leaps.  This is one
reason why the MOSES evolutionary algorithm plays a big role in the
Novamente design (the other, related reason being that evolutionary learning
is
better than logical inference for many kinds of procedure learning).

Integrating evolution with logic is key to intelligence.  The brain does it,
I believe, via

-- implementing logic via Hebbian learning (neuron-level Hebb stuff leading
to PLN-like logic stuff on the neural-assembly level)
-- implementing evolution via Edelman-style Neural Darwinist neural map
evolution (which ultimately bottoms out in Hebbian learning too)

Novamente seeks to enable this integration via grounding both inference
and evolutionary learning in probability theory.
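
A caricature of the contrast (mine, not MOSES or PLN): hill-climbing, like
step-by-step inference, makes local moves from what it already has, while an
evolutionary crossover step can jump to a candidate far from either parent.

# Caricature (mine, not MOSES or PLN) of local search versus evolutionary leaps.
import random

def fitness(bits):                        # toy objective: count the 1s
    return sum(bits)

def hill_climb_step(bits):
    # local move: flip one bit, keep the change only if fitness improves
    i = random.randrange(len(bits))
    trial = bits[:]
    trial[i] ^= 1
    return trial if fitness(trial) > fitness(bits) else bits

def crossover_leap(a, b):
    # evolutionary move: splice two parents at a random point; the child can
    # differ from either parent in many positions at once
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

a = [random.randint(0, 1) for _ in range(16)]
b = [random.randint(0, 1) for _ in range(16)]
print(fitness(hill_climb_step(a)), fitness(crossover_leap(a, b)))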

-- Ben G



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64667888-a48aa3

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Jiri Jelinek [EMAIL PROTECTED] wrote:
  
  On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control the AGI's goal system.
  You can only control the goal system of the first iteration.
 
  ...and you can add rules for its creations (e.g. stick with the same
  goals/rules unless authorized otherwise)
  
  You can program the first AGI to program the second AGI to be friendly. 
 You
  can program the first AGI to program the second AGI to program the third
 AGI
  to be friendly.  But eventually you will get it wrong, and if not you,
 then
  somebody else, and evolutionary pressure will take over.
 
 This statement has been challenged many times.  It is based on 
 assumptions that are, at the very least, extremely questionable, and 
 according to some analyses, extremely unlikely.

I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64668559-1aacd3


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Linas Vepstas
On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
 
 Suppose that in some significant part of Novamente there is a 
 representation system that uses probability or likelihood numbers to 
 encode the strength of facts, as in [I like cats](p=0.75).  The (p=0.75) 
 is supposed to express the idea that the statement [I like cats] is in 
 some sense 75% true.
 
 Either way, we have a problem:  a fact like [I like cats](p=0.75) is 
 ungrounded because we have to interpret it.  Does it mean that I like 
 cats 75% of the time?  That I like 75% of all cats?  75% of each cat? 
 Are the cats that I like always the same ones, or is the chance of an 
 individual cat being liked by me something that changes?  Does it mean 
 that I like all cats, but only 75% as much as I like my human family, 
 which I like(p=1.0)?  And so on and so on.

Eh?

You are standing at the proverbial office water cooler, and Aneesh 
says "Wen likes cats."  On your drive home, your mind races ... does this
mean that Wen is a cat fancier?  You were planning on taking Wen out
on a date, and this tidbit of information could be useful ... 

 when you try to build the entire grounding mechanism(s) you are forced 
 to become explicit about what these numbers mean, during the process of 
 building a grounding system that you can trust to be doing its job:  you 
 cannot create a mechanism that you *know* is constructing sensible p 
 numbers and facts during all of its development *unless* you finally 
 bite the bullet and say what the p numbers really mean, in fully cashed 
 out terms.

But as a human, asking Wen out on a date, I don't really know what 
"Wen likes cats" ever really meant.  It neither prevents me from talking 
to Wen, nor from telling my best buddy that ... well, I know, for
instance, that she likes cats ...  

Lack of grounding is what makes humour funny; you can do a whole 
Pygmalion / Seinfeld episode on "she likes cats".

--linas 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64672202-2af80e


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel


 But as a human, asking Wen out on a date, I don't really know what
 "Wen likes cats" ever really meant. It neither prevents me from talking
 to Wen, or from telling my best buddy that ...well, I know, for
 instance, that she likes cats...


yes, exactly...

The NLP statement "Wen likes cats" is vague in the same way as the
Novamente or NARS relationship

EvaluationLink
    likes
    ListLink
        Wen
        cats


is vague.  The vagueness passes straight from NLP into the internal KR,
which is how it should be.

And that same vagueness may be there if the relationship is learned via
inference based on experience, rather than acquired by natural language.

I.e., if the above relationship is inferred, it may just mean that

 {the relationship between Wen and cats} shares many relationships with
other person/object relationships that have been categorized as 'liking'
before

In this case, the system can figure out that Wen likes cats without ever
actually making explicit what this means.  All it knows is that, whatever it
means, it's the same thing that was meant in other circumstances where
"liking" was used as a label.
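
A toy version of that inference (invented features, not Novamente code): label
the Wen/cats relationship 'liking' because it overlaps more with relationships
previously categorized as 'liking' than with those categorized as 'disliking'.

# Toy sketch (invented data) of labelling a new relationship by its overlap
# with relationships previously categorized as 'liking' or 'disliking'.
known = {
    "liking":    {"seeks_out", "smiles_at", "talks_about_often", "feeds"},
    "disliking": {"avoids", "complains_about", "frowns_at"},
}
wen_cats = {"seeks_out", "feeds", "talks_about_often"}   # observed features

def categorize(features):
    # Jaccard overlap with each prototype; pick the best-matching label
    scores = {label: len(features & proto) / len(features | proto)
              for label, proto in known.items()}
    return max(scores, key=scores.get), scores

print(categorize(wen_cats))   # ('liking', {'liking': 0.75, 'disliking': 0.0})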

So, vagueness can not only be important into an AI system from natural
language,
but also propagated around the AI system via inference.

This is NOT one of the trickier things about building probabilistic AGI;
it's really kind of elementary...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64674694-3ada83

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-13 Thread Benjamin Goertzel



 So, vagueness can not only be important


imported, I meant


 into an AI system from natural language,
 but also propagated around the AI system via inference.

 This is NOT one of the trickier things about building probabilistic AGI,
 it's really
 kind of elementary...

 -- Ben G




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64674943-4b25e0

Re: [agi] Human uploading

2007-11-13 Thread Bob Mottram
  It seems quite possible that what we need is a detailed map of every
  synapse, exact layout of dendritic tree structures, detailed knowledge
  of the dynamics of these things (they change rapidly) AND wiring between
  every single neuron.


An example of automatic detection of neurons and their processes from
BrainMaps data.  This is from layer 6 of the cortex of a monkey.
Green indicates the detected cell bodies.

http://farm1.static.flickr.com/137/360938913_6b7ffb9cbe_o.jpg

I think the first structural upload of an entire brain may not be far
away.  There are significant computational resources required (there's
a lot of data and multiple slices need to be carefully registered
since they distort non-uniformly) but I think the necessary compute
power and storage will be available cheaply before this decade is out.
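
For a rough sense of the data volumes involved, a back-of-the-envelope sketch
(every figure below is an assumption chosen for illustration, not a measured
number):

# Very rough data-volume estimate for imaging a whole brain at synaptic
# resolution; every figure here is an assumption chosen for illustration.
brain_volume_mm3 = 1.4e6          # ~1.4 litres
voxel_nm = 10                     # assumed isotropic voxel size, in nanometres
voxels_per_mm3 = (1e6 / voxel_nm) ** 3
bytes_per_voxel = 1               # 8-bit greyscale

total_bytes = brain_volume_mm3 * voxels_per_mm3 * bytes_per_voxel
print("raw image data: %.1e bytes (~%.0f exabytes)" % (total_bytes, total_bytes / 1e18))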

Reverse engineering the detailed structure of the brain won't give us
a mind upload, but it will be a useful first step in that direction,
greatly assisting with the development of plausible theories about how
the brain really operates.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64677853-23d7fb


Re: [agi] Human uploading

2007-11-13 Thread Benjamin Goertzel
Bob,

The two biologists I know who are deep into mind uploading
(Randal Koene and Todd Huffman) both agree with your basic assessment,
I believe...

ben g

On Nov 13, 2007 4:37 PM, Bob Mottram [EMAIL PROTECTED] wrote:

   It seems quite possible that what we need is a detailed map of every
   synapse, exact layout of dendritic tree structures, detailed knowledge
   of the dynamics of these things (they change rapidly) AND wiring
 between
   every single neuron.


 An example of automatic detection of neurons and their processes from
 BrainMaps data.  This is from layer 6 of the cortex of a monkey.
 Green indicates the detected cell bodies.

 http://farm1.static.flickr.com/137/360938913_6b7ffb9cbe_o.jpg

 I think the first structural upload of an entire brain may not be far
 away.  There are significant computational resources required (there's
 a lot of data and multiple slices need to be carefully registered
 since they distort non-uniformly) but I think the necessary compute
 power and storage will be available cheaply before this decade is out.

 Reverse engineering the detailed structure of the brain won't give us
 a mind upload, but it will be a useful first step in that direction,
 greatly assisting with the development of plausible theories about how
 the brain really operates.

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64678817-6521b4

Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Mark Waser

I don't see that the hardest part of agi is NLP i/o.


I didn't say that i/o was the hardest part of agi.  Truly understanding NLP 
is agi-complete though.  And please, get off this kick of just faking 
something up and thinking that because you can create a shallow toy example 
that holds for ten seconds that you've answered *anything*.  That's the 
*narrow ai* approach.


- Original Message - 
From: Linas Vepstas [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 13, 2007 4:01 PM
Subject: Re: [agi] What best evidence for fast AI?



On Mon, Nov 12, 2007 at 08:44:58PM -0500, Mark Waser wrote:


So perhaps the AGI question is, what is the difference between
a know-it-all mechano-librarian, and a sentient being?

I wasn't assuming a mechano-librarian.  I was assuming a human that could
(and might be trained to) do some initial translation of the question and
some final rephrasing of the answer.


I'm surprised by your answer.

I don't see that the hardest part of agi is NLP i/o. To put it into
perspective: one can fake up some trivial NLP i/o now, and with a bit of
effort, one can improve significantly on that.  Sure, it would be
child-like conversation, and the system would be incapable of learning
new idioms, expressions, etc., but I don't see that you'd need a human
to translate the question into some formal reasoning-engine language.

The hard part of NLP is being able to read complex texts, whether
Alexander Pope or Karl Marx; but a basic NLP i/o interface stapled to
a reasoning engine doesn't need to really do that, or at least not well.
Yet, these two stapled together would qualify as a mechano-librarian
for me.

To me, the hard part is still the reasoning engine itself, and the
pruning, and the tailoring of responses to the topic at hand.

So let me rephrase the question: If one had
1) A reasoning engine that could provide short yet appropriate responses
  to questions,
2) A simple NLP interface to the reasoning engine

would that be AGI?  I imagine most folks would say no, so let me throw
in:

3) System can learn new NLP idioms, so that it can eventually come to
understand those sentences and paragraphs that make Karl Marx so hard to
read.

With this enhanced reading ability, it could then presumably become a
know-it-all ultra-question-answerer.

Would that be AGI? Or is there yet more? Well, of course there's more:
one expects creativity, aesthetics, ethics. But we know just about nothing
about that.

This is the thing that I think is relevant to Robin Hanson's original
question.  I think we can build 1+2 in short order, and maybe 3 in a
while longer. But the result of 1+2+3 will almost surely be an
idiot-savant: knows everything about horses, and can talk about them
at length, but, like a pedantic lecturer, the droning will put you
asleep.  So is there more to AGI, and exactly how do we start laying
hands on that?

--linas






-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64683060-82d4be


Re: [agi] advice-level dev collaboration

2007-11-13 Thread Benjamin Johnston


Hi Jiri,

The [agi] list is billed as being for "more technical discussions 
about current AGI projects".  I joined this particular list hoping to 
find all levels of discussions of technical details of AGI construction 
and theory.


I would therefore hope that many of your questions would/should be 
on-topic for this mailing list.


Why not try this list, and then move to the private discussion model (or 
start an [agi-developer] list) if there's a backlash?


-Benjamin Johnston

Jiri Jelinek wrote:


I'm looking for a skilled coder from the AGI community who is well
familiar with Java/JEE, SWT/JFace, JWS, PHP, Ajax, MySQL, PostgreSQL -
under Windows & Linux platforms + familiar with Eclipse as well as
NetBeans IDE + who has a good sense of application security (e.g. the
Acegi stuff and/or other alternatives for handling authentication,
authorization, instance-based access control, RBAC, channel security,
human user detection capabilities etc). Having also some linguistics
related skills would be awesome. I'm NOT offering a paid job and I'm
[currently] not planning to ask the developer to write any code for
me. I'm a relatively skilled developer myself, familiar enough with the
above mentioned technology to use it. But using it is one thing and
making important architecture decisions is another. I have done lots
of coding and some architecture in the M$ world (=significant part of
my tech-background). When it comes to the vast open source dev world,
I have done coding (I'm a Java/Oracle pro now) but not much of the
architecture yet (even though I'm not really clueless). So the help
I'm looking for would be mostly architecture-advice level,
occasionally slipping into specific coding details. Nothing terribly
time-demanding (I'm also busy with lots of other stuff). Just
occasional email exchange about highly technical topics. Results of
the online research are sometimes too ambiguous and not that easy to
evaluate. If you are the all-knowing guru I'm looking for and willing
to help, please get in touch through my private gmail account.

Thanks,
Jiri Jelinek
 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64729218-e0cbdf


Re: [agi] Human uploading

2007-11-13 Thread Bryan Bishop
Ben, 

This is all very interesting work. I have heard of brain slicing before, 
as well as viral gene therapy to give our neurons a way to dump debugging 
data into the blood stream, which is not technologically possible yet, and 
the age-old concept of using MNT to signal data about our neurons, 
synapses, etc. There is also the concept of incrementally replacing the 
brain, component by component (also requiring MNT), or possibly taking out 
regions of the brain, replacing them with equivalents, and re-training 
those portions somehow; this is obviously less effective with memories. 

I have been thinking that if we do not care for *pure* mind uploading, 
we should also be focusing on how long we can keep regions of the brain 
alive on life support with MEAs or DNIs (a type of BCI) to connect them 
back to the rest of the brain or a digitized brain. If we can do this 
well enough, we can keep our minds alive long enough to see the day 
when we have more options for mind uploading.

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64756752-3c621b


Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 09:11, Richard Loosemore wrote:
 This is the whole brain emulation approach, I guess (my previous
 comments were about evolution of brains rather than neural level
 duplication).

Ah, you are right. But this too is an interesting topic. I think that 
the orders of magnitude for whole brain emulation, the connectome, and 
similar evolutionary methods are roughly the same, but I haven't done 
any calculations.

 It seems quite possible that what we need is a detailed map of every
 synapse, exact layout of dendritic tree structures, detailed
 knowledge of the dynamics of these things (they change rapidly) AND
 wiring between every single neuron.

Hm. It would seem that we could have some groups focusing on neurons, 
others on types of neurons, others on dendritic tree structures, some 
more on the abstractions of dendritic trees, etc., in an up-*and*-down 
propagation hierarchy, so that the abstract processes of the brain are 
studied just as well as the in-betweens of brain architecture.

 I think that if they did the whole project at that level of detail it
 would amount to a possibly interesting hint at some of the wiring, of
 peripheral interest to people doing work at the cognitive system
 level. But that is all.

You see no more possible value of such a project?

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64757679-f3c1ec


Re: [agi] advice-level dev collaboration

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 17:12, Benjamin Johnston wrote:
 Why not try this list, and then move to the private discussion model
 (or start an [agi-developer] list) if there's a backlash?

I'd certainly join.

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64758692-14dcfa


[agi] Relativistic irrationalism

2007-11-13 Thread Stefan Pernar
Would be great if people could poke the following with their metaphorical
sticks:

Imagine two agents A(i), each with a utility function F(i), a capability
level C(i), and no knowledge of the other agent's F and C values. Both
agents are given equal resources and are tasked with devising the most
efficient and effective way to maximize their respective utility with said
resources.

Scenario 1: Both agents have fairly similar utility functions F(1) = F(2),
levels of knowledge, cognitive complexity, and experience - in short,
capability C(1) = C(2) - and a high level of mutual trust T(1-2) = T(2-1) =
1. They will quickly agree on the way forward, pool their resources, and
execute their joint plan. Rather boring.

Scenario 2: Again we assume F(1) = F(2), however C(1) > C(2) - again T(1-2)
= T(2-1) = 1. The more capable agent will devise a plan; the less capable
agent will provide its resources and execute the plan, which A(2) trusts. A
bit more interesting.

Scenario 3: F(1) = F(2), C(1) > C(2), but this time T(1-2) = 1 and T(2-1) =
0.5, meaning the less powerful agent assumes with a probability of 50% that
A(1) is in fact a self-serving optimizer whose different plan will turn out
to be detrimental to A(2), while A(1) is certain that this is all just one
big misunderstanding. The optimal plan devised under scenario 2 will now
face opposition from A(2), although it would be in A(2)'s best interest to
actually support it with its resources to maximize F(2), while A(1) will see
A(2)'s objection as being detrimental to maximizing their shared utility
function. Fairly interesting: based on lack of trust and differences in
capability, each agent perceives the other agent's plan as irrational from
its respective point of view.
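
To see why each side ends up viewing the other's plan as irrational, here is
a minimal Python sketch of how A(2) might value A(1)'s proposal under partial
trust; the function and the numbers are hypothetical, chosen only to
illustrate the asymmetry:

# A(1) is certain the plan maximizes the shared utility function; A(2)
# weighs that against the 50% possibility that A(1) is a self-serving
# optimizer whose plan would hurt A(2).

def expected_value_for_a2(trust, value_if_shared_goal, value_if_selfish):
    # A(2) discounts the plan by its trust T(2-1) in A(1).
    return trust * value_if_shared_goal + (1.0 - trust) * value_if_selfish

value_if_shared = 10.0   # worth of the plan to A(2) if F(1) = F(2) really holds
value_if_selfish = -5.0  # cost to A(2) if A(1) only serves itself

print("A(1)'s view of the plan:", value_if_shared)                    # 10.0
print("A(2)'s view of the plan:",
      expected_value_for_a2(0.5, value_if_shared, value_if_selfish))  # 2.5

From A(2)'s seat the plan is worth only 2.5, so backing a weaker plan it
fully trusts can look rational to A(2) and irrational to A(1).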

Under scenario 3, both agents now have a variety of strategies at their
disposal:

   1. deny pooling of part or all of one's resources => "If we do not do it
   my way you can do it alone."
   2. use resources to sabotage the other agent's plan => "I must stop him
   with these crazy ideas!"
   3. deceive the other agent in order to skew how the other agent is
   deploying strategies 1 and 2
   4. spend resources to explain the plan to the other agent => "Ok - let's
   help him see the light"
   5. spend resources on self improvement to understand the other agent's
   plan better => "Let's have a closer look, the plan might not be so bad
   after all"
   6. strike a compromise to ensure a higher level of pooled resources =>
   "If we don't compromise we both lose out"

Number 1 is a given under scenario 3. Number 2 is risky, particularly as it
would cause a further reduction in trust on both sides if this strategy gets
deployed and the other party finds out; the same goes for number 3. Number 4
seems like the way to go but may not always work, particularly with large
differences in C(i) among the agents. Number 5 is a likely strategy given a
fairly high level of trust. Most likely, however, is strategy 6.

Striking a compromise builds trust in repeated encounters and thus promises
less objection, and thus a higher total payoff, the next time around.
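
To make that concrete, here is a toy Python sketch of scenario 3 played out
over repeated encounters. All of the numbers (trust increments, objection
costs, the dilution from compromising) are invented purely for illustration;
this is a sketch of the intuition, not a worked-out model:

# Toy sketch: A(1) proposes a plan each round; A(2) contributes resources
# in proportion to its trust T(2-1). Compromising dilutes the "optimal"
# plan a little but raises A(2)'s trust, so pooled payoff grows over time.

def run(rounds, compromise):
    trust_2_in_1 = 0.5                                # initial T(2-1)
    total_payoff = 0.0
    for _ in range(rounds):
        plan_quality = 0.8 if compromise else 1.0     # compromise dilutes the plan
        pooled = 1.0 + trust_2_in_1                   # A(1)'s resources + A(2)'s trusted share
        objection_cost = (1.0 - trust_2_in_1) * 0.5   # friction from A(2)'s opposition
        total_payoff += plan_quality * pooled - objection_cost
        if compromise:
            trust_2_in_1 = min(1.0, trust_2_in_1 + 0.1)  # trust building
    return total_payoff

print("no compromise:", round(run(10, False), 2))
print("compromise:   ", round(run(10, True), 2))

With these made-up parameters the compromising strategy overtakes the
"optimal but distrusted" one after a handful of rounds, which is the
intuition behind preferring strategy 6.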

Assuming the existence of an arguably optimal path leading to the maximally
possible satisfaction of a given utility function, anything else would be
irrational. Such a maximally intelligent algorithm actually exists in the
form of Hutter's universal algorithmic agent AIXI
(http://www.hutter1.net/ai/ai.htm, http://citeseer.ist.psu.edu/555887.html).
The only problem is that executing said algorithm requires infinite
resources and is thus rather impractical, as every decision will always have
to be made under resource constraints.
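
For reference, AIXI's action choice can be written roughly as the following
expectimax expression (a schematic LaTeX rendering from memory of Hutter's
formulation; consult the linked paper for the exact definition):

\dot{a}_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \left( r_k + \cdots + r_m \right)
  \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum ranges over all programs q consistent with the interaction
history, and the maximization extends over the whole horizon up to m, which
is exactly where the infinite resource requirement comes from.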

Consequently, every decision will be irrational to the degree that it
differs from the unknowable optimal path that AIXI would produce. Throw in a
lack of trust and varying levels of capability among the agents, and all
agents will always have to adapt their plans and strike a compromise based
on the other agent's relativistic irrationality, independent of their
capabilities, in order to minimize the other agent's objection cost and thus
maximize their respective utility functions.
-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=64805839-967aa4