Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

Huh, that doesn't look right when I received it back. Here's a rewritten
sentence:

Whatever the size of that group, do you claim that _all_ of these learning 
universalists would be capable of coming up with Einstein-class (or take your 
pick) ideas if they had been in his shoes during his lifetime?

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

Small correction:

Brian Atkins wrote:


So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group 
roughly? The majority of humans? (IQ 100 and above) Whatever the size of 
that group, do you claim that any of these learning universalists would 

The "any" there should be "all" I suppose.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread Brian Atkins

J. Storrs Hall wrote:

On Monday 04 December 2006 07:55, Brian Atkins wrote:

Also, what is really the difference between an Einstein/Feynman brain, and 
someone with an 80 IQ?


I think there's very likely a significant structural difference and the IQ80 
one is *not* learning universal in my sense.  


So there is some group of humans you would say don't pass your learning 
universal test. Now, of the group that does pass, how big is that group roughly? 
The majority of humans? (IQ 100 and above) Whatever the size of that group, do 
you claim that any of these learning universalists would be capable of coming up 
with Einstein-class (or take your pick) ideas if they had been in his shoes 
during his lifetime? In other words, if they had access to his experiences, 
education, etc.


I would say no. I'm not saying that Einstein is the sole human who could have
come up with his ideas, but I am saying that it's unlikely that someone with an
IQ of 110 would be able to do so even if given every help. I would say there are
yet more differences in human minds beyond your learning-universal idea which
separate us, and which make the difference, for example, between an IQ of 110
and one of 140.




For instance, let's say I want to design a new microprocessor. As part of that
process I may need to design a multitude of different circuits, test them, and
then integrate them. To humans, this is not a task that can run on autopilot.
What if though I find doing this job kinda boring after a while and wish I could
split off a largish chunk of my cognitive resources to chew away on it at a
somewhat slower speed unconsciously in the background and get the work done
while the conscious part of my mind surfs the web? Humans can't do this, but an
AGI likely could.


At any given level, a mind will have some tasks that require all its attention
and resources. If the task is simple enough that it can be done with a fraction
of the resources (e.g. driving), we learn to turn it into a habit / skill and do
it more or less subconsciously. An AI might do that faster, but we're assuming
it could do lots of things faster. On the other hand, it would still have to pay
attention to tasks that require all its resources.


This isn't completely addressing my particular scenario, where let's say we have
a roughly human-level AGI that has to work on a semi-repetitive design task, the
kind of thing a human is forced to stare at a monitor for, yet which doesn't
take their absolute maximum brainpower. The AGI should theoretically be able to
divide its resources in such a way that the design task can be done
unconsciously in the background, while it uses whatever resources remain to do
other stuff at the same time.


The point being that although this task takes only part of the human's maximum
abilities, by their nature humans can't split it off, automate it, or otherwise
escape letting some brain cycles go to "waste". The human mind is too monolithic
in such cases, which go beyond simple habits yet fall below maximum output.
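
To make the split concrete, here's a toy sketch (Python, with stand-in functions
I'm inventing purely for illustration; threads standing in for chunks of
cognitive resources) of what a mind that could do this would be doing:

    import threading, queue, time

    results = queue.Queue()

    def simulate(circuit):       # stand-in for one boring design check
        time.sleep(0.1)
        return circuit, "ok"

    def design_task(circuits):   # the semi-repetitive job, run "unconsciously"
        for c in circuits:
            results.put(simulate(c))

    circuits = ["adder-%d" % i for i in range(10)]

    # Split off a chunk of resources to chew on the task in the background...
    worker = threading.Thread(target=design_task, args=(circuits,))
    worker.start()

    # ...while the "conscious" foreground is free to do other stuff.
    print("surfing the web while the circuits get checked")
    worker.join()
    print("%d circuits checked in the background" % results.qsize())

A human engineer has no equivalent of threading.Thread for the semi-repetitive
parts of his own cognition; that's the monolithic-mind point.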




Again, aiming off the bullseye. Attempting to explain to someone about the
particular clouds you saw yesterday, the particular colors of the sunrise, etc.,
you can of course not transfer the full information to them. A second example
would be with skills, which could easily be shared among AGIs but cannot be
shared between humans.


Actually the ability to copy skills is the key item, imho, that separates humans
from the previous smart animals. It made us a memetic substrate. In terms of the
animal kingdom, we do it very, very well. I'm sure that AIs will be able to do
so as well, but it's probably not quite as simple as copying a subroutine
library from one computer to another.


The reason is learning. If you keep the simple-copy semantics, no learning 
happens when skills are transferred. In humans, a learning step is forced, 
contributing to the memetic evolution of the skill. 


IMO, AGIs plausibly could transfer full, complete skills, including whatever
learning is part of them. It's all computer bits sitting somewhere, and those
bits should be transferable and then integrable on the other end.


If so, this is far more powerful than, and qualitatively different from, a
newbie tennis player watching a pro and trying to learn how to serve that well
over a period of years, or a math student trying to learn calculus. Even aside
from the dramatic difference in time scale, humans can never transfer their
skills fully and exactly, in anything like a lossless fashion.
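
In toy form, under the assumption (mine, for illustration only) that a skill
bottoms out in some serializable bundle of parameters; Josh's integration caveat
would apply on the receiving end:

    import pickle

    class Agent:
        def __init__(self):
            self.skills = {}                        # name -> learned parameters

        def export_skill(self, name):
            return pickle.dumps(self.skills[name])  # a skill really is just bits

        def import_skill(self, name, blob):
            self.skills[name] = pickle.loads(blob)  # lossless, near-instant

    pro, newbie = Agent(), Agent()
    pro.skills["tennis_serve"] = {"toss_height": 2.3, "racket_angle": 0.42}

    # Years of practice for the human newbie; one copy for the AGI.
    newbie.import_skill("tennis_serve", pro.export_skill("tennis_serve"))
    assert newbie.skills["tennis_serve"] == pro.skills["tennis_serve"]

Whatever learning went into producing those parameters rides along with them,
which is exactly what the tennis student can never get just from watching the
pro.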




Currently all I see is a very large and rapidly growing, very insecure network
of rapidly improving computers out there, ripe for the picking by the first
smart enough AGI.


A major architectural feature of both the brain and existing supercomputers is
that the majority of the structure/cost is in the communications fabric, not
the processors themselves. A botnet using residential internet connections
would be immensely hobbled. (It would be different, of course, if it took over
someone's Blue Gene...)

Re: [agi] The Singularity

2006-12-06 Thread John Scanlon
Hank - do you have any theories or AGI designs?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin

My aim is not a General AI; currently it's MJZT.  General AI just
seems to be a side effect .u'i(amusement). Nodes in a JZT communicate
through language (in whatever form it may take), and automation of that
communication occurs. After a certain point a typical JZT "automation"
would be able to have a conversation with an ordinary human, and the
human will have trouble seeing the JZT as an inferior entity (revised
Turing test).



I'd like to note that, as a believer in Determinism, I see no real
difference between the "automation" and the "real person", so
technically everything is an automation -- including yourself and all
those around you.

pe'i(I opine) that this universe does not exist independently and so is
interconnected with other universes. Meaning we may not have to suffer
the fate of our universe, and could live on even after it has ended its
life cycle, by uploading ourselves to outside universes. This will only
be achievable in a post-Singularity world, as before then we wouldn't
have the technological capacity to do so.

koJMIveko (be alive by your own standards)

--
ta'o(by the way)  We With You Network at: http://lokiworld.org .i(and)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-06 Thread J. Storrs Hall
I'm on the road, so I'll have to give short shrift to this, but I'll try to 
hit a few high points:

On Monday 04 December 2006 07:55, Brian Atkins wrote:

> Putting aside the speed differential, which you accept but dismiss as
> unimportant for RSI, isn't there a bigger issue you're skipping regarding
> the other differences between an Opteron-level PC and an 8080-era box? For
> example, there are large differences in the addressable memory amounts. ...
> Does it multiply with the speed differential?

I don't think the speed differential is unimportant -- but you need to separate
it out to make certain kinds of theoretical analysis, the same as you need to
when considering Turing completeness in a computer.

ANY computer has a finite amount of memory and is thus an FSA, not a Turing
machine. To treat it as Turing-complete you have to assume that it will read
and write to an (unbounded) outboard memory. Thus the speed difference between,
say, an 8080 and an Opteron will be a constant factor.

> Also, what is really the difference between an Einstein/Feynman brain, and 
> someone with an 80 IQ?

I think there's very likely a significant structural difference and the IQ80 
one is *not* learning universal in my sense.  

> For instance, let's say I want to design a new microprocessor. As part of
> that process I may need to design a multitude of different circuits, test
> them, and then integrate them. To humans, this is not a task that can run on
> autopilot. What if though I find doing this job kinda boring after a while
> and wish I could split off a largish chunk of my cognitive resources to chew
> away on it at a somewhat slower speed unconsciously in the background and
> get the work done while the conscious part of my mind surfs the web? Humans
> can't do this, but an AGI likely could.

At any given level, a mind will have some tasks that require all its attention
and resources. If the task is simple enough that it can be done with a fraction
of the resources (e.g. driving), we learn to turn it into a habit / skill and do
it more or less subconsciously. An AI might do that faster, but we're assuming
it could do lots of things faster. On the other hand, it would still have to pay
attention to tasks that require all its resources.

> Again, aiming off the bullseye. Attempting to explain to someone about the
> particular clouds you saw yesterday, the particular colors of the sunrise,
> etc., you can of course not transfer the full information to them. A second
> example would be with skills, which could easily be shared among AGIs but
> cannot be shared between humans.

Actually the ability to copy skills is the key item, imho, that separates humans
from the previous smart animals. It made us a memetic substrate. In terms of the
animal kingdom, we do it very, very well. I'm sure that AIs will be able to do
so as well, but it's probably not quite as simple as copying a subroutine
library from one computer to another.

The reason is learning. If you keep the simple-copy semantics, no learning 
happens when skills are transferred. In humans, a learning step is forced, 
contributing to the memetic evolution of the skill. 

> > Compare that with chess, where the learned chess module of a human is
> > about equal to a supercomputer with specialized hardware, but where the
> > problem is simple enough that we know how to program the supercomputer.
> 
> I'm not really sure what this segment of text is getting at. Are you
> claiming that humans can always laboriously create mind modules that will
> allow them to perform feats equal to an AGI?

No, of course not, when the AI is running on vastly superior hardware. All I
was saying is that we seem to be able to compile our skills into a form that is
about as efficient, on the hardware we do have, as the innate skills are
(modulo cases where the actual neurons are specialized).

> Currently all I see is a very large and rapidly growing, very insecure
> network of rapidly improving computers out there, ripe for the picking by
> the first smart enough AGI.

A major architectural feature of both the brain and existing supercomputers is 
that the majority of the structure/cost is in the communications fabric, not 
the processors themselves. A botnet using residential internet connections 
would be immensely hobbled. (It would be different, of course, if it took 
over someone's Blue Gene...)
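
Back-of-envelope, with round numbers I'm assuming rather than quoting (roughly:
six torus links per Blue Gene/L node at about 1.4 Gb/s each, versus a
residential uplink of a few hundred kb/s with tens of milliseconds of latency):

    # Assumed, round 2006-ish numbers -- only the orders of magnitude matter.
    bluegene_node_bps  = 6 * 1.4e9   # six torus links per Blue Gene/L node
    residential_up_bps = 384e3       # typical cable/DSL uplink
    bluegene_lat_s     = 5e-6        # on-fabric message latency
    residential_lat_s  = 50e-3       # WAN round trip

    print("bandwidth handicap: ~%.0fx" % (bluegene_node_bps / residential_up_bps))
    print("latency handicap:   ~%.0fx" % (residential_lat_s / bluegene_lat_s))
    # roughly 20,000x less per-node bandwidth and 10,000x more latency

Four orders of magnitude on the fabric is what "immensely hobbled" means here.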

> What comes after that I think is unprovable currently. We do know of course
> that relatively small tweaks in brain design led from apes to us, a rather
> large difference in capabilities that did not apparently require too many
> more atoms in the skull. Your handwaving regarding universality aside, this
> also worries me.

There we will just have to disagree. I see one quantum jump in intelligence
from Neanderthals to us. Existing AI programs are on the Neanderthal side of
it. We AGIers are hoping to figure out what the Good Trick is an

Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:

Your message appeared at first to be rambling and incoherent, but I see that
that's probably because English is a second language for you.  But that's
not a problem if your ideas are solid.


English is my second language. My first language is Russian, but I've
lived in Canada for just over 13 years -- I don't speak Russian on a
day-to-day basis.  Lojban I have only known about since last spring.
Currently I use Lojban on a day-to-day basis. Perhaps Lojban is
changing the way in which I think, and the changes are expressing
themselves in my English. I admit I like using attitudinals
.ui(happiness).


And yes, language is an essential part of any intelligent system.  But there
is another part you haven't mentioned -- the actual intelligence that
can understand and manipulate language.  Intelligence is not just parsing
and logic.  It is imagination and visualization that relates words to their
referents in the real world.

What is your idea of how this imagination and visualization that relates
language to phenomena in the real world can be engineered in software


If you mean "how will pattern recognition work in the
visual/auditory/sense system of the AI":
- I don't need cameras for keyboard input, OCR, or voice recognition
can handle other forms of language input.
- Cameras and detecting "real" things isn't really my goal. I just
want to increase productivity through automation of the things people
do.
- There are lots of people interested in graphics and pattern
recognition. They can always extend the system. The design goal is
really to make an easily extendable sustainable scalable complex
computer/network that takes care of itself.

If you mean something else, you will need to elaborate for me to reply,
as I'm having trouble understanding what it might mean.


in such a way that the singularity will be brought about?

I believe in hard determinism, implying that anything you or I do is
leading to the Singularity -- if it is meant to be.

The point at which it should start growing very fast is shortly after
there are over 150 developers/users on a "social augmentation
network".

MULno JIKca seZENbaTCAna
Complete Social Augmentation Network: sa'u(simply speaking), a network
that automates social activities such as fact/state exchange, so that
creative endeavours can be the sole occupation of the users for
entertainment (all/most other processes having been automated).
Mind-altering tools are definitely going to be very popular in such a
world.

My aim is not a General AI; currently it's MJZT.  General AI just
seems to be a side effect .u'i(amusement). Nodes in a JZT communicate
through language (in whatever form it may take), and automation of that
communication occurs. After a certain point a typical JZT "automation"
would be able to have a conversation with an ordinary human, and the
human will have trouble seeing the JZT as an inferior entity (revised
Turing test).




Andrii (lOkadin) Zvorygin wrote:

> On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:
>>
>> Alright, I have to say this.
>>
>> I don't believe that the singularity is near, or that it will even occur.
>> I am working very hard at developing real artificial general intelligence,
>> but from what I know, it will not come quickly.  It will be slow and
>> incremental.  The idea that very soon we can create a system that can
>> understand its own code and start programming itself is ludicrous.
>>
>> Any arguments?
>>  
>
> Have you read Ray Kurzweil? He doesn't just make things up. There are
> plenty of reasons to believe in the Singularity.  Other than disaster
> theories there really is no negative evidence I've ever come across.
>
> "real artificial intelligence"
>
> .u'i(amusement) A little bit of an oxymoron there.  It also seems to
> imply there is "fake artificial intelligence" .u'e(wonder). Of course
> if you could define "fake artificial intelligence" then you could
> define what "real artificial intelligence" is.
>
> Once you define what "real artificial intelligence" means, or at least
> what symptoms you would be willing to settle for (Turing test).
>
> If it's the Turing test you're after as am I, then language is the
> key(I like stating the obvious please humour me).
>
> Once we have established the goal -- a discussion between yourself and the
> computer in the language of choice.
>
> We look at the options that we have available: natural languages and
> artificial languages. Natural languages tend to be pretty ambiguous:
> hard to parse, hard to code for -- you can do it if you are a
> masochist, I don't mind .ui(happiness).
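
(To show the kind of ambiguity I meant there, here is a toy CYK parse counter
-- my own illustration, nothing from the Lojban toolchain -- over a six-rule
grammar. The classic "I saw the man with the telescope" already has two
distinct parse trees, and real coverage grammars make this explode:

    from collections import defaultdict

    # A tiny grammar in Chomsky normal form: binary rules A -> B C,
    # plus a lexicon of A -> word rules.
    binary = [("S","NP","VP"), ("VP","V","NP"), ("VP","VP","PP"),
              ("NP","NP","PP"), ("NP","Det","N"), ("PP","P","NP")]
    lexicon = {"I": ["NP"], "saw": ["V"], "the": ["Det"],
               "man": ["N"], "telescope": ["N"], "with": ["P"]}

    def count_parses(words, root="S"):
        n = len(words)
        # chart[i][j][A] = number of ways category A derives words[i:j]
        chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            for a in lexicon[w]:
                chart[i][i + 1][a] += 1
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):
                    for a, b, c in binary:
                        chart[i][j][a] += chart[i][k][b] * chart[k][j][c]
        return chart[0][n][root]

    print(count_parses("I saw the man with the telescope".split()))  # -> 2

In Lojban the attachment would be marked explicitly, so a parser gets exactly
one tree.)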
>
> Many/Most artificial languages suffer from similar if not the same kinds
> of ambiguity, though because they are designed, they by definition can
> only have as many exceptions as were designed in.
>
> There is a promising subset of artificial languages: logical
> languages.  Logical languages adhe