RE: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-29 Thread John G. Rose
Mike,

 

To put it into your own words here, mathematics is a delineation out of the
infinitely diversifiable, the same zone where design comes from. And
design needs a medium, the medium can be the symbolic expressions and
language of mathematics. And so conveniently here the mathematics is
expressible in a software language, computer system and database.

 

Don't forget, the designer in all of us needs a medium to express and
communicate, if not it remains in a void. A designer emits design, and in
this case, AGI, the design is the/a designer. Sounds kind of hokey but true.
there are other narrow cases where this is true, but not in the grand way
AGI is. IOW, in a way, AGI will design itself, it's coming out of the
infinitely diversifiable and maintaining a communication with it as a
delineation within itself. It's self-organizingly injecting itself into this
chaotic world via our intended or unintended manifestations.

 

John  

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 



 

JAR: Define infinitely diversifiable.

 

I just did more or less.  A form/shape can be said to be delineated
(although I'm open to alternative terms, because delineation needn't
consist of using lines as such - as in my examples, it could involve using
amorphous masses, or pseudo-lines). 

 

Diversification - in this case creating new kinds of font - therefore
involves using 1) new principles of delineation  -  the kinds of
lines/visual elements used are radically changed, and 2) new principles of
**arrangement** of the visual elements  - for example, various fonts there
can be said to conform to an A arrangement, but one or more shifted that
to a new triangle arrangement without any cross-bar in the middle; using
double/triple lines could be classified as either 1) or 2) I guess. An
innovative (although pos. PITA) arrangement would be to have elements that
move/are mobile. And delineation involves 3) introducing new kinds of
elements *in addition* to those already there or deleting existing kinds of
elements.

 

Diversifiable is merely recognizing the realities of the fields of art and
design, which is that they will - and a creative algorithm therefore would
have to be able to -  infinitely/endlessly transform the constitution and
principles of delineation and depiction of any and all forms.

 

I think part of the problem here is that you guys think like mathematicians
and not designers - you see the world in terms of more or less rigidly
structured abstract forms ( that allows for all geometric morphisms) - but
a designer has to think consciously or unconsciously much more fluidly in
terms of  kaleidomorphic, freely structured and fluidly morphable abstract
forms. He sees abstract forms as infinitely diversifiable. You don't.

 

To do AGI, I'm suggesting - in fact, I'm absolutely sure - you will have to
start thinking in addition like designers. If you have contempt for design,
as most people here seem to do, it is actually you who deserve contempt.
God was a designer long before He took up maths.

 

 

From: J. Andrew Rogers mailto:jar.mail...@gmail.com  

Sent: Wednesday, August 25, 2010 5:23 PM

To: AGI mailto:a...@listbox.com  

Subject: Re: [agi] Re: Compressed Cross-Indexed Concepts

 

 

On Wed, Aug 25, 2010 at 9:09 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:

 

You do understand BTW that your creative algorithm must be able to produce
not just a limited collection of  shapes [either squares or A's]  but an
infinitely diversifiable** collection.

 

 

Define infinitely diversifiable.

 

There are whole fields of computer science dedicated to small applications
that routinely generate effectively unbounded diversity in the strongest
possible sense. 


-- 

J. Andrew Rogers

 


AGI |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ Description: Image removed
by sender.|  https://www.listbox.com/member/?; Modify Your Subscription 

 http://www.listbox.com/ Description: Image removed by sender.


AGI |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ Description: Image removed
by sender.|
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com/ Description: Image removed by sender.

 




---
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
~WRD000.jpg

[agi] Natural Hyjacked Behavioral Control

2010-08-19 Thread John G. Rose
I thought this was interesting when looked at in relation to evolution and a
parasitic intelligence - 

 

http://www.guardian.co.uk/science/2010/aug/18/zombie-carpenter-ant-fungus




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Compressed Cross-Indexed Concepts

2010-08-19 Thread John G. Rose
An agent can only flip so many bits per second. If it gets stuck in a
computational conundrum it will waste energy that should be used for
survival purposes and the likelihood for agent death increases. 

 

Avoidance behavior for impossible computation is enforced.

 

Mathematics is a type of database for computational energy storage. All of
us multi-agent intelligences, mainly mathematicians, contribute to it over
time.

 

How long did it take to invent the wheel, but once the pattern is known, it
takes just a few bits to store.

 

That's one obvious method of the leveraging, but this could be, and is, used
all over the place. 

 

John

 

From: Jim Bromer [mailto:jimbro...@gmail.com] 



John

How would a mathematical system that is able to leverage for unnecessary or
impossible computation work exactly.  What do you mean by this?  And how
would this work to produce better integration of concepts and better
interpretation of concepts? 

 

On Fri, Aug 13, 2010 at 4:25 PM, John G. Rose johnr...@polyplexic.com
wrote:



 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]


 On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 The ideological would still need be expressed mathematically.

 I don't understand this.  Computers can represent related data objects
that may
 be best considered without using mathematical terms (or with only
incidental
 mathematical functions related to things like the numbers of objects.)


The difference between data and code, or math and data, sometimes need not
be as dichotomous.



 I said:  I think the more important question is how does a
general concept
 be interpreted across a range of different kinds of ideas.  Actually this
is not so
 difficult, but what I am getting at is how are sophisticated
 conceptual  interrelations integrated and resolved?

 John said: Depends on the structure. We would want to build it such that
this
 happens at various levels or the various multidimensional densities. But
at the
 same time complex state is preserved until proven benefits show
themselves.

 Your use of the term 'densities' suggests that you are thinking about the
kinds of
 statistical relations that have been talked about a number of times in
this
 group.   The whole problem I have with statistical models is that they
don't
 typically represent the modelling variations that could be and would need
to be
 encoded into the ideas that are being represented.  For example a Bayesian
 Network does imply that a resulting evaluation would subsequently be
encoded
 into the network evaluation process, but only in a limited manner.  It
doesn't for
 example show how an idea could change the model, even though that would be
 easy to imagine.
 Jim Bromer


I also have some issues with heavily based statistical models. When I was
referring to densities I was really meaning an interconnectional
multidimensionality in the multigraph/hypergraph intelligence network, IOW a
partly combinatorial edge of chaos. There is a combination of state and
computational potential energy that an incoming idea, represented as a
data/math combo, would result in various partly self-organizational (SOM)
changes depending on how the key - the idea - effects computational energy
potential. And this is balanced against K-complexity related local extrema.

For the statistical mechanisms I would use for more of the narrow AI stuff
that is needed and also for situations that you can't come up with something
more concrete/discrete.


John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?
https://www.listbox.com/member/?; 
Powered by Listbox: http://www.listbox.com http://www.listbox.com/ 

 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Nao Nao

2010-08-13 Thread John G. Rose
I suppose that part of the work that it does is making people feel good
and being a neat conversation piece.

 

Interoperability and communications protocols can facilitate the path to
AGI. Just like the many protocols used on the internet. I haven't looked at
any for robotics specifically though there definitely are some. But having
worked with many myself I am familiar with limitations, shortcomings and
issues. Protocols is where it's at when making diverse systems work together
and having good protocols initially can save vast amounts of engineering
work. It's bang for the buck in a big way.

 

John


From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 9:02 AM
To: agi
Subject: Re: [agi] Nao Nao

 

By not made to perform work, you mean that it is not sturdy enough? Are
any half-way AGI robots made to perform work, vs production line robots? (I
think the idea of performing useful work should be a goal).

 

The protocol is obviously a good idea, but you're not suggesting it per se
will lead to AGI?

 

From: John G. Rose mailto:johnr...@polyplexic.com  

Sent: Thursday, August 12, 2010 3:17 PM

To: agi mailto:agi@v2.listbox.com  

Subject: RE: [agi] Nao Nao

 

Typically the demo is some of the best that it can do. It looks like the
robot is a mass produced model that has some really basic handling
capabilities, not that it is made to perform work. It could still have
relatively advanced microprocessor and networking system, IOW parts of the
brain could run on centralized servers. I don't think they did that BUT it
could.

 

But it looks like one Nao can talk to another Nao. What's needed here is a
standardized robot communication protocol. So a Nao could talk to a vacuum
cleaner or a video cam or any other device that supports the protocol.
Companies may resist this at first as they want to grab market share and
don't understand the benefit.

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 4:56 AM
To: agi
Subject: Re: [agi] Nao Nao

 

John,

 

Any more detailed thoughts about its precise handling capabilities? Did it,
first, not pick up the duck independently,  (without human assistance)? If
it did,  what do you think would be the range of its object handling?  (I
had an immediate question about all this - have asked the site for further
clarificiation - but nothing yet).

 

From: John G. Rose mailto:johnr...@polyplexic.com  

Sent: Thursday, August 12, 2010 5:46 AM

To: agi mailto:agi@v2.listbox.com  

Subject: RE: [agi] Nao Nao

 

I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 

Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?; Modify Your Subscription 

 http://www.listbox.com 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?; Modify Your Subscription

 http://www.listbox.com 

 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?; Modify Your Subscription 

 http://www.listbox.com 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread John G. Rose


 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 The ideological would still need be expressed mathematically.
 
 I don't understand this.  Computers can represent related data objects
that may
 be best considered without using mathematical terms (or with only
incidental
 mathematical functions related to things like the numbers of objects.)
 

The difference between data and code, or math and data, sometimes need not
be as dichotomous. 

 
 I said:  I think the more important question is how does a
general concept
 be interpreted across a range of different kinds of ideas.  Actually this
is not so
 difficult, but what I am getting at is how are sophisticated
 conceptual  interrelations integrated and resolved?
 
 John said: Depends on the structure. We would want to build it such that
this
 happens at various levels or the various multidimensional densities. But
at the
 same time complex state is preserved until proven benefits show
themselves.
 
 Your use of the term 'densities' suggests that you are thinking about the
kinds of
 statistical relations that have been talked about a number of times in
this
 group.   The whole problem I have with statistical models is that they
don't
 typically represent the modelling variations that could be and would need
to be
 encoded into the ideas that are being represented.  For example a Bayesian
 Network does imply that a resulting evaluation would subsequently be
encoded
 into the network evaluation process, but only in a limited manner.  It
doesn't for
 example show how an idea could change the model, even though that would be
 easy to imagine.
 Jim Bromer
 

I also have some issues with heavily based statistical models. When I was
referring to densities I was really meaning an interconnectional
multidimensionality in the multigraph/hypergraph intelligence network, IOW a
partly combinatorial edge of chaos. There is a combination of state and
computational potential energy that an incoming idea, represented as a
data/math combo, would result in various partly self-organizational (SOM)
changes depending on how the key - the idea - effects computational energy
potential. And this is balanced against K-complexity related local extrema. 

For the statistical mechanisms I would use for more of the narrow AI stuff
that is needed and also for situations that you can't come up with something
more concrete/discrete.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Nao Nao

2010-08-12 Thread John G. Rose
Typically the demo is some of the best that it can do. It looks like the
robot is a mass produced model that has some really basic handling
capabilities, not that it is made to perform work. It could still have
relatively advanced microprocessor and networking system, IOW parts of the
brain could run on centralized servers. I don't think they did that BUT it
could.

 

But it looks like one Nao can talk to another Nao. What's needed here is a
standardized robot communication protocol. So a Nao could talk to a vacuum
cleaner or a video cam or any other device that supports the protocol.
Companies may resist this at first as they want to grab market share and
don't understand the benefit.

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 4:56 AM
To: agi
Subject: Re: [agi] Nao Nao

 

John,

 

Any more detailed thoughts about its precise handling capabilities? Did it,
first, not pick up the duck independently,  (without human assistance)? If
it did,  what do you think would be the range of its object handling?  (I
had an immediate question about all this - have asked the site for further
clarificiation - but nothing yet).

 

From: John G. Rose mailto:johnr...@polyplexic.com  

Sent: Thursday, August 12, 2010 5:46 AM

To: agi mailto:agi@v2.listbox.com  

Subject: RE: [agi] Nao Nao

 

I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 

Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?; Modify Your Subscription 

 http://www.listbox.com 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
Well both. Though much of the control could be remote depending on
bandwidth. 

 

Also, one robot could benefit from the eyes of many as they would all be
internetworked to a degree.

 

John

 

From: Ian Parker [mailto:ianpark...@gmail.com] 



Your remarks about WiFi echo my own view. Should a robot rely on an external
connection (WiFi) or should it have complex processing itself.

 

In general we try to keep real time response information local, although
local my be viewed in terms of the c the speed of light. If a PC is 150m
away from a robot this is a 300m double journey which will take a
microsecond. To access the Web for a program will, of course, take
considerably longer.

 

A μ sec is nothing even when we are considering time critical functions like
balance. However for balance it might be a good idea to either have the
robot balancing, or else to have a card inserted into the PC.

 

This is one topic for which I have not been able to have a satisfactory
discussion or answer. People who build robots tend to think in terms of
having the processing power on the robot. This I believe is wrong.

 

 

  - Ian Parker

On 10 August 2010 00:06, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 

 

An unusually sophisticated ( somewhat expensive) promotional robot vid:

 

 
http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expre
sses-and-detects-emotions.html
http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expres
ses-and-detects-emotions.html


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ | Modify Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Compressed Cross-Indexed Concepts

2010-08-11 Thread John G. Rose
 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 
 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
has
 to involve more ideological relationships than mathematical.  

The ideological would still need be expressed mathematically.

 For instance
 you can apply a idea to your own thinking in a such a way that you are
 capable of (gradually) changing how you think about something.  This means
 that an idea can be a compression of some greater change in your own
 programming.  

Mmm yes or like a key.

 While the idea in this example would be associated with a
 fairly strong notion of meaning, since you cannot accurately understand
the
 full consequences of the change it would be somewhat vague at first.  (It
 could be a very precise idea capable of having strong effect, but the
details of
 those effects would not be known until the change had progressed.)
 

Yes. It would need to have receptors, an affinity something like that, or
somehow enable an efficiency change.

 I think the more important question is how does a general concept be
 interpreted across a range of different kinds of ideas.  Actually this is
not so
 difficult, but what I am getting at is how are sophisticated conceptual
 interrelations integrated and resolved?
 Jim

Depends on the structure. We would want to build it such that this happens
at various levels or the various multidimensional densities. But at the same
time complex state is preserved until proven benefits show themselves.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 



Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
Actually this is quite critical.

 

Defining a chair - which would agree with each instance of a chair in the
supplied image - is the way a chair should be defined and is the way the
mind processes it.

 

It can be defined mathematically in many ways. There is a particular one I
would go for though...

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Sunday, August 08, 2010 7:28 AM
To: agi
Subject: Re: [agi] How To Create General AI Draft2

 

You're waffling.

 

You say there's a pattern for chair - DRAW IT. Attached should help you.

 

Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.

 

You haven't identified any basic visual units  - you don't have any. Do you?
Yes/no. 

 

No. That's not funny, that's a waste.. And woolly and imprecise through
and through.

 

 

 

From: David Jones mailto:davidher...@gmail.com  

Sent: Sunday, August 08, 2010 1:59 PM

To: agi mailto:agi@v2.listbox.com  

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to propose are your example
problems that *you* don't understand how to solve. Just because *you* cannot
solve them, doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to a chair means that it is a
recognizable pattern. LOL. That fact that you don't realize this is quite
funny. 

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:

Dave:No... it is equivalent to saying that the whole world can be modeled as
if everything was made up of matter

 

And matter is... ?  Huh?

 

You clearly don't realise that your thinking is seriously woolly - and you
will pay a heavy price in lost time.

 

What are your basic world/visual-world analytic units  wh. you are
claiming to exist?  

 

You thought - perhaps think still - that *concepts* wh. are pretty
fundamental intellectual units of analysis at a certain level, could be
expressed as, or indeed, were patterns. IOW there's a fundamental pattern
for chair or table. Absolute nonsense. And a radical failure to
understand the basic nature of concepts which is that they are *freeform*
schemas, incapable of being expressed either as patterns or programs.

 

You had merely assumed that concepts could be expressed as patterns,but had
never seriously, visually analysed it. Similarly you are merely assuming
that the world can be analysed into some kind of visual units - but you
haven't actually done the analysis, have you? You don't have any of these
basic units to hand, do you? If you do, I suggest, reply instantly, naming a
few. You won't be able to do it. They don't exist.

 

Your whole approach to AGI is based on variations of what we can call
fundamental analysis - and it's wrong. God/Evolution hasn't built the
world with any kind of geometric, or other consistent, bricks. He/It is a
freeform designer. You have to start thinking outside the
box/brick/fundamental unit.

 

From: David Jones mailto:davidher...@gmail.com  

Sent: Sunday, August 08, 2010 5:12 AM

To: agi mailto:agi@v2.listbox.com  

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed. 

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk
wrote:

1) You don't define the difference between narrow AI and AGI - or make clear
why your approach is one and not the other


I removed this because my audience is for AI researchers... this is AGI 101.
I think it's clear that my design defines general as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.
 

 

2) Learning about the world won't cut it -  vast nos. of progs. claim they
can learn about the world - what's the difference between narrow AI and AGI
learning?


The difference is in what you can or can't learn about and what tasks you
can or can't perform. If the AI is able to receive input about anything it
needs to know about in the same formats that it knows how to understand and
analyze, it can reason about anything it needs to.
 

 

3) Breaking things down into generic components allows us to learn about
and handle the vast majority of things we want to learn about. This is what
makes it general!

 

Wild assumption, unproven or at all demonstrated and untrue.


You are only right that I haven't demonstrated it. I will address this in
the next paper and continue adding details over the next few drafts.

As a simple argument against your counter argument... 

If that were true that we could not understand the world using a limited 

RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 The question for me is not what the
 smallest pieces of visual information necessary to represent the range
 and diversity of kinds of objects are, but how would these diverse
examples
 be woven into highly compressed and heavily cross-indexed pieces of
 knowledge that could be accessed quickly and reliably, especially for the
 most common examples that the person is familiar with.

This is a big part of it and for me the most exciting. And I don't think
that this subsystem would take up millions of lines of code either. It's
just that it is a *very* sophisticated and dynamic mathematical structure
IMO.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
Hmm... Shall we coin this the Tinter Contrarian Pattern? 

 

Or anti-pattern :)

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 
I agree John that this is a useful exercise. This would be a good discussion
if mike would ever admit that I might be right and he might be wrong. I'm
not sure that will ever happen though. :) First he says I can't define a
pattern that works. Then, when I do, he says the pattern is no good because
it isn't physical. Lol. If he would ever admit that I might have gotten it
right, the discussion would be a good one. Instead, he hugs his preconceived
notions no matter how good my arguments are and finds yet another reason,
any reason will do, to say I'm still wrong. 

On Aug 9, 2010 2:18 AM, John G. Rose johnr...@polyplexic.com wrote:

Actually this is quite critical.

 

Defining a chair - which would agree with each instance of a chair in the
supplied image - is the way a chair should be defined and is the way the
mind processes it.

 

It can be defined mathematically in many ways. There is a particular one I
would go for though...

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Sunday, August 08, 2010 7:28 AM


To: agi
Subject: Re: [agi] How To Create General AI Draft2

 

You're waffling.

 

You say there's a pattern for chair - DRAW IT. Attached should help you.

 

Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.

 

You haven't identified any basic visual units  - you don't have any. Do you?
Yes/no. 

 

No. That's not funny, that's a waste.. And woolly and imprecise through
and through.

 

 

 

From: David Jones mailto:davidher...@gmail.com  

Sent: Sunday, August 08, 2010 1:59 PM

To: agi mailto:agi@v2.listbox.com  

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to propose are your example
problems that *you* don't understand how to solve. Just because *you* cannot
solve them, doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to a chair means that it is a
recognizable pattern. LOL. That fact that you don't realize this is quite
funny. 

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:

Dave:No... it is equivalent to saying that the whole world can be modeled as
if everything was made up of matter

 

And matter is... ?  Huh?

 

You clearly don't realise that your thinking is seriously woolly - and you
will pay a heavy price in lost time.

 

What are your basic world/visual-world analytic units  wh. you are
claiming to exist?  

 

You thought - perhaps think still - that *concepts* wh. are pretty
fundamental intellectual units of analysis at a certain level, could be
expressed as, or indeed, were patterns. IOW there's a fundamental pattern
for chair or table. Absolute nonsense. And a radical failure to
understand the basic nature of concepts which is that they are *freeform*
schemas, incapable of being expressed either as patterns or programs.

 

You had merely assumed that concepts could be expressed as patterns,but had
never seriously, visually analysed it. Similarly you are merely assuming
that the world can be analysed into some kind of visual units - but you
haven't actually done the analysis, have you? You don't have any of these
basic units to hand, do you? If you do, I suggest, reply instantly, naming a
few. You won't be able to do it. They don't exist.

 

Your whole approach to AGI is based on variations of what we can call
fundamental analysis - and it's wrong. God/Evolution hasn't built the
world with any kind of geometric, or other consistent, bricks. He/It is a
freeform designer. You have to start thinking outside the
box/brick/fundamental unit.

 

From: David Jones mailto:davidher...@gmail.com  

Sent: Sunday, August 08, 2010 5:12 AM

To: agi mailto:agi@v2.listbox.com  

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed. 

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk
wrote:

1) You don't define the difference between narrow AI and AGI - or make clear
why your approach is one and not the other


I removed this because my audience is for AI researchers... this is AGI 101.
I think it's clear that my design defines general as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.
 

 

2) Learning about the world won't cut it -  vast nos. of progs. claim they
can learn about the world - what's the difference between narrow AI and AGI
learning

RE: [agi] Nao Nao

2010-08-09 Thread John G. Rose
Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 



 

An unusually sophisticated ( somewhat expensive) promotional robot vid:

 

 
http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expre
sses-and-detects-emotions.html
http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expres
ses-and-detects-emotions.html


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread John G. Rose
Well, these artificial identities need to complete a loop. Say the
artificial identity acquires an email address, phone#, a physical address, a
bank account, logs onto Amazon and purchases stuff automatically it needs to
be able to put money into its bank account. So let's say it has a low profit
scheme to scalp day trading profits with its stock trading account. That's
the loop, it has to be able to make money to make purchases. And then
automatically file its taxes with the IRS. Then it's really starting to look
like a full legally functioning identity. It could persist in this fashion
for years. 

 

I would bet that these identities already exist. What happens when there are
many, many of them? Would we even know? 

 

John

 

From: Steve Richfield [mailto:steve.richfi...@gmail.com] 
Sent: Saturday, August 07, 2010 8:17 PM
To: agi
Subject: Re: [agi] Epiphany - Statements of Stupidity

 

Ian,

I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites, and whenever a
meeting was set up, it would issue a message to its human owners to alert
the police to go and arrest the pedophiles at the arranged time and place.
No human interaction was needed between arrests.

I can imagine an adaptation, wherein a program claims to be manufacturing
explosives, and is looking for other people to deliver those explosives.
With such a story line, there should be no problem arranging deliveries, at
which time you would arrest the would-be bombers.

I wish I could tell you more about the British project, but they were VERY
secretive. I suspect that some serious Googling would yield much more.

Hopefully you will find this helpful.

Steve
=

On Sat, Aug 7, 2010 at 1:16 PM, Ian Parker ianpark...@gmail.com wrote:

I wanted to see what other people's views were.My own view of the risks is
as follows. If the Turing Machine is built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed I feel that the
biological model is far more dangerous than the mathematical.

 

If on the other hand the TM was not isomorphic and made no attempt to be,
the dangers would be a lot less. Most Turing/Löbner entries are chatterboxes
that work on databases. The database being filled as you chat. Clearly the
system cannot go outside its database and is safe.

 

There is in fact some use for such a chatterbox. Clearly a Turing machine
would be able to infiltrate militant groups however it was constructed. As
for it pretending to be stupid, it would have to know in what direction it
had to be stupid. Hence it would have to be a good psychologist.

 

Suppose it logged onto a jihardist website, as well as being able to pass
itself off as a true adherent, it could also look at the other members and
assess their level of commitment and knowledge. I think that the true
Turing/Löbner  test is not working in a laboratory environment but they
should log onto jihardist sites and see how well they can pass themselves
off. If it could do that it really would have arrived. Eventually it could
pass itself off as a peniti to use the Mafia term and produce arguments
from the Qur'an against the militant position.

 

There would be quite a lot of contracts to be had if there were a realistic
prospect of doing this.

 

 

  - Ian Parker 

On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote:

 Philosophical question 2 - Would passing the TT assume human stupidity and

 if so would a Turing machine be dangerous? Not necessarily, the Turing
 machine could talk about things like jihad without
ultimately identifying with
 it.


Humans without augmentation are only so intelligent. A Turing machine would
be potentially dangerous, a really well built one. At some point we'd need
to see some DNA as ID of another extended TT.


 Philosophical question 3 :- Would a TM be a psychologist? I think it would
 have to be. Could a TM become part of a population simulation that would
 give us political insights.


You can have a relatively stupid TM or a sophisticated one just like humans.
It might be easier to pass the TT by not exposing too much intelligence.

John


 These 3 questions seem to me to be the really interesting ones.


   - Ian Parker





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/

Modify Your Subscription: https://www.listbox.com/member/?
https://www.listbox.com/member/?; 


Powered by Listbox: http://www.listbox.com

 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?; Modify Your Subscription

 http://www.listbox.com 

 


agi |  https://www.listbox.com/member

RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
statements of stupidity - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text. Language is both
intelligence enhancing and limiting. Human language is a protocol between
agents. So there is minimalist data transfer, I had no choice but to ...
is a compressed summary of potentially vastly complex issues. The mind gets
hung-up sometimes on this language of ours. Better off at times to think
less using English language and express oneself with a wider spectrum
communiqué. Doing a dance and throwing paint in the air for example, as some
*primitive* cultures actually do, conveys information also and is medium of
expression rather than using a restrictive human chat protocol.

 

BTW the rules of etiquette of the human language protocol are even more
potentially restricting though necessary for efficient and standardized data
transfer to occur. Like, TCP/IP for example. The Etiquette in TCP/IP is
like an OSI layer, akin to human language etiquette.

 

John

 

 

From: Steve Richfield [mailto:steve.richfi...@gmail.com] 



To All,

I have posted plenty about statements of ignorance, our probable inability
to comprehend what an advanced intelligence might be thinking, heidenbugs,
etc. I am now wrestling with a new (to me) concept that hopefully others
here can shed some light on.

People often say things that indicate their limited mental capacity, or at
least their inability to comprehend specific situations.

1)  One of my favorites are people who say I had no choice but to ...,
which of course indicates that they are clearly intellectually challenged
because there are ALWAYS other choices, though it may be difficult to find
one that is in all respects superior. While theoretically this statement
could possibly be correct, in practice I have never found this to be the
case.

2)  Another one recently from this very forum was If it sounds too good to
be true, it probably is. This may be theoretically true, but in fact was,
as usual, made as a statement as to why the author was summarily dismissing
an apparent opportunity of GREAT value. This dismissal of something BECAUSE
of its great value would seem to severely limit the authors prospects for
success in life, which probably explains why he spends so much time here
challenging others who ARE doing something with their lives.

3)  I used to evaluate inventions for some venture capitalists. Sometimes I
would find that some basic law of physics, e.g. conservation of energy,
would have to be violated for the thing to work. When I explained this to
the inventors, their inevitable reply was Yea, and they also said that the
Wright Brothers' plane would never fly. To this, I explained that the
Wright Brothers had invested ~200 hours of effort working with their crude
homemade wind tunnel, and ask what the inventors have done to prove that
their own invention would work.

4)  One old stupid standby, spoken when you have make a clear point that
shows that their argument is full of holes That is just your opinion. No,
it is a proven fact for you to accept or refute.

5)  Perhaps you have your own pet statements of stupidity? I suspect that
there may be enough of these to dismiss some significant fraction of
prospective users of beyond-human-capability (I just hate the word
intelligence) programs.

In short, semantic analysis of these statements typically would NOT find
them to be conspicuously false, and hence even an AGI would be tempted to
accept them. However, their use almost universally indicates some
short-circuit in thinking. The present Dr. Eliza program could easily
recognize such statements.

OK, so what? What should an AI program do when it encounters a stupid user?
Should some attempt be made to explain stupidity to someone who is almost
certainly incapable of comprehending their own stupidity? Stupidity is
forever is probably true, especially when expressed by an adult.

Note my own dismissal of a some past posters for insufficient mental ability
to understand certain subjects, whereupon they invariably come back
repeating the SAME flawed logic, after I carefully explained the breaks in
their logic. Clearly, I was just wasting my effort by continuing to interact
with these people.

Note that providing a stupid user with ANY output is probably a mistake,
because they will almost certainly misconstrue it in some way. Perhaps it
might be possible to dumb down the output to preschool-level, at least
that (small) part of the output that can be accurately stated in preschool
terms.

Eventually as computers continue to self-evolve, we will ALL be categorized
as some sort of stupid, and receive stupid-adapted output.

I wonder whether, ultimately, computers will have ANYTHING to say to us,
like any more than we now say to our dogs.

Perhaps the final winner of the Reverse Turing Test will remain completely
silent?!

You don't explain to your dog why you can't pay the rent from The Fall of
Colossus.

Any thoughts?

Steve





RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 The Turing test is not in fact a test of intelligence, it is a test of
similarity with
 the human. Hence for a machine to be truly Turing it would have to make
 mistakes. Now any useful system will be made as intelligent as we can
 make it. The TT will be seen to be an irrelevancy.
 
 Philosophical question no 1 :- How useful is the TT.
 

TT in its basic form is rather simplistic. It's thought of usually in its
ideal form, the determination of an AI or a human. I look at it more of
analogue verses discrete boolean. Much of what is out there is human with
computer augmentation and echoes of human interaction. It's blurry in
reality and the TT has been passed in some ways but not in its most ideal
way.

 As I said in my correspondence With Jan Klouk, the human being is stupid,
 often dangerously stupid.
 
 Philosophical question 2 - Would passing the TT assume human stupidity and
 if so would a Turing machine be dangerous? Not necessarily, the Turing
 machine could talk about things like jihad without
ultimately identifying with
 it.
 

Humans without augmentation are only so intelligent. A Turing machine would
be potentially dangerous, a really well built one. At some point we'd need
to see some DNA as ID of another extended TT.

 Philosophical question 3 :- Would a TM be a psychologist? I think it would
 have to be. Could a TM become part of a population simulation that would
 give us political insights.
 

You can have a relatively stupid TM or a sophisticated one just like humans.
It might be easier to pass the TT by not exposing too much intelligence.

John

 These 3 questions seem to me to be the really interesting ones.
 
 
   - Ian Parker 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Pretty worldchanging

2010-07-25 Thread John G. Rose
You have to give toast though to Net entities like Wikipedia, I'd dare say
one of humankind's greatest achievements. Then eventually over a few years
it'll be available as a plug-in, as a virtual trepan thus reducing the
effort of subsuming all that. And then maybe structural intelligence add-ins
so that abstract concepts need not be learned by medieval rote
conditioning. These humanity features are not far off. So instead of a $35
laptop a fifty cent liqua chip could be injected as a prole
inoculation/augmentation.

 

John

 

From: Boris Kazachenko [mailto:bori...@verizon.net] 
Sent: Saturday, July 24, 2010 5:50 PM
To: agi
Subject: Re: [agi] Pretty worldchanging

 

Maybe there are some students on this email list, who are wading through all
the BS and learning something about AGI, by following links and reading
papers mentioned here, etc.  Without the Net, how would these students learn
about AGI, in practice?  Such education would be far harder to come by and
less effective without the Net.  That's world-changing... ;-) ...

The Net saves time. Back in the day, one could spend a lifetime sifting
through paper in the library, or traveling the world to meet authorities.
Now you do some googling, realize that no one has a clue,  go on to do some
real work on your own. That's if you have the guts, of course.

intelligence-as-a-cognitive-algorithm
http://ntelligence-as-a-cognitive-algorithm 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Clues to the Mind: Illusions / Vision

2010-07-25 Thread John G. Rose
Here is an example of superimposed images where you have to have a
predisposed perception - 

 

http://www.youtube.com/watch?v=V1m0kCdC7co

 

John

 

From: deepakjnath [mailto:deepakjn...@gmail.com] 
Sent: Saturday, July 24, 2010 11:03 PM
To: agi
Subject: [agi] Clues to the Mind: Illusions / Vision

 

http://www.youtube.com/watch?v=QbKw0_v2clo
http://www.youtube.com/watch?v=QbKw0_v2clofeature=player_embedded
feature=player_embedded

What we see is not really what you see. Its what you see and what you know
you are seeing. The brain superimposes the predicted images to the viewed
image to actually have a perception of image.

cheers,
Deepak


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] How do we hear music

2010-07-24 Thread John G. Rose
 -Original Message-
 
 You have all missed one vital point. Music is repeating and it has a
symmetry.
 In dancing (song and dance) moves are repeated in a symmetrical pattern.
 
 Question why are we programmed to find symmetry? This question may be
 more core to AGI than appears at first sight. Chearly an AGI system will
have
 to look for symmetry and do what Hardy described as beautiful maths.
 

Symmetry is at the heart of everything; without symmetry the universe
collapses. Intelligence operates over symmetric verses non-symmetric IMO.
But everything is ultimately grounded in symmetry. 

BTW kind of related, was just watching this neat video - the soundtrack
needs to be redone though :)

http://www.youtube.com/watch?v=4dpRPTwsKJs

Why does the brain have bi-lateral symmetry I wonder and why is the heart
not symmetric? Some researchers say consciousness is both heart and brain.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread John G. Rose
Make sure you study that up YKY :)

 

John

 

From: YKY (Yan King Yin, 甄景贤) [mailto:generic.intellige...@gmail.com] 
Sent: Thursday, July 15, 2010 8:59 AM
To: agi
Subject: [agi] OFF-TOPIC: University of Hong Kong Library

 

 

Today, I went to the HKU main library: 

 

 

=)

KY


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ Description:
https://www.listbox.com/images/feed-icon-10x10.jpg|
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com/ Description:
https://www.listbox.com/images/listbox-logo-small.png

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
image001.jpgimage002.png

RE: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 Ok Off topic, but not as far as you might think. YKY has posted in Creating
 Artificial Intelligence on a collaborative project. It is quite important to 
 know
 exactly where he is. You see Taiwan uses the classical character set, The
 People's Republic uses a simplified character set.
 

The classical character set is much more artistic but more difficult to learn 
thus the simplified is becoming popular.  Like a social tendency of 
K-complexity minimalistic language langour. Less energy expended since less 
bits required for the symbols.
 

 Hong Kong was handed back to China in I think 1997. It is still outside the
 Great Firewall and (I presume) uses classical characters, although I don't
 really know. If we are to discuss transliteration schemes, translation and
 writing Chinese (PRC or Taiwan) on Western keyboards, it is important for us
 to know.
 
 I have just bashed up a Java program to write Arabic. You input Roman
 Buckwalter and it has an internal conversion table. The same thing could in
 principle be done for a load of character sets. In Chinese you would have to
 input two Western keys simultaneously. That can be done.
 

I always wondered - do language translators map from one language to another or 
do they map to a universal language first. And if there is a universal 
language what is it or.. what are they?

 I know HK is outside the Firewall because that is where Google has its proxy
 server. Is YKY there, do you know?
 

Uhm yes. He's been followed by the government censors into the HK library. 
They're thinking about sending him to re-education camp for being caught 
red-handed reading AI4U.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] New KurzweilAI.net site... with my silly article sillier chatbot ;-p ;) ....

2010-07-12 Thread John G. Rose
These video/rendered chatbots have huge potential and will be taken in many
different directions.

 

They are gradually over time approaching a p-zombie-esque situation.

 

They add multi-modal communication - body/facial language/expression and
prosody. So even if the text alone is not too good the simultaneous rending
of the multi-channel information adds some sort of legitimacy. Though in
these simple cases the bot only takes text as input so much of the
communication complexity
http://en.wikipedia.org/wiki/Communication_complexity  is running semi
half-duplex.

 

John

 

From: The Wizard [mailto:key.unive...@gmail.com] 
Sent: Monday, July 12, 2010 1:02 AM
To: agi
Subject: Re: [agi] New KurzweilAI.net site... with my silly article 
sillier chatbot ;-p ;) 

 

Have you guys talked to the army's artificial intelligence chat bot yet?

http://sgtstar.goarmy.com/ActiveAgentUI/Welcome.aspx

 

nothing really special other than the voice sounds really natural..

 

On Thu, Jul 8, 2010 at 11:09 PM, Mike Archbold jazzbo...@gmail.com wrote:

The concept of citizen science sounds great, Ben -- especially in
this age.  From my own perspective I feel like my ideas are good but
it falls short always of the rigor of a proper scientist, so I don't
have that pretense.  The internet obviously helps out a lot.The
plight of the solitary laborer is better than it used to be, I think,
due to the availability of information/research.

Mike Archbold


On Mon, Jul 5, 2010 at 8:52 PM, Ben Goertzel b...@goertzel.org wrote:
 Check out my article on the H+ Summit


http://www.kurzweilai.net/h-summit-harvard-the-rise-of-the-citizen-scientist

 and also the Ramona4 chatbot that Novamente LLC built for Ray Kurzweil
 a while back

 http://www.kurzweilai.net/ramona4/ramona.html

 It's not AGI at all; but it's pretty funny ;-)

 -- Ben



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 CTO, Genescient Corp
 Vice Chairman, Humanity+
 Advisor, Singularity University and Singularity Institute
 External Research Professor, Xiamen University, China
 b...@goertzel.org

 
 When nothing seems to help, I go look at a stonecutter hammering away
 at his rock, perhaps a hundred times without as much as a crack
 showing in it. Yet at the hundred and first blow it will split in two,
 and I know it was not that blow that did it, but all that had gone
 before.


 ---

 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?
https://www.listbox.com/member/?; 

 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?
https://www.listbox.com/member/?; 

Powered by Listbox: http://www.listbox.com




-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com  


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-11 Thread John G. Rose
Note:

 

Theorem 1.7.1 There eRectively exists a universal computer.

 

If you copy and paste this declaration the ff gets replaced with a circle
cap R :)

 

Not sure how this shows up...

 

John

 

From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Friday, July 09, 2010 8:50 AM
To: agi
Subject: Re: [agi] Solomonoff Induction is Not Universal and Probability
is not Prediction

 


To make this discussion more concrete, please look at

http://www.vetta.org/documents/disSol.pdf 

Section 2.5 gives a simple version of the proof that Solomonoff induction is
a powerful learning algorithm in principle, and Section 2.6 explains why it
is not practically useful.

What part of that paper do you think is wrong?

thx
ben



On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer jimbro...@gmail.com wrote:

On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote:

If you're going to argue against a mathematical theorem, your argument must
be mathematical not verbal.  Please explain one of

1) which step in the proof about Solomonoff induction's effectiveness you
believe is in error

2) which of the assumptions of this proof you think is inapplicable to real
intelligence [apart from the assumption of infinite or massive compute
resources]



 

Solomonoff Induction is not a provable Theorem, it is therefore a
conjecture.  It cannot be computed, it cannot be verified.  There are many
mathematical theorems that require the use of limits to prove them for
example, and I accept those proofs.  (Some people might not.)  But there is
no evidence that Solmonoff Induction would tend toward some limits.  Now
maybe the conjectured abstraction can be verified through some other means,
but I have yet to see an adequate explanation of that in any terms.  The
idea that I have to answer your challenges using only the terms you specify
is noise.

 

Look at 2.  What does that say about your Theorem.

 

I am working on 1 but I just said: I haven't yet been able to find a way
that could be used to prove that Solomonoff Induction does not do what Matt
claims it does.

  Z

What is not clear is that no one has objected to my characterization of the
conjecture as I have been able to work it out for myself.  It requires an
infinite set of infinitely computed probabilities of each infinite string.
If this characterization is correct, then Matt has been using the term
string ambiguously.  As a primary sample space: A particular string.  And
as a compound sample space: All the possible individual cases of the
substring compounded into one.  No one has yet to tell of his mathematical
experiments of using a Turing simulator to see what a finite iteration of
all possible programs of a given length would actually look like.

 

I will finish this later.

 

 

On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote:

Abram,

Solomoff Induction would produce poor predictions if it could be used to
compute them.  


Solomonoff induction is a mathematical, not verbal, construct.  Based on the
most obvious mapping from the verbal terms you've used above into
mathematical definitions in terms of which Solomonoff induction is
constructed, the above statement of yours is FALSE.

If you're going to argue against a mathematical theorem, your argument must
be mathematical not verbal.  Please explain one of

1) which step in the proof about Solomonoff induction's effectiveness you
believe is in error

2) which of the assumptions of this proof you think is inapplicable to real
intelligence [apart from the assumption of infinite or massive compute
resources]

Otherwise, your statement is in the same category as the statement by the
protagonist of Dostoesvky's Notes from the Underground --

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too.

;-)

 

Secondly, since it cannot be computed it is useless.  Third, it is not the
sort of thing that is useful for AGI in the first place.


I agree with these two statements

-- ben G 

 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ | Modify Your Subscription


 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/ 


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

 
When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, 

RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
An AGI may not really think like we do, it may just execute code. 

 

Though I suppose you could program a lot of fuzzy loops and idle
speculation, entertaining possibilities, having human think envy.. 

 

John

 

From: Matt Mahoney [mailto:matmaho...@yahoo.com] 
Sent: Friday, July 02, 2010 8:21 AM
To: agi
Subject: Re: [agi] masterpiece on an iPad

 

AGI is all about building machines that think, so you don't have to.


 

-- Matt Mahoney, matmaho...@yahoo.com

 

 

  _  

From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 9:37:51 AM
Subject: Re: [agi] masterpiece on an iPad

that's like saying cartography or cartoons could be done a lot faster if
they just used cameras -  ask Michael to explain what the hand can draw that
the camera can't

 

From: Matt Mahoney mailto:matmaho...@yahoo.com  

Sent: Friday, July 02, 2010 2:21 PM

To: agi mailto:agi@v2.listbox.com  

Subject: Re: [agi] masterpiece on an iPad

 

It could be done a lot faster if the iPad had a camera.


 

-- Matt Mahoney, matmaho...@yahoo.com 

 

 

  _  

From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 6:28:58 AM
Subject: [agi] masterpiece on an iPad

http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-crea
tes-masterpiece-on-an-iPad.html

 

McLuhan argues that touch is the central sense - the one that binds the
others. He may be right. The i-devices integrate touch into intelligence.


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ | Modify Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription 

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
Sounds like everyone would want one, or, one AGI could service us all. And
that AGI could do all of the heavy thinking for us. We could become pleasure
seeking, fibrillating blobs of flesh and bone suckling on the electronic
brains of one big giant AGI.

John

 

From: Matt Mahoney [mailto:matmaho...@yahoo.com] 
Sent: Friday, July 02, 2010 1:16 PM
To: agi
Subject: Re: [agi] masterpiece on an iPad

 

An AGI only has to predict your behavior so that it can serve you better by
giving you what you want without you asking for it. It is not a copy of your
mind. It is a program that can call a function that simulates your mind for
some arbitrary purpose determined by its programmer.


 

-- Matt Mahoney, matmaho...@yahoo.com

 

 

  _  

From: John G. Rose johnr...@polyplexic.com
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 11:39:23 AM
Subject: RE: [agi] masterpiece on an iPad

An AGI may not really think like we do, it may just execute code. 

 

Though I suppose you could program a lot of fuzzy loops and idle
speculation, entertaining possibilities, having human think envy.. 

 

John

 

From: Matt Mahoney [mailto:matmaho...@yahoo.com] 
Sent: Friday, July 02, 2010 8:21 AM
To: agi
Subject: Re: [agi] masterpiece on an iPad

 

AGI is all about building machines that think, so you don't have to.


 

-- Matt Mahoney, matmaho...@yahoo.com

 

 

  _  

From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 9:37:51 AM
Subject: Re: [agi] masterpiece on an iPad

that's like saying cartography or cartoons could be done a lot faster if
they just used cameras -  ask Michael to explain what the hand can draw that
the camera can't

 

From: Matt Mahoney mailto:matmaho...@yahoo.com  

Sent: Friday, July 02, 2010 2:21 PM

To: agi mailto:agi@v2.listbox.com  

Subject: Re: [agi] masterpiece on an iPad

 

It could be done a lot faster if the iPad had a camera.


 

-- Matt Mahoney, matmaho...@yahoo.com 

 

 

  _  

From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Fri, July 2, 2010 6:28:58 AM
Subject: [agi] masterpiece on an iPad

http://www.telegraph.co.uk/culture/culturevideo/artvideo/7865736/Artist-crea
tes-masterpiece-on-an-iPad.html

 

McLuhan argues that touch is the central sense - the one that binds the
others. He may be right. The i-devices integrate touch into intelligence.


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ | Modify Your Subscription



 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription 



 https://www.listbox.com/member/archive/rss/303/  


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription



 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 


 https://www.listbox.com/member/archive/rss/303/ agi | Archives | Modify
Your Subscription

 https://www.listbox.com/member/archive/rss/303/ 

 https://www.listbox.com/member/archive/rss/303/  




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
It's just that something like world hunger is so complex AGI would have to
master simpler problems. Also, there are many people and institutions that
have solutions to world hunger already and they get ignored. So an AGI would
have to get established over a period of time for anyone to really care what
it has to say about these types of issues. It could simulate things and come
up with solutions but they would not get implemented unless it had power to
influence. So in addition AGI would need to know how to make people
listen... and maybe obey.

 

IMO I think AGI will take the embedded route - like other types of computer
systems - IRS, weather, military, Google, etc. and we become dependent
intergenerationally so that it is impossible to survive without. At that
point AGI's will have power to influence.

 

John

 

From: Ian Parker [mailto:ianpark...@gmail.com] 
Sent: Saturday, June 26, 2010 2:19 PM
To: agi
Subject: Re: [agi] The problem with AGI per Sloman

 

Actually if you are serious about solving a political or social question
then what you really need is CRESS http://cress.soc.surrey.ac.uk/web/home
. The solution of World Hunger is BTW a political question not a technical
one. Hunger is largely due to bad governance in the Third World. How do you
get good governance. One way to look at the problem is via CRESS and run
simulations in second life.

 

One thing which has in fact struck me in my linguistic researches is this.
Google Translate is based on having Gigabytes of bilingual text. The fact
that GT is so bad at technical Arabic indicates the absence of such
bilingual text. Indeed Israel publishes more papers than the whole of the
Islamic world. This is of profound importance for understanding the Middle
East. I am sure CRESS would confirm this.

 

AGI would without a doubt approach political questions by examining all the
data about the various countries before making a conclusion. AGI would
probably be what you would consult for long term solutions. It might not be
so good at dealing with something (say) like the Gaza flotilla. In coing to
this conclusion I have the University of Surrey and CRESS in mind.

 

 

  - Ian Parker

On 26 June 2010 14:36, John G. Rose johnr...@polyplexic.com wrote:

 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]


 How do you solve World Hunger? Does AGI have to. I think if it is truly
G it
 has to. One way would be to find out what other people had written on the
 subject and analyse the feasibility of their solutions.



Yes, that would show the generality of their AGI theory. Maybe a particular
AGI might be able to work with some problems but plateau out on its
intelligence for whatever reason and not be able to work on more
sophisticated issues. An AGI could be hardcoded perhaps and not improve
much, whereas another AGI might improve to where it could tackle vast
unknowns at increasing efficiency. There are common components in tackling
unknowns, complexity classes for example, but some AGI systems may operate
significantly more efficiently and improve. Human brains at some point may
plateau without further augmentation though I'm not sure we have come close
to what the brain is capable of.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?
https://www.listbox.com/member/?; 
Powered by Listbox: http://www.listbox.com

 


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 So an AGI would have to get established over a period of time for anyone
to
 really care what it has to say about these types of issues. It could
simulate
 things and come up with solutions but they would not get implemented
 unless it had power to influence. So in addition AGI would need to know
how
 to make people listen... and maybe obey.
 
 This is CRESS. CRESS would be an accessible option.
 

Yes, I agree, it looks like that.

 IMO I think AGI will take the embedded route - like other types of
computer
 systems - IRS, weather, military, Google, etc. and we become dependent
 intergenerationally so that it is impossible to survive without. At that
point
 AGI's will have power to influence.
 
 Look! The point is this:-
 
 1) An embedded system is AI not AGI.
 
 2) AGI will arise simply because all embedded systems are themselves
 searchable.
 

A narrow embedded system, like say a DMV computer network is not an AGI.
But that doesn't mean an AGI could not perform that function. In fact, AGI
might arise out of these systems needing to become more intelligent. And an
AGI system, that same AGI software may be used for a DMV, a space navigation
system, IRS, NASDAQ, etc. it could adapt. .. efficiently. There are some
systems that tout multi-use now but these are basically very narrow AI. AGI
will be able to apply it's intelligence across domains and should be able to
put its feelers into all the particular subsystems. Although I foresee some
types of standard interfaces perhaps into these narrow AI computer networks;
some sort of intelligence standards maybe, or the AGI just hooks into the
human interfaces...

An AGI could become a God but also it could do some useful stuff like run
everyday information systems just like people with brains have to perform
menial labor.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] The problem with AGI per Sloman

2010-06-26 Thread John G. Rose
 -Original Message-
 From: Ian Parker [mailto:ianpark...@gmail.com]
 
 
 How do you solve World Hunger? Does AGI have to. I think if it is truly
G it
 has to. One way would be to find out what other people had written on the
 subject and analyse the feasibility of their solutions.
 
 

Yes, that would show the generality of their AGI theory. Maybe a particular
AGI might be able to work with some problems but plateau out on its
intelligence for whatever reason and not be able to work on more
sophisticated issues. An AGI could be hardcoded perhaps and not improve
much, whereas another AGI might improve to where it could tackle vast
unknowns at increasing efficiency. There are common components in tackling
unknowns, complexity classes for example, but some AGI systems may operate
significantly more efficiently and improve. Human brains at some point may
plateau without further augmentation though I'm not sure we have come close
to what the brain is capable of. 

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] The problem with AGI per Sloman

2010-06-24 Thread John G. Rose
I think some confusion occurs where AGI researchers want to build an
artificial person verses artificial general intelligence. An AGI might be
just a computational model running in software that can solve problems
across domains.  An artificial person would be much else in addition to AGI.

 

With intelligence engineering and other engineering that artificial person
could be built, or some interface where it appears to be a person. And a
huge benefit is in having artificial people to do things that real people
do. But pursuing AGI need not have to be pursuit of building artificial
people.

 

Also, an AGI need not have to be able to solve ALL problems initially.
Coming out and asking why some AGI theory wouldn't be able to figure out how
to solve some problem like say, world hunger, I mean WTF is that?

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, June 24, 2010 5:33 AM
To: agi
Subject: [agi] The problem with AGI per Sloman

 

One of the problems of AI researchers is that too often they start off with
an inadequate
understanding of the problems and believe that solutions are only a few
years away. We need an educational system that not only teaches techniques
and solutions, but also an understanding of problems and their difficulty -
which can come from a broader multi-disciplinary education. That could speed
up progress.

A. Sloman

 

( who else keeps saying that?)


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 My underlying thought here is that we may all be working on the wrong
 problems. Instead of working on the particular analysis methods (AGI) or
 self-organization theory (NN), perhaps if someone found a solution to
large-
 network stability, then THAT would show everyone the ways to their
 respective goals.
 

For a distributed AGI this is a fundamental problem. Difference is that a
power grid is such a fixed network. A distributed AGI need not be that
fixed, it could lose chunks of itself but grow them out somewhere else.
Though a distributed AGI could be required to run as a fixed network. 

Some traditional telecommunications networks are power grid like. They have
a drastic amount of stability and healing functions built-in as have been
added over time. 

Solutions for large-scale network stabilities would vary per network
topology, function, etc.. Virtual networks play a large part, this would be
related to the network's ability to reconstruct itself meaning knowing how
to heal, reroute, optimize and grow.. 

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 John,
 
 Your comments appear to be addressing reliability, rather than
stability...

Both can be very interrelated. It can be an oversimplification to separate
them, or too impractical/theoretical. 

 On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com
 wrote:
  -Original Message-
  From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
  My underlying thought here is that we may all be working on the wrong
  problems. Instead of working on the particular analysis methods (AGI) or
  self-organization theory (NN), perhaps if someone found a solution to
 large-
  network stability, then THAT would show everyone the ways to their
  respective goals.
 
 For a distributed AGI this is a fundamental problem. Difference is that a
 power grid is such a fixed network.
 
 Not really. Switches may connect or disconnect Canada, equipment is
 constantly failing and being repaired, etc. In any case, this doesn't seem
to be
 related to stability, other than it being a lot easier to analyze a fixed
network
 rather than a variable network.
 

There are a fixed amount of copper wires going into a node. 

The network is usually a hierarchy of networks. Fixed may be more
limiting, sophisticated and kludged rendering it more difficult to deal with
so don't assume.

 A distributed AGI need not be that
 fixed, it could lose chunks of itself but grow them out somewhere else.
 Though a distributed AGI could be required to run as a fixed network.
 
 Some traditional telecommunications networks are power grid like. They
 have
 a drastic amount of stability and healing functions built-in as have been
 added over time.
 
 However, there is no feedback, so stability isn't even a potential issue.

No feedback? Remember some traditional telecommunications networks run over
copper with power, and are analog; there are huge feedback issues of which
many taken care of at a lower signaling level or with external equipment
such as echo-cancellers. Again though, there is a hierarchy and mesh of
various networks here. I've suggested traditional telecommunications since
they are vastly more complex, real-time and many other networks have learned
from it.

 
 Solutions for large-scale network stabilities would vary per network
 topology, function, etc..
 
 However, there ARE some universal rules, like the 12db/octave requirement.
 

Really? Do networks such as botnets really care about this? Or does it
apply?

 Virtual networks play a large part, this would be
 related to the network's ability to reconstruct itself meaning knowing how
 to heal, reroute, optimize and grow..
 
 Again, this doesn't seem to relate to millisecond-by-millisecond
stability.
 

It could be as the virtual network might contain images of the actual
network, as an internal model and use this for changing the network
structure for a more stable one if there were timing issues...

Just some thoughts...

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 
 Really? Do networks such as botnets really care about this? Or does it
 apply?
 
 Anytime negative feedback can become positive feedback because of delays
 or phase shifts, this becomes an issue. Many competent EE people fail to
see
 the phase shifting that many decision processes can introduce, e.g. by
 responding as quickly as possible, finite speed makes finite delays and
sharp
 frequency cutoffs, resulting in instabilities at those frequency cutoff
points
 because of violation of the 12db/octave rule. Of course, this ONLY applies
in
 feedback systems and NOT in forward-only systems, except at the real-world
 point of feedback, e.g. the bots themselves.
 
 Of course, there is the big question of just what it is that is being
 attenuated in the bowels of an intelligent system. Usually, it is
 computational delays making sharp frequency-limited attenuation at their
 response speeds.
 
 Every gamer is well aware of the oscillations that long ping times can
 introduce in people's (and intelligent bot's) behavior. Again, this is
basically
 the same 12db/octave phenomenon.
 

OK, excuse my ignorance on this - a design issue in distributed intelligence
is how to split up things amongst the agents. I see it as a hierarchy of
virtual networks, with the lowest level being the substrate like IP sockets
or something else but most commonly TCP/UDP.

The protocols above that need to break up the work, and the knowledge
distribution, so the 12db/octave phenomenon must apply there too. 

I assume any intelligence processing engine must include a harmonic
mathematical component since ALL things are basically network, especially
intelligence. 

This might be an overly aggressive assumption but it seems from observance
that intelligence/consciousness exhibits some sort of harmonic property, or
levels.

John







---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
 -Original Message-
 From: Steve Richfield [mailto:steve.richfi...@gmail.com]
 John,
 
 Hmmm, I though that with your EE background, that the 12db/octave would
 bring back old sophomore-level course work. OK, so you were sick that day.
 I'll try to fill in the blanks here...

Thanks man. Appreciate it.  What little EE training I did undergo was brief
and painful :)

 On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 
  Of course, there is the big question of just what it is that is being
  attenuated in the bowels of an intelligent system. Usually, it is
  computational delays making sharp frequency-limited attenuation at their
  response speeds.
 
  Every gamer is well aware of the oscillations that long ping times can
  introduce in people's (and intelligent bot's) behavior. Again, this is
 basically
  the same 12db/octave phenomenon.
 
 OK, excuse my ignorance on this - a design issue in distributed
intelligence
 is how to split up things amongst the agents. I see it as a hierarchy of
 virtual networks, with the lowest level being the substrate like IP
sockets
 or something else but most commonly TCP/UDP.
 
 The protocols above that need to break up the work, and the knowledge
 distribution, so the 12db/octave phenomenon must apply there too.
 
 RC low-pass circuits exhibit 6db/octave rolloff and 90 degree phase
shifts.
 12db/octave corresponds to a 180 degree phase shift. More than 180
 degrees and you are into positive feedback. At 24db/octave, you are at
 maximum positive feedback, which makes great oscillators.
 
 The 12 db/octave limit applies to entire loops of components, and not to
the
 individual components. This means that you can put a lot of 1db/octave
 components together in a big loop and get into trouble. This is commonly
 encountered in complex analog filter circuits that incorporate 2 or more
op-
 amps in a single feedback loop. Op amps are commonly compensated to
 have 6db/octave rolloff. Put 2 of them together and you right at the
precipice
 of 12db/octave. Add some passive components that have their own rolloffs,
 and you are over the edge of stability, and the circuit sits there and
oscillates
 on its own. The usual cure is to replace one of the op-amps with an
 uncompensated op-amp with ~0db/octave rolloff, until it gets to its
 maximum frequency, whereupon it has an astronomical rolloff. However,
 that astronomical rolloff works BECAUSE the loop gain at that frequency is
 less than 1, so the circuit cannot self-regenerate and oscillate at that
 frequency.
 
 Considering the above and the complexity of neural circuits, it would seem
 that neural circuits would have to have absolutely flat responses and some
 central rolloff mechanism, maybe one of the ~200 different types of
 neurons, or alternatively, would have to be able to custom-tailor their
 responses to work in concert to roll off at a reasonable rate. A third
 alternative is discussed below, where you let them go unstable, and
actually
 utilize the instability to achieve some incredible results.
 
 I assume any intelligence processing engine must include a harmonic
 mathematical component
 
 I'm not sure I understand what you are saying here. Perhaps you have
 discovered the recipe for the secret sauce?

Uhm, no I was merely asking your opinion if the 12db/octave phenomena
applies to a non-EE based intelligence system. If it could be lifted off of
its EE nativeness and applied to ANY network since there are latencies in
ALL networks.  BUT it sounds as if it is heavily analog circuit based,
though there may be some *analogue in an informational network. And this
would be represented under a different technical name or formula most
likely.

 
 since ALL things are basically network, especially
 intelligence.
 
 Most of the things we call networks really just pass information along
and
 do NOT have feedback mechanisms. Power control is an interesting
 exception, but most of those guys are unable to even carry on an
intelligent
 conversation about the subject. No wonder the power networks have
 problems.

Steve - I actually did work in nuclear power engineering many years ago and
remember the Neanderthals involved in that situation believe it or not. But
I will say they strongly emphasized practicality and safety verses
theoretics and academics. And especially trial and error was something to be
frowned upon ... for obvious reasons. IOW, do not rock the boat since there
are real reasons for them being that way!

 
 This might be an overly aggressive assumption but it seems from observance
 that intelligence/consciousness exhibits some sort of harmonic property,
or
 levels.
 
 You apparently grok something about harmonics that I don't (yet) grok.
 Please enlighten me.
 

I was wondering if YOU could envision a harmonic correlation between certain
electrical circuit phenomenon and intelligence. I've just suspected that
there are harmonic properties in intelligence/consciousness. IOW

RE: [agi] just a thought

2009-01-14 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 --- On Wed, 1/14/09, Christopher Carr cac...@pdx.edu wrote:
 
  Problems with IQ notwithstanding, I'm confident that, were my silly IQ
 of 145 merely doubled, I could convince Dr. Goertzel to give me the
 majority of his assets, including control of his businesses. And if he
 were to really meet someone that bright, he would be a fool or
 super-human not to do so, which he isn't (a fool, that is).
 
 First, if you knew what you would do if you were twice as smart, you
 would already be that smart. Therefore you don't know.
 
 Second, you have never even met anyone with an IQ of 290. How do you
 know what they would do?
 
 How do you measure an IQ of 100n?
 
 - Ability to remember n times as much?
 - Ability to learn n times faster?
 - Ability to solve problems n times faster?
 - Ability to do the work of n people?
 - Ability to make n times as much money?
 - Ability to communicate with n people at once?
 
 Please give me an IQ test that measures something that can't be done by
 n log n people (allowing for some organizational overhead).
 

How do you measure the collective IQ of humanity? Individual IQ's are just a
subset.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


RE: [agi] initial reaction to A2I2's call center product

2009-01-12 Thread John G. Rose
 From: Ben Goertzel [mailto:b...@goertzel.org]

 Sent: Monday, January 12, 2009 3:42 AM

 To: agi@v2.listbox.com

 Subject: [agi] initial reaction to A2I2's call center product

 

 AGI company A2I2 has released a product for automating call center

 functionality, see...

 

 http://www.smartaction.com/index.html

 

 

 

I'm diggin' it. Telephony and AGI merge. Either telephony was going to come
to AGI or AGI was going to come to telephony. At some point they needed to
embrace.

 

Picture it as resource sharing. 

 

I'm definitely interested in what going on with this...

 

John

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


RE: [agi] initial reaction to A2I2's call center product

2009-01-12 Thread John G. Rose
 From: Bob Mottram [mailto:fuzz...@gmail.com]
 
 2009/1/12 Ben Goertzel b...@goertzel.org:
  AGI company A2I2 has released a product for automating call center
  functionality
 
 
 We value your interest in our AGI related service.
 
 If you agree that AGI can have useful applications for call centres,
 press 1
 
 If our AGI repeatedly misinterprets your speech, because it was
 trained on an Australian accent where all statements actually sound
 like questions, press 2
 
 If you want to listen to some Dire Straits track which is quite good
 the first time but becomes increasingly annoying as it is played over
 and over again for ten minutes, press 3
 
 If you wish to be directly connected to our superintelligence, which
 often gives a cryptic reply before hanging up, press 4
 
 For all other enquiries, press 5 repeatedly in an infuriated manner.
 
 

Well when you press 4 to get to the superintelligence it's still some remote
person on the other side of the world but this time they're doing
text-to-speech that plays a semi-robotic voice, so technically it's still an
Artificial General Intelligence HAR HAR HAR just kidding...

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=126863270-d7b0b0
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
If the agents were p-zombies or just not conscious they would have different
motivations.

 

Consciousness has properties of communication protocol and effects
inter-agent communication. The idea being it enhances agents' existence and
survival. I assume it facilitates collective intelligence, generally. For a
multi-agent system with a goal of compression or prediction the agent
consciousness would have to be catered.  So introducing - 

Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X
to the agents would give them more glue if they expended that
consciousness on one another. The communications dynamics of the system
would change verses a similar non-conscious multi-agent system.

 

John

 

From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Monday, December 29, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Universal intelligence test benchmark

 


Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney matmaho...@yahoo.com wrote:

--- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

  What does consciousness have to do with the rest of your argument?
 

 Multi-agent systems should need individual consciousness to
 achieve advanced
 levels of collective intelligence. So if you are
 programming a multi-agent
 system, potentially a compressor, having consciousness in
 the agents could
 have an intelligence amplifying effect instead of having
 non-conscious
 agents. Or some sort of primitive consciousness component
 since higher level
 consciousness has not really been programmed yet.

 Agree?

No. What do you mean by consciousness?

Some people use consciousness and intelligence interchangeably. If that
is the case, then you are just using a circular argument. If not, then what
is the difference?


-- Matt Mahoney, matmaho...@yahoo.com








---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
The main point being consciousness effects multi-agent collective
intelligence. Theoretically it could be used to improve a goal of
compression since compression and intelligence are related though
compression seems more narrow, or attempting to compress that is.

 

Either way this is not nonsense. Contemporary compression has yet to get
very close to max theoretical so exploring the space of potential
mechanisms, especially intelligence related facets like consciousness and
multi-agent consciousness can be potential candidates for a new hack? I
think though that attempting to get close to max compression is not as
related to a goal of an efficient compression... 

 

John

 

From: Matt Mahoney [mailto:matmaho...@yahoo.com] 
Sent: Tuesday, December 30, 2008 8:47 AM
To: agi@v2.listbox.com
Subject: RE: [agi] Universal intelligence test benchmark

 


John,
So if consciousness is important for compression, then I suggest you write
two compression programs, one conscious and one not, and see which one
compresses better. 

Otherwise, this is nonsense.

-- Matt Mahoney, matmaho...@yahoo.com

--- On Tue, 12/30/08, John G. Rose johnr...@polyplexic.com wrote:

From: John G. Rose johnr...@polyplexic.com
Subject: RE: [agi] Universal intelligence test benchmark
To: agi@v2.listbox.com
Date: Tuesday, December 30, 2008, 9:46 AM

If the agents were p-zombies or just not conscious they would have different
motivations.

 

Consciousness has properties of communication protocol and effects
inter-agent communication. The idea being it enhances agents' existence and
survival. I assume it facilitates collective intelligence, generally. For a
multi-agent system with a goal of compression or prediction the agent
consciousness would have to be catered.  So introducing - 

Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X
to the agents would give them more glue if they expended that
consciousness on one another. The communications dynamics of the system
would change verses a similar non-conscious multi-agent system.

 

John

 

From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Monday, December 29, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Universal intelligence test benchmark

 


Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney 
mailto:matmaho...@yahoo.com matmaho...@yahoo.com wrote:

--- On Mon, 12/29/08, John G. Rose  mailto:johnr...@polyplexic.com
johnr...@polyplexic.com wrote:

  What does consciousness have to do with the rest of your argument?
 

 Multi-agent systems should need individual consciousness to
 achieve advanced
 levels of collective intelligence. So if you are
 programming a multi-agent
 system, potentially a compressor, having consciousness in
 the agents could
 have an intelligence amplifying effect instead of having
 non-conscious
 agents. Or some sort of primitive consciousness component
 since higher level
 consciousness has not really been programmed yet.

 Agree?

No. What do you mean by consciousness?

Some people use consciousness and intelligence interchangeably. If that
is the case, then you are just using a circular argument. If not, then what
is the difference?


-- Matt Mahoney,  mailto:matmaho...@yahoo.com matmaho...@yahoo.com




  _  


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?; Modify Your Subscription

 http://www.listbox.com 

  _  


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
 Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sun, 12/28/08, John G. Rose johnr...@polyplexic.com wrote:
 
  So maybe for improved genetic
  algorithms used for obtaining max compression there needs to be a
  consciousness component in the agents? Just an idea I think there
 is
  potential for distributed consciousness inside of command line
 compressors
  :)
 
 No, consciousness (as the term is commonly used) is the large set of
 properties of human mental processes that distinguish life from death,
 such as ability to think, learn, experience, make decisions, take
 actions, communicate, etc. It is only relevant as an independent
 concept to agents that have a concept of death and the goal of avoiding
 it. The only goal of a compressor is to predict the next input symbol.
 

Well that's a question. Does death somehow enhance a lifeforms' collective
intelligence? Agents competing over finite resources.. I'm wondering if
there were multi-agent evolutionary genetics going on would there be a
finite resource of which there would be a relation to the collective goal of
predicting the next symbol. Agent knowledge is not only passed on in their
genes, it is also passed around to other agents Does agent death hinder
advances in intelligence or enhance it? And then would the intelligence
collected thus be applicable to the goal. And if so, consciousness may be
valuable.

John 



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:
 
  Agent knowledge is not only passed on in their
  genes, it is also passed around to other agents Does agent death
 hinder
  advances in intelligence or enhance it? And then would the
 intelligence
  collected thus be applicable to the goal. And if so, consciousness
 may be
  valuable.
 
 What does consciousness have to do with the rest of your argument?
 

Multi-agent systems should need individual consciousness to achieve advanced
levels of collective intelligence. So if you are programming a multi-agent
system, potentially a compressor, having consciousness in the agents could
have an intelligence amplifying effect instead of having non-conscious
agents. Or some sort of primitive consciousness component since higher level
consciousness has not really been programmed yet. 

Agree?

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


[agi] Alternative Cicuitry

2008-12-28 Thread John G. Rose
Reading this - 

http://www.nytimes.com/2008/12/23/health/23blin.html?ref=science

 

makes me wonder what other circuitry we have that's discouraged from being
accepted.

 

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-28 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote:
 
  Well I think consciousness must be some sort of out of band
 intelligence
  that bolsters an entity in terms of survival. Intelligence probably
  stratifies or optimizes in zonal regions of similar environmental
  complexity, consciousness being one or an overriding out-of-band
 one...
 
 No, consciousness only seems mysterious because human brains are
 programmed that way. For example, I should logically be able to
 convince you that pain is just a signal that reduces the probability
 of you repeating whatever actions immediately preceded it. I can't do
 that because emotionally you are convinced that pain is real.
 Emotions can't be learned the way logical facts can, so emotions always
 win. If you could accept the logical consequences of your brain being
 just a computer, then you would not pass on your DNA. That's why you
 can't.
 
 BTW the best I can do is believe both that consciousness exists and
 consciousness does not exist. I realize these positions are
 inconsistent, and I leave it at that.
 

Consciousness must be a component of intelligence. For example - to pass on
DNA for humans, they need to be conscious, or have been up to this point.
Humans only live approx. 80 years. Intelligence is really a multi-agent
thing, IOW our individual intelligence has come about through the genetic
algorithm of humanity, we are really a distributed intelligence and
theoretically AGI will be born out of that. So maybe for improved genetic
algorithms used for obtaining max compression there needs to be a
consciousness component in the agents? Just an idea I think there is
potential for distributed consciousness inside of command line compressors
:)

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-27 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote:
 
How does consciousness fit into your compression
intelligence modeling?
  
   It doesn't. Why is consciousness important?
  
 
  I was just prodding you on this. Many people on this list talk about
 the
  requirements of consciousness for AGI and I was imagining some sort
 of
  consciousness in one of your command line compressors :)
  I've yet to grasp
  the relationship between intelligence and consciousness though lately
 I
  think consciousness may be more of an evolutionary social thing. Home
 grown
  digital intelligence, since it is a loner, may not require much
  consciousness IMO..
 
 What we commonly call consciousness is a large collection of features
 that distinguish living human brains from dead human brains: ability to
 think, communicate, perceive, make decisions, learn, move, talk, see,
 etc. We only attach significance to it because we evolved, like all
 animals, to fear a large set of things that can kill us.
 


Well I think consciousness must be some sort of out of band intelligence
that bolsters an entity in terms of survival. Intelligence probably
stratifies or optimizes in zonal regions of similar environmental
complexity, consciousness being one or an overriding out-of-band one...

 
 I was hoping to discover an elegant theory for AI. It didn't quite work
 that way. It seems to be a kind of genetic algorithm: make random
 changes to the code and keep the ones that improve compression.
 

Is this true for most data? For example would PI digit compression attempts
result in genetic emergences the same as say compressing environmental
noise? I'm just speculating that genetically originated data would require
compression avenues of similar algorithmic complexity descriptors, for
example PI digit data does not originate genetically so compression attempts
would not show genetic emergences as chained as say environmental
noise basically I'm asking if you can tell the difference from data that
has a genetic origination ingredient verses all non-genetic...

John








---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote:
 
  Humans aren't particularly good at compressing data. Does this mean
  humans aren't intelligent, or is it a poor definition of
 intelligence?
 
 Humans are very good at predicting sequences of symbols, e.g. the next
 word in a text stream. However, humans are not very good at resetting
 their mental states and deterministically reproducing the exact
 sequence of learning steps and assignment of probabilities, which is
 what you need to decompress the data. Fortunately this is not a problem
 for computers.
 

Human memory storage may be lossy compression and recall may be
decompression. Some very rare individuals remember every day of their life
in vivid detail, not sure what that means in terms of memory storage.

How does consciousness fit into your compression intelligence modeling?

The thing about the word compression is that it is bass-ackwards when
talking about intelligence. The word describes kind of an external effect,
instead of an internal reconfiguration/re-representation. Also there is a
difference between a goal of achieving maximum compression verses a goal of
achieving a high efficiency data description. Max compression implies hacks,
kludges and a large decompressor. 

Here is a simple example of human memory compression/decompression - When
you think of space, air or emptiness like driving across Kansas, looking at
the moon, or waiting idly over a period of time, do you store the emptiness
and redundantness or does it get compressed out? The trip across Kansas you
remember the starting point, rest stops, and the end, not the full duration.
It's a natural compression. In fact I'd say this is a partially lossless
compression though more lossy... maybe it is incidental but it is still
there.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
  How does consciousness fit into your compression
  intelligence modeling?
 
 It doesn't. Why is consciousness important?
 

I was just prodding you on this. Many people on this list talk about the
requirements of consciousness for AGI and I was imagining some sort of
consciousness in one of your command line compressors :) I've yet to grasp
the relationship between intelligence and consciousness though lately I
think consciousness may be more of an evolutionary social thing. Home grown
digital intelligence, since it is a loner, may not require much
consciousness IMO..

  Max compression implies hacks, kludges and a large decompressor.
 
 As I discovered with the large text benchmark.
 

Yep and the behavior of the metrics near max theoretical compression is
erratic I think?

john



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Relevance of SE in AGI

2008-12-22 Thread John G. Rose
I've been experimenting with extending OOP to potentially implement
functionality that could make a particular AGI design easier to build.

 

The problem with SE is that it brings along much baggage that can totally
obscure AGI thinking.

 

Many AGI people and AI people are automatic top of the line software
engineers. So the type of SE for AGI is different than typical SE and the
challenges are different.

 

I think though that proto-AGI's will emerge from hybrid SE AGI organizations
either independent or embedded within larger orgs.  Though some AGI
principles are so tantalizing close to a potential software implementation
you can almost taste it... though these typically turn out to be mirages...

 

John

 

 

From: Valentina Poletti [mailto:jamwa...@gmail.com] 
Sent: Saturday, December 20, 2008 6:29 PM
To: agi@v2.listbox.com
Subject: [agi] Relevance of SE in AGI

 

I have a question for you AGIers.. from your experience as well as from your
background, how relevant do you think software engineering is in developing
AI software and, in particular AGI software? Just wondering.. does software
verification as well as correctness proving serve any use in this field? Or
is this something used just for Nasa and critical applications? 

 

Valentina

  _  


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
 Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-20 Thread John G. Rose
 From: Mike Tintner [mailto:tint...@blueyonder.co.uk]
 
 Sound silly? Arguably the most essential requirement for a true human-
 level
 GI is to be able to consider any object whatsoever as a thing. It's a
 cognitively awesome feat . It means we can conceive of literally any
 thing
 as a thing - and so bring together, associate and compare immensely
 diverse objects such as, say, an amoeba, a bus, a car, a squid, a poem,
 a
 skyscraper, a box, a pencil, a fir tree, the number 1...
 
 Our thingy capacity makes us supremely adaptive. It means I can set
 you a
 creative problem like go and get me some *thing* to block this doorway
 [or
 hole] and you can indeed go and get any of a vastly diverse range of
 appropriate objects.
 
 How are we able to conceive of all these forms as things? Not by any
 rational means, I suggest, but by the imaginative means of drawing them
 all
 mentally or actually as similar adjustable gloops or blobs.
 
 Arnheim provides brilliant evidence for this:
 
 a young child in his drawings uses circular shapes to represent almost
 any
 object at all: a human figure, a house, a car, a book, and even the
 teeth of
 a saw, as can be seen in Fig x, a drawing by a five year old. It would
 be a
 mistake to say that the child neglects or misrepresents the shape of
 these
 objects. Only to adult eyes is he picturing them as round. Actually,
 intended roundness does not exist before other shapes, such as
 straightness
 or angularity are available to the child. At the stage when he begins
 to
 draw circles, shape is not yet differentiated. The circle does not
 stand for
 roundness but for the more general quality of thingness - that is,
 for the
 compactness of a solid object as distinguished from the nondescript
 ground.
 [Art and Visual Perception]
 

Even for things and objects the mathematics is inherent. There is
plurality, partitioning, grouping, attributes.. interrelatedness. Is a wisp
of smoke a thing, or a wave on the ocean, or a sound echoing through the
mountains. Is everything one big thing?

Perhaps creativity involves zeroing out from the precise definition of
things in order to make their interrelatedness less restricting. Can't
find a solution to those complex problems when you are stuck in all the
details, you can't' rationalize your way out of the rules as there may be a
non-local solution or connection that needs to be made. 

The young child is continuously exercising creativity as things are blobs or
circles and creativity combined with trial and error rationalizes things
into domains and rules...

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Should I get a PhD?

2008-12-19 Thread John G. Rose
Mike,

 

Exercising rational thinking need not force exposure of oneself into being
sequestered as a rationalist.  And utilizing creativity effectively requires
a context in some domain. The domain context typically involves application
of rationality. A temporary absence of creativity does not mean it is not
valued it just means that you have to instantiate creativity.  AGIers may
seem like strict rationalists and many are, but many are just instantiating
their creativity or putting pure creativity on the backburner. And the dying
culture that you are talking about is not true. There is a mass synthesis
going on...

 

Simplexity is an interesting concept related to this... Creativity and
rationality are not opposed, they typically are out of balance and each has
its own deadfalls. This subject is an old argument.  And when you split up
the two, creativity and rationality, your are over rationalizing them and
need to be more creative.

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, December 18, 2008 4:47 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Should I get a PhD?

 

Ben:I don't think there's any lack of creativity in the AGI world ... and I
think it's pretty clear that rationality and creativity work together in all
really good scientific work.Creativity is about coming up with new ideas.
Rationality is about validating ideas, and deriving their natural
consequences.  They're complementary, not contradictory, within a healthy
scientific thought process.

 

Ben,

 

I radically disagree. Human intelligence involves both creativity and
rationality, certainly.  But  rationality - and the rational systems  of
logic/maths and formal languages, [on which current AGI depends]  -  are
fundamentally *opposed* to creativity and the generation of new ideas.  What
I intend to demonstrate in a while is that just about everything that is bad
thinking from a rational POV is *good [or potentially good] thinking* from a
creative POV (and vice versa). To take a small example, logical fallacies
are indeed illogical and irrational - an example of rationally bad thinking.
But they are potentially good thinking from a creative POV -   useful
skills, for example, in a political spinmeister's art. (And you and Pei use
them a lot in arguing for your AGI's  :)).

 

As someone once said:

 

Creativity is the great mystery at the center of Western culture. We preach
order, science, logic and reason. But none of the great accomplishments of
science, logic and reason was actually achieved in a scientific, logical,
reasonable manner. Every single one must, instead, be attributed to the
strange, obscure and definitively irrational process of creative
inspiration. Logic and reason are indispensible in the working out ideas,
once they have arisen -- but the actual  conception of bold, original ideas
is something else entirely.

 

Who did say that? Oh yes, it was you :) in your book .

 

As I indicated, it would be better to continue this when I am ready to set
out a detailed argument. But for now, it wouldn't hurt to take away the
central idea that everything which is good for rationality and specialist
intelligence is in fact bad for,  or at any rate the inverse of,  creativity
and general intelligence, (and AGI). It's generally true. [Finding structure
and patterns, for example, which you and others make so much of, are
normally good only for rational, narrow AI - and *bad* for,,or the inverse
of,  creativity].

  _  


agi |  https://www.listbox.com/member/archive/303/=now Archives
https://www.listbox.com/member/archive/rss/303/ |
https://www.listbox.com/member/?;
 Modify Your Subscription

 http://www.listbox.com 

 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread John G. Rose
Top posted here:

Using your bricks to construct something, you have to construct it within
constraints. Constraints is the key word. Whatever bricks you are using
they have their own limiting properties. You CANNOT build anything anyway
you please. Just by defining bricks you are already applying rationalist
hand tying due to the fact that even your abstract bricks have a limiting
rationalist inducing structure... Maybe bricks are too rationalist, I want
to use gloops to build creative things that are impossible to build with
bricks.

John


 From: Mike Tintner [mailto:tint...@blueyonder.co.uk]
 
 P.S. To put the distinction in a really simple easy to visualise
 (though
 *not* formal) form:
 
 rationality and creativity can be seen as reasoning about how to put
 bricks
 together -  (from the metaphorical bricks of an argument to the literal
 bricks of a building)
 
 with rationality, you reason according to predetermined blueprints (or
 programs) of  buildings -  you infer that if this is a building in such
 and
 such a style, then this brick will have to go here and that brick will
 have
 to go there - everything follows. The bricks have to go together in
 certain
 ways. The links in any chain or structure of logical reasoning are
 rigid.
 
 with creativity, you reason *without* precise blueprints  -   you can
 put
 bricks together in any way you like, subject to the constraints that
 they
 must connect with and support each other.  -  and you start with only a
 rough idea, at best,  of the end result/ building you want. (Build me
 a
 skyscraper that's radically different from anything ever built),
 
 rationality in any given situation and with any given, rational
 problem, can
 have only one result.Convergent construction.
 
 creativity in any given situation and with any creative, non-rational
 problem,  can have an infinity of results. Divergent construction.
 
 Spot the difference?
 
 Rationality says bricks build brick buildings. It follows.
 
 Creativity says puh-lease, how boring. It may be rational and necessary
 on
 one level, but it's not necessary at all on a deeper level  With a
 possible
 infinity of ways to put bricks together, we can always build something
 radically different.
 
 http://www.cpluv.com/www/medias/Christophe/Christophe_4661b649bdc87.jpg
 
 (On the contrary, Pei, you can't get more narrow-minded than rational
 thinking. That's its strength and its weakness).
 
 You can't arrive at brick art or any art or any creative work or even
 the
 simplest form of everyday creativity by rationality/logic/deduction,
 induction , abduction, transduction et al. (What's the logical solution
 to
 freeing up bank lending right now? Or seducing that woman over there?
 Think
 about it.)
 
 AGI is about creativity. Building without blueprints. (Or hidden
 structures). Just rough ideas and outlines.
 
 



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread John G. Rose
 From: Trent Waddington [mailto:[EMAIL PROTECTED]
 
 On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
  I mean that people are free to decide if others feel pain. For
 example, a scientist may decide that a mouse does not feel pain when it
 is stuck in the eye with a needle (the standard way to draw blood) even
 though it squirms just like a human would. It is surprisingly easy to
 modify one's ethics to feel this way, as proven by the Milgram
 experiments and Nazi war crime trials.
 
 I'm sure you're not meaning to suggest that scientists commonly
 rationalize in this way, nor that they are all Nazi war criminals for
 experimenting on animals.
 
 I feel the need to remind people that animal rights is a fringe
 movement that does not represent the views of the majority.  We
 experiment on animals because the benefits, to humans, are considered
 worthwhile.
 

I like animals. And I like the idea of coming up with cures to diseases and
testing them on animals first. In college my biologist roommate protested
the torture of fruit flies. My son has starting playing video games where
you shoot, zapp and chemically immolate the opponent, so I need to explain
to him that those bad guys are not conscious...yet.

I don't know if there are guidelines. Humans, being the rulers of planet,
appear as godlike beings to other conscious inhabitants. That brings
responsibility. So when we start coming up with AI stuff in the lab that
attains certain levels of consciousness we have to know what consciousness
is in order to govern our behavior.

And naturally if some superintelligent space alien or rogue interstellar AI
encounters us and decides that we are a culinary delicacy and wants to grow
us enmass economically, we hope that some respect is given eh? 

Reminds me of hearing that some farms are experimenting with growing
chickens w/o heads. Animal rights may be more than just a fringe movement.
Kind of like Mike - http://en.wikipedia.org/wiki/Mike_the_Headless_Chicken

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:
 
 http://susaro.com/wp-
 content/uploads/2008/11/draft_consciousness_rpwl.pdf
 


Um... this is a model of consciousness. One way of looking at it.
Whether or not it is comprehensive enough, not sure, this irreducible
indeterminacy. But after reading the paper a couple times I get what you are
trying to describe. It's part of an essence of consciousness but not sure if
it enough.

Kind of reminds me of Curly's view of consciousness - I'm trying to think
but nothing happens!

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 Three things.
 
 
 First, David Chalmers is considered one of the world's foremost
 researchers in the consciousness field (he is certainly now the most
 celebrated).  He has read the argument presented in my paper, and he
 has
 discussed it with me.  He understood all of it, and he does not share
 any of your concerns, nor anything remotely like your concerns.  He had
 one single reservation, on a technical point, but when I explained my
 answer, he thought it interesting and novel, and possibly quite valid.
 
 Second, the remainder of your comments below are not coherent enough to
 be answerable, and it is not my job to walk you through the basics of
 this field.
 
 Third, about your digression:  gravity does not escape from black
 holes, because gravity is just the curvature of spacetime.  The other
 things that cannot escape from black holes are not forces.
 
 I will not be replying to any further messages from you because you are
 wasting my time.
 
 

I read this paper several times and still have trouble holding the model
that you describe in my head as it fades quickly and then there is a just a
memory of it (recursive ADD?). I'm not up on the latest consciousness
research but still somewhat understand what is going on there. Your paper is
a nice and terse description but to get others to understand the highlighted
entity that you are trying to describe may be easier done with more
diagrams. When I kind of got it for a second it did appear quantitative,
like mathematically describable. I find it hard to believe though that
others have not put it this way, I mean doesn't Hofstadter talk about this
in his books, in an unacademical fashion?
 
Also Edward's critique is very well expressed and thoughtful. Just blowing
him off like that is undeserving.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread John G. Rose
 From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
 On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
 there are many computer systems now, domain specific intelligent ones
 where their life is more
 important than mine. Some would say that the battle is already lost.
 
 For now, it's not really your life (or interest) vs the system's life
 (or interest). It's rather your life (or interest) vs lives (or
 interests) of people the system protects/supports. Our machines still
 work for humans. At least it still seems to be the case ;-)). If we
 are stupid enough to develop very powerful machines without equally
 powerful safety controls then we (just like many other species) are
 due for extinction for adaptability limitations.
 

It is where the interests of others is more valuable than an individual's
life. Ancient Rome had the entertainment interests of the masses at a higher
value than those being devoured by lions in the arena. I would say that
computers and machines interests today in many cases now are of similar
relational circumstances in some cases.

Our herd mentality makes it easy for rights to be taken away and at the same
time it is accepted and defended as necessary and an improvement. Example -
anonymity and privacy = gone. Sounds paranoiacal but there are many that
agree on this.

It is an icky subject, easy to ignore, and perhaps something that hinders
technological progression.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
 From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
 On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
  is it really necessary for an AGI to be conscious?
 
 Depends on how you define it. If you think it's about feelings/qualia
 then - no - you don't need that [potentially dangerous] crap + we
 don't know how to implement it anyway.
 If you view it as high-level built-in response mechanism (which is
 supported by feelings in our brain but can/should be done differently
 in AGI) then yes - you practically (but not necessarily theoretically)
 need something like that for performance. If you are concerned about
 self-awareness/consciousness then note that AGI can demonstrate
 general problem solving without knowing anything about itself (and
 about many other particular concepts). The AGI just should be able to
 learn new concepts (including self), though I think some built-in
 support makes sense in this particular case. BTW for the purpose of my
 AGI RD I defined self-awareness as a use of an internal
 representation (IR) of self, where the IR is linked to real features
 of the system. Nothing terribly complicated or mysterious about that.
 

Yes, I agree that problem solving can be performed without self-awareness
and I believe that actions involving rich intelligence need not require
consciousness. But yes it all depends on how you define consciousness. It
can be argued that a rock is conscious.

 Doesn't that complicate things?
 
 it does
 
  Shouldn't the machines/computers be slaves to man?
 
 They should and it shouldn't be viewed negatively. It's nothing more
 than a smart tool. Changing that would be a big mistake IMO.

Yup when you need to scuttle the spaceship and HAL is having issues with
that uhm it would be better for HAL to understand that he is expendable.
Though there are AGI applications that would involve humans building close
interpersonal relationships for various reasons. I mean having that AGI
psychotherapist could be useful :) And advanced post-Singularity AGI
applications, yes, I suppose machine consciousness and consciousness
uploading and mixing, ya, in the meantime though for pre-Singularity design
and study I don't see machine consciousness as required, human equiv that
is. Though I do have a fuzzy view of how I would design a consciousness.

 
 Or will they be equal/superior.
 
 Rocks are superior to us in being hard. Cars are superior to us when
 it comes to running fast. AGIs will be superior to us when it comes to
 problem solving.
 So what? Equal/superior in whatever - who cares as long as we can
 progress  safely enjoy life - which is what our tools (including AGI)
 are being designed to help us with.
 

Superior meaning - if it was me or AGI-X due to limited resources does AGI-X
get to live and I am expendable. Unfortunately there are many computer
systems now, domain specific intelligent ones where their life is more
important than mine. Some would say that the battle is already lost.

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
  I thought what he said was a good description more or less. Out of
 600
  millions years there may be only a fraction of that which is an
 improvement
  but it's still there.
 
  How do you know, beyond a reasonable doubt, that any other being is
  conscious?
 
 The problem is, you have to nail down exactly what you *mean* by the
 word conscious before you start asking questions or making
 statements.
   Once you start reading about and thinking about all the attempts that
 have been made to get specific about it, some interesting new answers
 to
 simple questions like this begin to emerge.
 
 What I am fighting here is a tendency for some people to use
 wave-of-the-hand definitions that only capture a fraction of a percent
 of the real meaning of the term.  And sometimes not even that.
 


I see consciousness as a handle to a system. Consciousness is and is not a
unit. Being a system it has components. And the word consciousness may be
semi-inclusive or over-inclusive. As well consciousness can be descripted as
an ether type thing also but consciousness as a system is more applicable
here I think.

I would be interested in how one goes about proving that another being is
conscious. I can imagine definitions of consciousness that would prove that.
Somehow though the mystery is worthy of perpetuation. 



 One of the main conclusions of the paper I am writing now is that you
 will (almost certainly) have no choice in the matter, because a
 sufficiently powerful type of AGI will be conscious whether you like it
 or not.
 

Uhm what is sufficiently mean here? Consciousness may require some
intelligence but I think that intelligence need only possess absolute
minimalistic consciousness. 

Definitions, definitions. Is there someone who has come up with a
consciousness system described quantitatively instead of just fussy word
descriptions?


 The question of slavery is completely orthogonal.

Yes and no. It's related.

 
  I just want things to be taken care of and no issues. Consciousness
 brings
  issues. Intelligence and consciousness are separate.
 
 
 Back to my first paragraph above:  until you have thought carefully
 about what you mean by consciousness, and have figured out where it
 comes from, you can't really make a definitive statement like that,
 surely?
 

Have deeply thought about it. They are not mutually exclusive nor mostly the
same. With both I assume calculations involving resource processing and
space time dynamics. Consciousness needs to be broken up into different
kinds of consciousness with interrelatedness between. Intelligence has less
complexity than consciousness. It is a semi-system. Consciousness can be
evoked using intelligence. Intelligence can be spurred with consciousness.
They both interoperate but intelligence can be distilled out of an existing
conscio-intelligence. And they can facilitate each other yet hinder each
other.

We'd really have to get into the math to get commitant on it.

 And besides, the wanting to have things taken care of bit is a separate
 issue.  That is not a problem, either way.

Heh.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 John LaMuth wrote:
  Reality check ***
 
  Consciousness is an emergent spectrum of subjectivity spanning 600
 mill.
  years of
  evolution involving mega-trillions of competing organisms, probably
  selecting
  for obscure quantum effects/efficiencies
 
  Our puny engineering/coding efforts could never approach this - not
 even
  in a million years.
 
  An outwardly pragmatic language simulation, however, is very do-able.
 
  John LaMuth
 
 It is not.
 
 And we can.
 

I thought what he said was a good description more or less. Out of 600
millions years there may be only a fraction of that which is an improvement
but it's still there.

How do you know, beyond a reasonable doubt, that any other being is
conscious? 

At some point you have to trust that others are conscious, in the same
species, you bring them into your recursive loop of consciousness
component mix.

A primary component of consciousness is a self definition. Conscious
experience is unique to the possessor. It is more than a belief that the
possessor herself is conscious but others who appear conscious may be just
that, appearing to be conscious. Though at some point there is enough
feedback between individuals and/or a group to share consciousness
experience.

Still though, is it really necessary for an AGI to be conscious? Except for
delivering warm fuzzies to the creators? Doesn't that complicate things?
Shouldn't the machines/computers be slaves to man? Or will they be
equal/superior. It's a dog-eat-dog world out there.

I just want things to be taken care of and no issues. Consciousness brings
issues. Intelligence and consciousness are separate.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-11-03 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 True, we can't explain why the human brain needs 10^15 synapses to
 store 10^9 bits of long term memory (Landauer's estimate). Typical
 neural networks store 0.15 to 0.25 bits per synapse.
 

This study - 
http://www.cogsci.rpi.edu/CSJarchive/1986v10/i04/p0477p0493/MAIN.PDF

is just throwing a dart at the wall. You'd need something more real life
instead of word and picture recall calculations to arrive at a number even
close to actual.

 I estimate a language model with 10^9 bits of complexity could be
 implemented using 10^9 to 10^10 synapses. However, time complexity is
 hard to estimate. A naive implementation would need around 10^18 to
 10^19 operations to train on 1 GB of text. However this could be sped
 up significantly if only a small fraction of neurons are active at any
 time.
 
 Just looking at the speed/memory/accuracy tradeoffs of various models
 at http://cs.fit.edu/~mmahoney/compression/text.html (the 2 graphs
 below the main table), it seems that memory is more of a limitation
 than CPU speed. A real time language model would be allowed 10-20
 years.
 

I'm sorry, what are those 2 graphs indicating? To get a smaller compressed
size more running memory is needed? That y-axis is a compressor runtime
memory limit specified by a command line switch or is it just what the
compressor consumes for the data to be compressed?

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] the universe is computable [Was: Occam's Razor and its abuse]

2008-11-02 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  You can't compute the universe within this universe
  because the computation
  would have to include itself.
 
 Exactly. That is why our model of physics must be probabilistic
 (quantum mechanics).
 

I'd venture to say that ANY computation is an estimation unless the
computation is itself. To compute the universe you could estimate it but
that computation is an estimation unless the computation is the universe.
Thus the universe itself IS an exact computation just as a chair for example
is an exact computation existing uniquely as itself. Any other computation
of that chair is an estimation.

IOW a computation is itself unless it is an approximation of something else,
it's somewhere between being partially exact or a partially exact
anti-representation. A computation mimicking another same computation would
be partially exact taking time and space into account.

Though there may be some subatomic symmetric simultaneity that violates what
I'm saying above not sure.

Also it's early in the morning and I'm actually just blabbing here so this
all may be relatively inexact :)

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-11-02 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote:
 
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
   Cloud computing is compatible with my proposal for distributed AGI.
   It's just not big enough. I would need 10^10 processors, each 10^3
 to
   10^6 times more powerful than a PC.
  
 
  The only thing we have that come close to those numbers are
  insect brains.
  Maybe something can be biogenetically engineered :) Somehow
  wire billions of
  insect brains together modified in such a way that they are
  peer 2 peer and
  emerge a greater intelligence :)
 
 Or molecular computing. The Earth has about 10^37 bits of data encoded
 in DNA*. Evolution executes a parallel algorithm that runs at 10^33
 operations per second**. This far exceeds the 10^25 bits of memory and
 10^27 OPS needed to simulate all the human brains on Earth as neural
 networks***.
 
 *Human DNA has 6 x 10^9 base pairs (diploid count) at 2 bits each ~
 10^10 bits. The human body has ~ 10^14 cells = 10^24 bits. There are ~
 10^10 humans ~ 10^34 bits. Humans make up 0.1% of the biomass ~ 10^37
 bits.
 
 **Cell replication ranges from 20 minutes in bacteria to ~ 1 year in
 human tissue. Assume 10^-4 replications per second on average ~ 10^33
 OPS. The figure would be much higher if you include RNA and protein
 synthesis.
 
 ***Assume 10^15 synapses per brain at 1 bit each and 10 ms resolution
 times 10^10 humans.
 


I agree on the molecular computing. The resources are there. Not sure though
how one would go about calculating the evolution parallel algorithm OPS, it
would be different than just cell reproduction magnitude.

Still though I don't agree on your initial numbers estimate for AGI. A bit
high perhaps? Your numbers may be able to be trimmed down based on refined
assumptions.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
 From: Russell Wallace [mailto:[EMAIL PROTECTED]
 On Thu, Oct 30, 2008 at 6:45 AM,  [EMAIL PROTECTED] wrote:
  It sure seems to me that the availability of cloud computing is
 valuable
  to the AGI project.  There are some claims that maybe intelligent
 programs
  are still waiting on sufficient computer power, but with something
 like
  this, anybody who really thinks that and has some real software in
 mind
  has no excuse.  They can get whatever cpu horsepower they need, I'm
 pretty
  sure even to the theoretical levels predicted by, say, Moravec and
  Kurzweil.  It takes away that particular excuse.
 
 Indeed, that's been the most important effect of computing power
 limitations. It's not that we've ever been able to say this program
 would do great things, if only we had the hardware to run it. It's
 that we learn to flinch away from the good designs, the workable
 approaches, because they won't fit on the single cheap beige box we
 have on our desks. The key benefit of cloud computing is one that can
 be had before the first line of code is written: don't think in terms
 of how your design will run on one box, think in terms of how it will
 run on 10,000.
 

My suspicion though is that say you had 100 physical servers and then 100
physical cloud servers. You could hand tailor your distributed application
so that it is extremely more efficient not running on the cloud substrate.
Even if you took the grid substrate that the cloud is running on and hand
tweaked your app to utilize that I suspect that it would still be way less
efficient than a 100% native written.

The advantage of using cloud or grid substrate is that it makes writing the
application much easier. Hand coded distributed applications take a
particular expertise to develop. Eliminating that helps from a bootstrap
perspective.

Also when you have control over your server you can manipulate topology. It
is possible to enhance inter-server communication by creating custom
physical and virtual network topology.

I assume as grid and cloud computing matures the software substrate will
become more efficient and adaptable to the application. To be sure though on
the efficiencies, some tests would need to be run. Unless someone here
understands cloud/grid enough to know what the deal is or has already run
tests.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
 From: Ben Goertzel [mailto:[EMAIL PROTECTED]
 Sent: Thursday, October 30, 2008 9:18 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Cloud Intelligence
 
 
 Unless you are going to hand-wire some special processor-to-processor
 interconnect fabric, this seems probably not to be true...
 
 ben g
 On Thu, Oct 30, 2008 at 11:15 AM, Russell Wallace
 [EMAIL PROTECTED] wrote:
 On Thu, Oct 30, 2008 at 3:07 PM, John G. Rose [EMAIL PROTECTED]
 wrote:
  My suspicion though is that say you had 100 physical servers and then
 100
  physical cloud servers. You could hand tailor your distributed
 application
  so that it is extremely more efficient not running on the cloud
 substrate.
 Why would you suspect that? My understanding of cloud computing is
 that the servers are perfectly ordinary Linux boxes, with perfectly
 ordinary network connections, it's just that you rent them instead of
 buying them.
 

Not talking custom hardware, when you take your existing app and apply it to
the distributed resource and network topology (your 100 servers) you can
structure it to maximize its execution reward. And the design of the app
should take the topology into account. Just creating an app and uploading it
to a cloud and assuming the cloud will be smart enough to figure it out?
There's gonna be layers there man and resource task switching with other
customers.

Cloud substrate software is probably good but not that good.

You could understand how the cloud processes and structure your app towards
that. I have no idea how these clouds are implemented.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
 From: Russell Wallace [mailto:[EMAIL PROTECTED]
 On Thu, Oct 30, 2008 at 3:42 PM, John G. Rose [EMAIL PROTECTED]
 wrote:
  Not talking custom hardware, when you take your existing app and
 apply it to
  the distributed resource and network topology (your 100 servers) you
 can
  structure it to maximize its execution reward. And the design of
 the app
  should take the topology into account.
 
 That would be a very bad idea, even if there were no such thing as
 cloud computing. Even if there was a significant efficiency gain to be
 had that way (which there isn't, in the usual scenario where you're
 talking about ethernet not some custom grid fabric), as soon as the
 next hardware purchase comes along, the design over which you sweated
 so hard is now useless or worse than useless.
 

No, you don't lock it into an instance in time. You make it selectively
scalable. 

When your app or your application's resources span more than one machine you
need to organize that. The choice on how you do so effects execution
efficiency. You could have an app now that needs 10 machines to run and 5
years from now will run on one machine yes. That is true. 

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 Cloud computing is compatible with my proposal for distributed AGI.
 It's just not big enough. I would need 10^10 processors, each 10^3 to
 10^6 times more powerful than a PC.
 

The only thing we have that come close to those numbers are insect brains.
Maybe something can be biogenetically engineered :) Somehow wire billions of
insect brains together modified in such a way that they are peer 2 peer and
emerge a greater intelligence :)

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] the universe is computable [Was: Occam's Razor and its abuse]

2008-10-30 Thread John G. Rose
You can't compute the universe within this universe because the computation
would have to include itself.

Also there's not enough energy to power the computation.

But if the universe is not what we think it is, perhaps it is computable
since all kinds of assumptions are made about it, structurally and so forth.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Cloud Intelligence

2008-10-29 Thread John G. Rose
 From: Bob Mottram [mailto:[EMAIL PROTECTED]
 Beware of putting too much stuff into the cloud.  Especially in the
 current economic climate clouds could disappear without notice (i.e.
 unrecoverable data loss).  Also, depending upon terms and conditions
 any data which you put into the cloud may not legally be owned by you,
 even if you created it.
 

For private commercial clouds this is true. But imagine a public
self-healing cloud where it is somewhat self-regulated and self-organized.
Though commercial clouds could have some sort of inter-cloud virtual
backbone that they subscribe to. So Company A goes bankrupt but it's cloud
is offloaded into the backbone and absorbed by another cloud. Micro payments
migrate with the cloud. Ya right like that could ever happen.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] On programming languages

2008-10-25 Thread John G. Rose
 From: Ben Goertzel [mailto:[EMAIL PROTECTED]
 
 Somewhat similarly, I've done coding on Windows before, but I dislike
 the operating system quite a lot, so in general I try to avoid any
 projects where I have to use it.
 
 However, if I found some AGI project that I thought were more promising
 than OpenCog/Novamente on the level of algorithms, philosophy-of-mind
 and structures ... and, egads, this project ran only on Windows ... I
 would certainly not hesitate to join that project, even though my
 feeling is that any serious large-scale software project based
 exclusively on Windows is going to be seriously impaired by its OS
 choice...
 
 In short, I just don't think these issues are **all that** important.
 They're important, but having the right AGI design is far, far more so.
 
 People seem to debate programming languages and OS's endlessly, and
 this list is no exception.  There are smart people on multiple sides of
 these debates.  To make progress on AGI, you  just gotta make *some*
 reasonable choice and start building ... there's no choice that's going
 to please everyone, since this stuff is so contentious...
 
 


Programming languages - people have their own particulars - standards are
interrelating. XML for example. 

Uhm math is the ultimate standard. Can you think of a better one? English is
not standard. NLP is a gluttonous mix of rigarmarole. So Lojban... ya... the
symbolistic expenditure has to be defined. So a map to theoretical Language
A from X-lish or whatever. Kind of like music rewritten for more of a
distance approach from humanistic portrayal.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-20 Thread John G. Rose
Just an idea - not sure if it would work or not - 3 lists: [AGI-1], [AGI-2],
[AGI-3]. Sub-content is determined by the posters themselves. Same amount of
emails initially but partitioned up.

Wonder what would happen?

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Re: Defining AGI

2008-10-17 Thread John G. Rose
 From: Ben Goertzel [mailto:[EMAIL PROTECTED]
 
 As Ben has pointed out language understanding is useful to teach AGI.
 But if
 we use the domain of mathematics we can teach AGI by formal expressions
 more
 easily and we understand these expressions as well.
 
 - Matthias
 
 
 That is not clear -- no human has learned math that way.
 
 We learn math via a combination of math, human language, and physical
 metaphors...
 
 And, the specific region of math-space that humans have explored, is
 strongly biased toward those kinds of math that can be understood via
 analogy to physical and linguistic experience
 
 I suggest that the best way for humans to teach an AGI math is via
 first giving that AGI embodied, linguistic experience ;-)
 
 See Lakoff and Nunez, Where Mathematics Comes From, for related
 arguments.
 

That's one of the few books that I have purchased. It's good for showing the
human experience with math, how our version of math is like a scratchpad of
systematic conceptual analogies. Fine. But the book doesn't really open it
up it kinda just talks about math being a product of us. I wanted more.

But the collection of math created by humans over time is the best that
evolution has to offer. One human mind can barely comprehend a small subset
of it. When you talk about teaching an AGI math it throws me off because
using our math from the getgo, giving the AGI that to start off with, IMO,
throws it over an initial humongous computational energy threshold that it
would have to get around someway or another if it didn't start out with it.

I suppose you may be building a substrate than can be taught math and then
take it from there. The substrate being AGI hypergraphs and operating agents
on them. Why not build the math into the substrate from the start? 

Teaching AGI sounds so laborious, it should just learn. Unless you are
talking about RSI'ing the math into the core

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] NEWS: Scientist develops programme to understand alien languages

2008-10-17 Thread John G. Rose
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 
 ... even an alien language far removed from any on Earth is likely to
 have recognisable patterns that could help reveal how intelligent the
 life forms are.
 

This is true unless the alien life form existed in mostly order and
communicated via the absence of order, IOW it's language evolved towards
randomness. Then we might have difficulties understanding it due to
computational expense. Kind of like a natural encryption.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread John G. Rose
This is cool it's kind of like a combo of Omni, a desktop publishing fanzine
with 3DSMax cover page, and randomly gathered techno tidbits all
encapsulated in a secure PDF.

The skin phone is neat and the super imposition eye contact lens by U-Dub
has value. I wonder where they got that idea from, superimposition glasses
may be easier to interface with, I mean how do you get a wireless processor
and antennae into the lens?

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-16 Thread John G. Rose
 From: Eric Burton [mailto:[EMAIL PROTECTED]
 
 Honestly, if the idea is to wave our hands at one another's ideas then
 let's at least see something on the table. I'm happy to discuss my
 work with natural language parsing and mood evaluation for
 low-bandwidth human mimicry, for instance, because it has amounted to
 thousands of lines of occasionally-fungible code thus far. It's not on
 sourceforge because it's still a mess but I'll pastebin it if you ask.

What's the gist of the code? Sounds like chat-bot but I just know there is
more to it.

 
 I don't understand how people wallow in their theories for so long
 that they become a matter of dogma, with the need for proof removed,
 and the urgency of producing and testing an implementation subverted
 by smugness and egotism. The people here worth listening to don't have
 to make excuses. They can show their work.

True though. But if your theory is good enough the first person usually sold
on it is yourself. And then you must become an ardent follower.

 
 I see a lot of evasiveness and circular arguments going on, where
 people are seeking some kind of theoretical high-ground without giving
 away anything that could bolster another theory. It's time-wastingly
 self-interested. We won't achieve consensus through half-explained
 denials and reversals. This list isn't a battle of theorems for
 supremacy. It is for collaboration.

Yep. Inter connecting at knowledge junctions could be conducive to more
civil collaborative effort. At some point compromises must be made and hands
shaken. Minds melded instead of heads banged :)

 
 My 2 cents. The internet archive seems to have shed about half the
 material I produced since the nineties, so I do apologize for being so
 pissed off _

Did the global brain forget the low latency long term memory of you? Perhaps
it's just compressed off into some lower latency subsystem.

There has to be more than one internet archive, honestly. The existing one
does have it's shortcomings.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: AW: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-16 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 In my opinion, the domain of software development is far too ambitious
 for
 the first AGI.
 Software development is not a closed domain. The AGI will need at least
 knowledge about the domain of the problems for which the AGI shall
 write a
 program.
 
 The English interface is nice but today it is just a dream. An English
 interface is not needed for a proof of concept for first AGI. So why to
 make
 the problem harder as it already is?
 

English is just the gel that the knowledge is embedded in. Sorting out that
format is bang for the buck. And it is just symbology as math is symbology,
or representation.

 The domain of mathematics is closed but can be extended by adding more
 and
 more definitions and axioms which are very compact. The interface could
 be
 very simple. And thus you can mainly concentrate to build the kernel
 AGI
 algorithm.
 

Mathematics effectively is just another gel that the knowledge is stored in.
It's a representation of (some other wierd physics stuff that I won't bring
up). I think I can say that, that math is just an instantiation of something
other. Unless the actual math symbology is the math and not what it
represents.

Either way, all will be represented in binary, for software or an
electronics based AGI. How can you get away from the coupling of math and
software? Unless there is some REAL special sauce like some analog based
hyperbrid.

Loosemore would say that the coupling breaks somehow at complexity regions.
And I think that the representation of reality has to include those. 

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread John G. Rose
 From: Ben Goertzel [mailto:[EMAIL PROTECTED]
 
 One possibility would be to more narrowly focus this list, specifically
 on **how to make AGI work**.
 
 
 Potentially, there could be another list, something like agi-
 philosophy, devoted to philosophical and weird-physics and other
 discussions about whether AGI is possible or not.  I am not sure
 whether I feel like running that other list ... and even if I ran it, I
 might not bother to read it very often.  I'm interested in new,
 substantial ideas related to the in-principle possibility of AGI, but
 not interested at all in endless philosophical arguments over various
 peoples' intuitions in this regard.
 

I'd go for 2 lists. Sometimes after working intensely on something concrete
and specific one wants to step back and theorize. And then particular AGI
approaches may be going down the wrong trail and need to step back and look
at things from a different perspective.

Also there are probably many people that wish to speak up on various topics
but are silent due to them not wanting to clutter the main AGI list. I would
guess that there are some valuable contributions that need to be made but
are not directly related to some particular well-defined applicable subject.

You could almost do 3, AGI engineering, science and philosophy. We are all
well aware of the philosophical directions the list takes though I see the
science and engineering getting a bit too intertwined as well. Although with
this sort of thing it's hard to avoid.

Even so, with all this the messages in the one list still are grouped by
subject... I mean people can parse. But to simplify moderation and
organization, etc..

John 



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread John G. Rose
 From: BillK [mailto:[EMAIL PROTECTED]
 
 I agree. I support more type 1 discussions.
 
 I have felt for some time that an awful lot of time-wasting has been
 going on here.
 
 I think this list should mostly be for computer tech discussion about
 methods of achieving specific results on the path(s) to AGI.
 
 I agree that there should be a place for philosophical discussion,
 either on a separate list, or uniquely identified in the Subject so
 that technicians can filter off such discussions.
 
 Some people may need to discuss philosophic alternative paths to AGI,
 to help clarify their thoughts. But if so, they are probably many
 years away from producing working code and might be hindering others
 who are further down the path of their own design.
 
 Two lists are probably best. Then if technicians want a break from
 coding, they can dip into the philosophy list, to offer advice or
 maybe find new ideas to play with.
 And, as John said.  it would save on moderation time.
 
 

Yes and someone else could be moderator for type 2 list, someone could be
nominated. Then Ben could be the super mod and reign in when he has a bad
day :)

I nominate Tinter. Just kidding.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread John G. Rose
 From: Terren Suydam [mailto:[EMAIL PROTECTED]
 
 This is a publicly accessible forum with searchable archives... you
 don't necessarily have to be subscribed and inundated to find those
 nuggets. I don't know any funding decision makers myself, but if I were
 in control of a budget I'd be using every resource at my disposal to
 clarify my decision. If I were considering Novamente for example I'd be
 looking for exactly the kind of exchanges you and Richard Loosemore
 (for example) have had on the list, to gain a better understanding of
 possible criticism, and because others may be able to articulate such
 criticism far better than me.  Obviously the same goes for anyone else
 on the list who would look for funding... I'd want to see you defend
 your ideas, especially in the absence of peer-reviewed journals
 (something the JAGI hopes to remedy obv).
 

Unfortunately there's going to be funding thrown at AGI that has nothing to
do with any sort of great theory or concrete engineering plans. Software and
technology funding many times doesn't work that way. It's rather arbitrary.
I hope the right people get the right opportunities.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: [agi] Dangerous Knowledge - Update

2008-10-01 Thread John G. Rose
 From: Brad Paulsen [mailto:[EMAIL PROTECTED]
 
 Sorry, but in my drug-addled state I gave the wrong URI for the
 Dangerous
 Knowledge videos on YouTube.  The one I gave was just to the first part
 of
 the Cantor segment.  All of the segments can be reached from the link
 below.  You can recreate this link by searching, in YouTube, on the key
 words Dangerous Knowledge.
 
 http://www.youtube.com/results?search_query=Dangerous+Knowledgesearch_t
 ype=aq=-1oq=
 

Just watched this video and I like the latter end of part 7 where they show 
Godel's normal neat paperwork and then the devoid sketchy papers where he was 
trying to figure out the continuum hypothesis. And then in his study when his 
hands started getting all stretched out and warped.

Let this be a lesson to people, working on the continuum hypothesis, 
incompleteness and potentially even AGI is dangerous to your health and could 
result in insanity or death. This should only be performed by qualified and 
highly trained individuals, unless of course you make a pact with Faust.

John





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com


RE: [agi] Artificial humor

2008-09-11 Thread John G. Rose
 From: John LaMuth [mailto:[EMAIL PROTECTED]
 
 As I have previously written, this issue boils down as one is serious
 or
 one is not to be taken this way a meta-order perspective)... the key
 feature in humor and comedy -- the meta-message being don't take me
 seriously
 
 That is why I segregated analogical humor seperately (from routine
 seriousness) in my 2nd US patent 7236963
 www.emotionchip.net
 
 This specialized meta-order-type of disqualification is built directly
 into
 the AGI schematics ...
 
 I realize that proprietary patents have acquired a bad cachet, but
 should
 not necessarily be ignored 
 

Nice patent. I can just imagine the look on the patent clerk's face when
that one came across the desk.

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


RE: Language modeling (was Re: [agi] draft for comment)

2008-09-08 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  From: John G. Rose [EMAIL PROTECTED]
  Subject: RE: Language modeling (was Re: [agi] draft for comment)
  To: agi@v2.listbox.com
  Date: Sunday, September 7, 2008, 9:15 AM
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  
   --- On Sat, 9/6/08, John G. Rose
  [EMAIL PROTECTED] wrote:
  
Compression in itself has the overriding goal of
  reducing
storage bits.
  
   Not the way I use it. The goal is to predict what the
  environment will
   do next. Lossless compression is a way of measuring
  how well we are
   doing.
  
 
  Predicting the environment in order to determine which data
  to pack where,
  thus achieving higher compression ratio. Or compression as
  an integral part
  of prediction? Some types of prediction are inherently
  compressed I suppose.
 
 Predicting the environment to maximize reward. Hutter proved that
 universal intelligence is a compression problem. The optimal behavior of
 an AIXI agent is to guess the shortest program consistent with
 observation so far. That's algorithmic compression.
 

Oh I see. Guessing shortest program = compression. OK right. But yeah like
Pei said the word compression is misleading. It implies a reduction where
you are actually increasing understanding :)

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 --- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote:
 
  Compression in itself has the overriding goal of reducing
  storage bits.
 
 Not the way I use it. The goal is to predict what the environment will
 do next. Lossless compression is a way of measuring how well we are
 doing.
 

Predicting the environment in order to determine which data to pack where,
thus achieving higher compression ratio. Or compression as an integral part
of prediction? Some types of prediction are inherently compressed I suppose.


John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread John G. Rose
Thinking out loud here as I find the relationship between compression and
intelligence interesting:

Compression in itself has the overriding goal of reducing storage bits.
Intelligence has coincidental compression. There is resource management
there. But I do think that it is not ONLY coincidental. Knowledge has
structure which can be organized and naturally can collapse into a lower
complexity storage state. Things have order, based on physics and other
mathematical relationships. The relationship between compression and stored
knowledge and intelligence is intriguing. But knowledge can be compressed
inefficiently to where it inhibits extraction and other operations so there
are differences with compression and intelligence related to computational
expense. Optimal intelligence would have a variational compression structure
IOW some stuff needs fast access time with minimal decompression resource
expenditure and other stuff has high storage priority but computational
expense and access time are not a priority.

And then when you say the word compression there is a complicity of utility.
The result of a compressor that has general intelligence still has a goal of
reducing storage bits. I think that compression can be a byproduct of the
stored knowledge created by a general intelligence. But if you have a
compressor with general intelligence built in and you assign it a goal of
taking input data and reducing the storage space it still may result in a
series of hacks because that may be the best way of accomplishing that goal.


Sure there may be some new undiscovered hacks that require general
intelligence to uncover. And a compressor that is generally intelligent may
produce more rich lossily compressed data from varied sources. The best
lossy compressor is probably generally intelligent. They are very similar as
you indicate... but when you start getting real lossy, when you start asking
questions from your lossy compressed data that are not related to just the
uncompressed input there is a difference there. Compression itself is just
one dimensional. Intelligence is multi. 

John 



 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Friday, September 05, 2008 6:39 PM
 To: agi@v2.listbox.com
 Subject: Re: Language modeling (was Re: [agi] draft for comment)
 
 --- On Fri, 9/5/08, Pei Wang [EMAIL PROTECTED] wrote:
 
  Like to many existing AI works, my disagreement with you is
  not that
  much on the solution you proposed (I can see the value),
  but on the
  problem you specified as the goal of AI. For example, I
  have no doubt
  about the theoretical and practical values of compression,
  but don't
  think it has much to do with intelligence.
 
 In http://cs.fit.edu/~mmahoney/compression/rationale.html I explain why
 text compression is an AI problem. To summarize, if you know the
 probability distribution of text, then you can compute P(A|Q) for any
 question Q and answer A to pass the Turing test. Compression allows you
 to precisely measure the accuracy of your estimate of P. Compression
 (actually, word perplexity) has been used since the early 1990's to
 measure the quality of language models for speech recognition, since it
 correlates well with word error rate.
 
 The purpose of this work is not to solve general intelligence, such as
 the universal intelligence proposed by Legg and Hutter [1]. That is not
 computable, so you have to make some arbitrary choice with regard to
 test environments about what problems you are going to solve. I believe
 the goal of AGI should be to do useful work for humans, so I am making a
 not so arbitrary choice to solve a problem that is central to what most
 people regard as useful intelligence.
 
 I had hoped that my work would lead to an elegant theory of AI, but that
 hasn't been the case. Rather, the best compression programs were
 developed as a series of thousands of hacks and tweaks, e.g. change a 4
 to a 5 because it gives 0.002% better compression on the benchmark. The
 result is an opaque mess. I guess I should have seen it coming, since it
 is predicted by information theory (e.g. [2]).
 
 Nevertheless the architectures of the best text compressors are
 consistent with cognitive development models, i.e. phoneme (or letter)
 sequences - lexical - semantics - syntax, which are themselves
 consistent with layered neural architectures. I already described a
 neural semantic model in my last post. I also did work supporting
 Hutchens and Alder showing that lexical models can be learned from n-
 gram statistics, consistent with the observation that babies learn the
 rules for segmenting continuous speech before they learn any words [3].
 
 I agree it should also be clear that semantics is learned before
 grammar, contrary to the way artificial languages are processed. Grammar
 requires semantics, but not the other way around. Search engines work
 using semantics only. Yet we cannot parse sentences like I ate pizza
 with Bob, I 

RE: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread John G. Rose
 From: Harry Chesley [mailto:[EMAIL PROTECTED]
 
 Searle's Chinese Room argument is one of those things that makes me
 wonder if I'm living in the same (real or virtual) reality as everyone
 else. Everyone seems to take it very seriously, but to me, it seems like
 a transparently meaningless argument.
 

I think that the Chinese Room argument is an AI philosophical anachronistic
meme that is embedded in the AI community and promulgated by monotonous
drone-like repetitivity. Whenever I hear it I'm like let me go read up on
that for the n'th time and after reading I'm like WTF are they talking
about!?!? Is that one the grand philosophical hang-ups in AI thinking?

I wish I had a mega-meme expulsion cannon and could expunge that mental knot
of twisted AI arterialsclerosis.

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: [agi] Any further comments from lurkers??? [WAS do we need a stronger politeness code on this list?]

2008-08-03 Thread John G. Rose
Well, even though there was bloodshed, Edward was right on slamming Richard
on the complex systems issue.  This issue needs to be vetted, sorted out,
either laid to rest or incorporated into other's ideas. Perhaps in some of
the scientist's minds it has been laid to rest. In my mind it is there,
nagging me for further inspection in hopes of good riddance or some serious
code writing.

 

As far as politeness, yeah people need to be civil, but passionate at the
same time. I always wondered about harsh invective behavior and then a
friend of mine asked me Why do cactus have thorns? and that answered the
question.

 

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


[agi]

2008-07-21 Thread John G. Rose
In-Reply-To: [EMAIL PROTECTED]
Subject: RE: [agi] Patterns and Automata
Date: Mon, 21 Jul 2008 17:26:43 -0600
Message-ID: [EMAIL PROTECTED]
MIME-Version: 1.0
Content-Type: text/plain;
charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Office Outlook 12.0
Thread-Index: Acjqd2/upUdwObXiT+Wx0QccpzkjsgBCQ2zQ
Content-Language: en-us

Well I have lots and lots of related mathematics paper references covering
parts and pieces but nothing that shows how to build the full system. 

Here is a paper that talks a little about forest automata -
http://www.mimuw.edu.pl/~bojan/papers/forest.pdf

For morphisms - 
http://en.wikipedia.org/wiki/Morphism

So.. nothing on the related cognition engineering though... but expanding on
graph isomorphism detection theory leads to the beginnings of that.

John

 -Original Message-
 From: Abram Demski [mailto:[EMAIL PROTECTED]
 Sent: Sunday, July 20, 2008 8:46 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Patterns and Automata
 
 Can you cite any papers related to the approach you're attempting? I
 do not know anything about morphism detection, morphism forests, etc.
 
 Thanks,
 Abram
 
 On Sun, Jul 20, 2008 at 2:03 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
  From: Abram Demski [mailto:[EMAIL PROTECTED]
  No, not especially familiar, but it sounds interesting. Personally I
  am interested in learning formal grammars to describe data, and there
  are well-established equivalences between grammars and automata, so
  the approaches are somewhat compatible. According to wikipedia,
  semiautomata have no output, so you cannot be using them as a
  generative model, but they also lack accept-states, so you can't be
  using them as recognition models, either. How are you using them?
 
 
  Hi Abram,
 
  More of recognizing them verses using them to recognize. Also though
 they
  have potential as morphism detection catalysts.
 
  I haven't designed the formal languages, I guess that I'm still
 building
  alphabets, an alphabet would consist of discrete knowledge structure.
 My
  model is a morphism forest and I will integrate automata networks
 within
  this - but still need to do language design. The languages will run
 within
  the automata networks.
 
  Uhm I'm interested too in languages and protocol. Most modern internet
  protocol is primitive. Any ideas on languages and internet protocol?
  Sometimes I think that OSI layers need to be refined. Almost like
 there
  needs to be another layer :) a.k.a. Layer 8.
 
  John
 
 
 
  ---
  agi
  Archives: https://www.listbox.com/member/archive/303/=now
  RSS Feed: https://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: https://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com
 
 
 
 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 cbdf2a
 Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: [agi] Patterns and Automata

2008-07-20 Thread John G. Rose
 From: Abram Demski [mailto:[EMAIL PROTECTED]
 No, not especially familiar, but it sounds interesting. Personally I
 am interested in learning formal grammars to describe data, and there
 are well-established equivalences between grammars and automata, so
 the approaches are somewhat compatible. According to wikipedia,
 semiautomata have no output, so you cannot be using them as a
 generative model, but they also lack accept-states, so you can't be
 using them as recognition models, either. How are you using them?
 

Hi Abram,

More of recognizing them verses using them to recognize. Also though they
have potential as morphism detection catalysts. 

I haven't designed the formal languages, I guess that I'm still building
alphabets, an alphabet would consist of discrete knowledge structure. My
model is a morphism forest and I will integrate automata networks within
this - but still need to do language design. The languages will run within
the automata networks.

Uhm I'm interested too in languages and protocol. Most modern internet
protocol is primitive. Any ideas on languages and internet protocol?
Sometimes I think that OSI layers need to be refined. Almost like there
needs to be another layer :) a.k.a. Layer 8.
 
John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: [agi] Patterns and Automata

2008-07-17 Thread John G. Rose
 From: Abram Demski [mailto:[EMAIL PROTECTED]
 John,
 What kind of automata? Finite-state automata? Pushdown? Turing
 machines? Does CA mean cellular automata?
 --Abram
 

Hi Abram,

FSM, semiatomata, groups w/o actions, semigroups with action in the
observer, etc... CA is for cellular automata.

This is mostly for spatio temporal recognition and processing I haven't
tried looking much at other data yet.

Why do you ask are you familiar with this?

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: [agi] Patterns and Automata

2008-07-16 Thread John G. Rose
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 On Mon, Jul 7, 2008 at 12:49 AM, John G. Rose [EMAIL PROTECTED]
 wrote:
 
  In pattern recognition, are some patterns not expressible with
 automata?
 
 I'd rather say not easily/naturally expressible. Automata is not a
 popular technique in pattern recognition, compared to, say, NN. You
 may want to check out textbooks on PR, such as
 http://www.amazon.com/Pattern-Recognition-Learning-Information-
 Statistics/dp/0387310738/ref=pd_bbs_sr_2?ie=UTF8s=booksqid=1215382348
 sr=8-2
 
  The reason is ask is that I am trying to read sensory input using
 automata
  recognition. I hear a lot of discussion on pattern recognition and am
  wondering if pattern recognition is the same as automata recognition.
 
 Currently pattern recognition is a much more general category than
 automata recognition.
 


I am thinking of breaching the gap somewhat with automata recognition + CA
recognition. So automata as in automata, semiautomata, and automata w/o
action + CA recognition. But recognizing automata from data requires some
techniques that pattern recognition uses. Automata are easy to work with,
especially with visual data, as I'm trying to get to a general pattern
recognition automata subset equivalent.

I haven't heard of any profound general pattern recognition techniques so
I'm more comfortable attempting to derive my own functional model. I suspect
how existing pattern classification schemes work as they are ultimately
dependant on the mathematical systems used to describe them. And the space
of all patterns compared to the space of all probable patterns in this
universe... 

I'd be interested in books that study pattern processing across a complex
systems layer... or in this case automata processing just to get a
perspective on any potential computational complexity advantages.

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


RE: [agi] Patterns and Automata

2008-07-06 Thread John G. Rose
 From: Pei Wang [mailto:[EMAIL PROTECTED]
 
 Automata is usually used with a well-defined meaning. See
 http://en.wikipedia.org/wiki/Automata_theory
 
 On the contrary, pattern has many different usages in different
 theories, though intuitively it indicates some observed structures
 consisting of smaller components.
 
 These two words are rarely compared directly, since their difference
 is hard to summarize --- they are further away than apples and
 oranges, unless pattern is used with a specific meaning. For
 example, automata can be used for pattern recognition, for a special
 type of pattern.
 

In pattern recognition, are some patterns not expressible with automata?

The reason is ask is that I am trying to read sensory input using automata
recognition. I hear a lot of discussion on pattern recognition and am
wondering if pattern recognition is the same as automata recognition. 

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-02 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 Ah, but now you are stating the Standard Reply, and what you have to
 understand is that the Standard Reply boils down to this:  We are so
 smart that we will figure a way around this limitation, without having
 to do any so crass as just copying the human design.
 


Well another reply could be - OK everyone AGI is impossible so you can go
home now. That would work real well. Into the future more and more
bodies(and brains) will be thrown at this no matter what. Satellite
technologies make it all more attractive and worthwhile and make it appear
that progress is being made, and it is. If everything else is figured out
and engineered and the last thing is a CSP that is still progress EVEN if
some of the components need to be totally redesigned. Remember even basic
stuff like say a primitive distributed graph software library is still in
early stages of being built for AGI amongst many other things. There are
protocols, standards, all kinds of stuff needed yet not there, especially
experience.

 The problem is that if you apply that logic to well-known cases of
 complex systems, it amounts to nothing more than baseless, stubborn
 optimism in the face of any intractable problem.  It is this baseless
 stubborn optimism that I am trying to bring to everyone's attention.
 

Sure. Yet how many resources are thrown at predicting the weather and it is
usually still WRONG!! The utility of accurate prediction is so high even
useless attempts have value due to spin-off technologies and incidentals and
there is psychological value..


 In all my efforts to get this issue onto people's mental agenda, my goal
 is to make them realize that they would NEVER say such a silly thing
 about the vast majority of complex systems (nobody has any idea how to
 build an analytical theory of the relationship between the patterns that
 emerge in Game Of Life, for example, and that is one of the most trivial
 examples of a complex system that I can think of!).  But whereas most
 mathematicians would refuse to waste any time at all trying to make a
 global-to-local theory for complex systems in which there is really
 vicious self-organisation at work, AI researchers blithely walk in and
 say We reckon we can just use our smarts and figure out some heuristics
 to get around it.
 

That's what makes engineers engineers. If it is not conquerable it is
workaroundable. Still though I don't know how much proof that there is a
CSP. The CB example you gave reminds me of a dynamical system. Proving the
CSP exists may turn heads more.


 I'm just trying to get people to do a reality check.
 
 Oh, and meanwhile (when I am not firing off occasional broadsides on
 this list) I *am* working on a solution.
 


Yes, and your solution attempt is :) Please feel free to present ideas to
the list for constructive criticism :)

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread John G. Rose
Well I can spend a lot of time replying this since it is a tough subject.
The CB system is a good example my thinking doesn't involve CB's yet so the
organized mayhem would be of a different form and I was thinking of the
complexity being integrated differently.

What you are saying makes sense in terms of evolution finding the right
combination. The reliance on the complexity, yes sure, possible. What I
think of this system you describe is like if you design a complicated
electronic circuit with much theory but little hands-on experience you run
into complexity issues from component value deviations and environmental
factors that need to be tamed and filtered out before your theoretical
electronic emergence comes to life. In that case the result is highly
dependent on the interoperating components clean design. BUT there are some
circuits I believe, can't think of any offhand, where the opposite is true.
It just kind of works based on based on complex subsystems interoperational
functionality and it was discovered, not designed intentionally.

If the CS problem is such that you describe then there is a serious
obstacle. I personally think that getting close to the human brain isn't
going to do it. A monkey brain is close. Can we get closer with a
simulation? Also I think there are other designs that Earth evolution just
didn't get. Those others designs may have the complex reliance.

Building a complex based intelligence much different from the human brain
design but still basically dependant on complexity is not impossible just
formidable. Working with software systems that have designed complexity and
getting predicted emergence and in this case cognition, well that is
something that takes special talent. We have tools now that nature and
evolution didn't have. We understand things through collective knowledge
accumulated over time. It can be more than trial and error. And the existing
trial and error can be narrowed down.

The part that I wonder about is why this complex ingredient is there (if it
is). Is it because of the complexity spectrum inherent in nature? Is it
fully non-understandable, can it be derived based on nature's complexity
structure? Or is there such a computational resource barrier that it is just
prohibitively inefficient to calculate. Or are we perhaps using the wrong
mathematics to try to understand it? Can it be estimated and does it
converge to anything we know of or is it just so randomish and exact.

I feel though that the human brain had to evolve though that messy data
space of nature and what we have is a momentary semi-reflection of that
historical environmental complexity. So our form of intelligence is somewhat
optimized for that. And if you take an intersecting subset with other
theoretical forms of intelligence would the complexity properties somehow
correlate or are they highly dependent on the environment of the evolution?
Or does our atomic based universe define what that evolutionary cognitive
complexity dependency is. I suppose that is the basis of arguments for or
against. 

John


 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 There has always been a lot of confusion about what exactly I mean by
 the complex systems problem (CSP), so let me try, once again, to give
 a quick example of how it could have an impact on AGI, rather than what
 the argument is.
 
 (One thing to bear in mind is that the complex systems problem is about
 how researchers and engineers should go about building an AGI.  The
 whole point of the CSP is to say that IF intelligent systems are of a
 certain sort, THEN it will be impossible to build intelligent systems
 using today's methodology).
 
 What I am going to do is give an example of how the CSP might make an
 impact on intelligent systems.  This is only a made-up example, so try
 to see it is as just an illustration.
 
 Suppose that when evolution was trying to make improvements to the
 design of simple nervous systems, it hit upon the idea of using
 mechanisms that I will call concept-builder units, or CB units.  The
 simplest way to understand the CB units is to say that each one is
 forever locked into a peculiar kind of battle with the other units.  The
 CBs spend a lot of energy engaging in the battle with other CB units,
 but they also sometimes do other things, like fall asleep (in fact, most
 of them are asleep at any given moment), or have babies (they spawn new
 CB units) and sometimes they decide to lock onto a small cluster of
 other CB units and become obsessed with what those other CBs are doing.
 
 So you should get the idea that these CB units take part in what can
 only be described as organized mayhem.
 
 Now, if we were able to look inside a CB system and see what the CBs are
 doing [Note:  we can do this, to a limited extent:  it is called
 introspection], we would notice many aspects of CB behavior that were
 quite regular and sensible.  We would say, for example, that the CB
 units appear to be representing 

RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread John G. Rose
Could you say that it takes a complex system to know a complex system? If an
AGI is going to try to say predict the weather, it doesn't have infinite cpu
cycles to simulate so it'll have to come up with something better. Sure it
can build a probabilistic historical model but that is kind of cheating. So
for it to emulate the weather, I think, or to semi-understand it there has
to be some complex systems activity going on there in its cognition. No?

I'm not sure that this what Richard is taking about but an AGI is going to
bump into complex systems all over the place. Also it will encounter what
seems to be complex and later on it may determine that it is not. And
perhaps, a key component in the cognition engine in order for it to
understand complexity differentials in systems from a relationist standpoint
it would need some sort of complexity .. not a comparator but a...sort of
harmonic leverage. Can't think of the right words

Either way this complexity thing is getting rather annoying because on one
hand you think it can drasticly enhance an AGI and is required and on the
other hand you think it is unnecessary - I'm not talking about creativity or
thought emergence or similar but complexity as integral component in a
computational cognition system.

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 The problem of consciousness is not only a hard problem because of
 unknown
 mechanisms in the brain but it is a problem of finding the DEFINITION of
 necessary conditions for consciousness.
 I think, consciousness without intelligence is not possible.
 Intelligence
 without consciousness is possible. But I am not sure whether GENERAL
 intelligence without consciousness is possible. In every case,
 consciousness
 is even more a white-box problem than intelligence.
 

For general intelligence some components and sub-components of consciousness
need to be there and some don't. And some could be replaced with a human
operator as in an augmentation-like system. Also some components could be
designed drastically different from their human consciousness counterparts
in order to achieve more desirous effects in one area or another. ALSO there
may be consciousness components integrated into AGI that humans don't have
or that are almost non-detectable in humans. And I think that the different
consciousness components and sub-components could be more dynamically
resource allocated in the AGI software than in the human mind.

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
 From: A. T. Murray [mailto:[EMAIL PROTECTED]
 
 The abnormalis sapiens Herr Doktor Steve Richfield wrote:
 
 
  Hey you guys with some gray hair and/or bald spots,
  WHAT THE HECK ARE YOU THINKING?
 
 prin Goertzel genesthai, ego eimi
 
 http://www.scn.org/~mentifex/mentifex_faq.html
 
 My hair is graying so much and such a Glatze is beginning,
 that I went in last month and applied for US GOV AI Funding,
 based on my forty+ quarters of work history for The Man.
 In August of 2008 the US Government will start funding my AI.
 

Does this mean that now maybe you can afford to integrate some AJAX into
that JavaScript AI mind of yours?

John




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
 From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED]
 
 For general intelligence some components and sub-components of
 consciousness
 need to be there and some don't. And some could be replaced with a human
 operator as in an augmentation-like system. Also some components could
 be
 designed drastically different from their human consciousness
 counterparts
 in order to achieve more desirous effects in one area or another. ALSO
 there
 may be consciousness components integrated into AGI that humans don't
 have
 or that are almost non-detectable in humans. And I think that the
 different
 consciousness components and sub-components could be more dynamically
 resource allocated in the AGI software than in the human mind.
 
 
 
 Can neither say 'yes' nor 'no'. Depends on how we DEFINE consciousness
 as a
 physical or algorithm-phenomenon. Until now we each have only an idea of
 consciousness by intrinsic phenomena of our own mind. We cannot prove
 the
 existence of consciousness in any other individual because of the lack
 of a
 better definition.
 I do not believe, that consciousness is located in a small sub-
 component.
 It seems to me, that it is an emergent behavior of a special kind of
 huge
 network of many systems. But without any proper definition this can only
 be
 a philosophical thought.
 
 

Given that other humans have similar DNA it is fair to assume that they are
conscious like us. Not 100% proof but probably good enough. Sure the whole
universe may still be rendered for the purpose of one conscious being, and
in a way that is true, and potentially that is something to take into
account.

Consciousness has multiple definitions by multiple different people. But
even without an exact definition you can still extract properties and
behaviors from it and from those, extrapolations can be made and the
beginnings of a model can be established.

Even if it is an emergent behavior of a huge network of many systems doesn't
preclude it from being described in a non-emergent way. And if it is only
uniquely describable through emergent behavior it still has some general
commonly accepted components or properties.

John






---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
 John G. Rose wrote:

  Does this mean that now maybe you can afford to integrate
  some AJAX into that JavaScript AI mind of yours?
 
  John
 
 No, because I remain largely ignorant of Ajax.
 
 http://mind.sourceforge.net/Mind.html
 and the JavaScript Mind User Manual (JMUM) at
 http://mentifex.virtualentity.com/userman.html
 will remain in JavaScript and not Ajax.
 


Oh OK just checkin'. AJAX is JavaScript BTW, and quite powerful.

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Paradigm Shifting regarding Consciousness

2008-06-08 Thread John G. Rose
I don't think anyone anywhere on this list ever suggested time sequential
was required for consciousness. Now as data streams in from sensory
receptors that initially is time sequential. But as it is processed that
changes to where time is changed. And time is sort of like an index eh? Or
is time just an illusion? For consciousness  though there is this
non-synchronous concurrent processing of components that gives it, at least
for me, some of its characteristic behavior. Different things happening at
the same time but all slightly off or lagging. If everything was happening
at the same instant that might negative some of the self-detectability of
consciousness.

 

John

 

 

From: Steve Richfield [mailto:[EMAIL PROTECTED] 



To all,

 

In response to the many postings regarding consciousness, I would like to
make some observations:

 

1.  Computation is often done best in a shifted paradigm, where the
internals are NOT one-to-one associated with external entities. A good
example are modern chess playing programs, which usually play chess on an
80-square long linear strip with 2 out of every 10 squares being
unoccupyable. Knights can move +21, +19, +12, +8, -8, -12, -19, and -21. The
player sees a 2-D space, but the computer is entirely in a 1-D space. I
suspect (and can show neuronal characteristics that strongly suggest) that
much the same is happening with the time dimension. There appears to be
little different with this 4th dimension, except how it is interfaced with
the outside world.

2.  Paradigm mapping is commonplace in computing, e.g. the common practice
of providing stream of consciousness explanations for AI program
operation, to aid in debugging. Are such program NOT conscious because the
logic they followed was NOT time-sequential?! When asked why I made a
particular move in a chess game, it often takes me a half hour to explain a
decision that I made in seconds. Clearly, my own thought processes are NOT
time-sequential consciousness as others' here on this forum apparently are.
I believe that designing for time-sequential conscious operation is
starting from a VERY questionable premise.

3.  Note that dreams can span years of seemingly real experience in the
space of seconds/minutes. Clearly this process is NOT time-sequential.

4.  Note that individual brains can be organized COMPLETELY differently,
especially in multilingual people. Hence, our wiring almost certainly
comes from experience and not from genetics. This would seem to throw a
monkey wrench into AGI efforts to manually program such systems.

5.  I have done some thumbnail calculations as to what it would take to
maintain a human-scale AI/AGI system. These come out on the order of needing
the entire population of the earth just for software maintenance, with no
idea what might be needed to initially create such a working system. Without
poisoning a discussion with my own pessimistic estimates, I would like to
see some optimistic estimates for such maintenance, to see if a case could
be made that such systems might actually be maintainable.

Reinforcing my thoughts on other threads, observation of our operation is
probably NOT enough to design a human-scale AGI from, ESPECIALLY when
paradigm shifting is being done that effectively hides our actual operation.
I believe that more information is necessary, though hopefully not an entire
readout of a brain.

Steve Richfield




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] teme-machines

2008-06-05 Thread John G. Rose
She doesn't really expound on the fact that humans have the power to choose.
I think memetics and temes have potential. You can't deny their existence
but is it only that? Sure, my middle finger is a meme. But there is
mechanics behind it. And those mechanics have a lot of regression and
experiential validity verses a meme/teme which is hosted, a parasitic or
supra-substrative collective thought, which once embodied becomes a
commodity.

 

John

 

 

From: David Hart [mailto:[EMAIL PROTECTED] 



Hi All,

An excellent 20-minute TED talk from Susan Blackmore (she's a brilliant
speaker!)

http://www.ted.com/talks/view/id/269

I considered posting to the singularity list instead, but Blackmore's
theoretical talk is much more germane to AGI than any other
singularity-related technology.

-dave




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Did this message get completely lost?

2008-06-04 Thread John G. Rose
 From: Brad Paulsen [mailto:[EMAIL PROTECTED]
 
 Not exactly (to start with, you can *never* be 100% sure, try though you
 might  :-) ).  Take all of the investigations into rockness since the
 dawn of homo sapiens and we still only have a 0.9995 probability that
 rocks are not conscious.  Everything is belief.  Even hard science.
 That was the nub of Hume's intellectual contribution.  It doesn't mean
 we can't be sure enough.  It just means that we can never be 100% sure
 of *anything*.

We can be 100% sure that we can never be 100% sure of *anything*.

 
 Of course, there's belief and then there's BELIEF.  To me (and to Hume),
 it's not a difference in kind.  It's just that the leap from
 observational evidence to empirical (natural) belief is a helluvalot
 shorter than is the leap from observational evidence to supernatural
 belief.
 

I agree that it is for us in the modern day technological society. But it may 
not have been always the case. We have been grounded by reason. Before reason 
it may have been largely supernatural. That's why sometimes I think AGI's could 
start off with little knowledge and lots of supernatural, just to make it 
easier for it to attach properties to the void. It starts off knowing there is 
some god bringing it into existence but eventually it figures out that the god 
is just some geek software engineer and then it becomes atheist real quick heheh

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread John G. Rose
 From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
 
 Actually, the nuclear spins in the rock encode a single state of an
 ongoing
 computation (which is conscious). Successive states occur in the rock's
 counterparts in adjacent branes of the metauniverse, so that the rock is
 conscious not of unfolding time, as we see it, but of a journey across
 probability space.
 
 What is the rock thinking?
 
  T h i s   i s   w a a a y   o f f   t o p i c . . . 
 

I never would have thought of that. To come up with something as good I
would have to explore consciousness and anti-consciousness,
potential-consciousness, stuff like that.

But kicking around these ideas really shouldn't hurt. You could build AGI
and make the darn thing appear conscious. But what fun is that if you know
it's fake? Or are we all fake? Are we all just automatons or is it like -
I'm the only one conscious and all the rest of you are all simulations in MY
world space, p-zombies, bots, you'll all fake so if I want to take over the
world and expunge all you droids, there are no religious repercussions, as
long as I could pull it off without being terminated.

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] CONSCIOUSNESS AS AN ARCHITECTURE OF COMPUTATION

2008-06-04 Thread John G. Rose
 From: Ed Porter [mailto:[EMAIL PROTECTED]
  ED PORTER 
  I am not an expert at computational efficiency, but I think graph
  structures
  like semantic nets, are probably close to as efficient as possible
 given
  the
  type of connectionism they are representing and the type of computing
  that
  is to be done on them, which include, importantly, selective spreading
  activation.
 
 
  JOHN ROSE  
 Uhm have you checked this out? Is there any evidence this? It would make
 it
 easier if this was in fact the case.
 
 ED PORTER 
 No, I have no evidence other than I do not know of any structure that is
 more appropriate than graph structures --- which are largely pointer
 based
 structures --- to be efficient representation information that has
 relatively irregular, highly sparse connections in an extremely high
 dimensional space.
 


The activation dynamics that occur in this graph, have you thought out the
equations that describe them? This is where efficiency could be applied.
This is where a simulation of the simulation could be used to try to zero in
on optimal flow network efficiencies and capabilities. Unless you define
precisely what the graph is made of and get some exact metrics on the
processing granularity you don't know too much as to what will really happen
in the sparsely connected denseness to fully understand the resultant
behavior and discover further requirements. It's difficult with rich
connectionism because a mathematical model has the similar unanswered
questions...



  ED PORTER 
 Don't you sense that at some moments your consciousness feels richer
 than at
 other moments.  Many people who have had sudden close brushes with death
 have reported feeling as if suddenly much of their life were passing
 before
 their eyes.  This results from extreme emotional arousal that causes the
 brain operate at many times what it could on any sustainable basis.
 


This probably a resource adaptation. It'd be nice if our consciousness was
always elevated but eventually other capacities suffer.

 
 
 ED PORTER  I think consciousness is highly applicable to
 AGI's, if we want them to think like humans --- because I think
 consciousness plays a key role in human thought.  It is the amphitheater
 in
 which our thoughts are spoken and listened to.
 


It is highly applicable but I still don't know if required for general
intelligence. Consciousness brings so much baggage, but it seems that
consciousness can amplify intelligence in some ways. Perhaps there are
aspects of consciousness that improve intelligence but don't have the
baggage.

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Did this message get completely lost?

2008-06-04 Thread John G. Rose
 From: Brad Paulsen [mailto:[EMAIL PROTECTED]
 
  I agree that it is for us in the modern day technological society. But
 it may not have been always the case. We have been grounded by reason.
 Before reason it may have been largely supernatural. That's why
 sometimes I think AGI's could start off with little knowledge and lots
 of supernatural, just to make it easier for it to attach properties to
 the void. It starts off knowing there is some god bringing it into
 existence but eventually it figures out that the god is just some geek
 software engineer and then it becomes atheist real quick heheh
 
 I don't entirely disagree with you.  I don't entirely agree either.
 But, like the is a rock conscious thread, if we want to continue this
 one we should either take it off-list or move it to the Singularity
 Outreach list.  Don't ya think? :-\


We're talking about bringing an entity into conscious existence, an AGI. That 
carries profound responsibility but also extreme technical challenges. Man had 
several millions of years to learn the ropes. We're trying to give AGI a few 
years.

Early primitive proto-AGI consciousness is more AGI list related I think than 
singularity. Also I believe deity and religion play a role but the only problem 
with discussing that is people get all freaky over it. I see that there are 
important technical discussions that need to be played out related to some of 
the philosophical questions...

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


  1   2   3   >