Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Abram Demski
Jim,

A standard reference on the theory of computability is Computability and
Logic by Boolos, Burgess, and Jeffrey. For those who accept the
Church-Turing thesis, this mathematical theory provides a sufficient account
of the notion of computability, including the space of possible programs
(which is formalized as the set of Turing machines).
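
For concreteness, the fact that this set is well-defined (and countable)
can be seen by enumeration. The Python sketch below is a toy stand-in for a
genuine Gödel numbering of Turing machines -- it merely lists every finite
binary string in length-lexicographic order, which is all that "program
space" amounts to once an encoding is fixed:

from itertools import count, product

def enumerate_programs():
    # Yield every finite binary string in length-lexicographic order.
    # Under any fixed binary encoding of Turing machines, this walks
    # through (a superset of) all programs, so "the space of possible
    # programs" is a perfectly definite, countable set.
    yield ""  # the empty program
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = enumerate_programs()
print([next(gen) for _ in range(8)])
# -> ['', '0', '1', '00', '01', '10', '11', '000']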

--Abram

On Mon, Jul 19, 2010 at 6:44 AM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 I feel a responsibility to make an effort to explain myself when someone
 doesn't understand what I am saying, but once I have gone over the material
 sufficiently, if the person is still arguing with me about it I will just
 say that I have already explained myself in the previous messages.  For
 example, if you can point to some authoritative source outside the
 Solomonoff-Kolmogorov crowd that agrees that full program space, as it
 pertains to definitions like "all possible programs," or my example
 of "all possible mathematical functions," represents a comprehensible
 concept that is open to mathematical analysis, then tell me about it.  We use
 concepts like "the set containing sets that are not members of themselves"
 as a philosophical tool that can lead to the discovery of errors in our
 assumptions, and in this way such contradictions are of tremendous value.
 The ability to use critical skills to find flaws in one's own presumptions
 is critical in comprehension, and if that kind of critical thinking has
 been turned off for some reason, then the consequences will be predictable.
 I think compression is a useful field, but the idea of universal induction,
 aka Solomonoff Induction, is garbage science.  It was a good effort on
 Solomonoff's part, but it didn't work and it is time to move on, as the
 majority of theorists have.
 Jim Bromer

 On Sun, Jul 18, 2010 at 10:59 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 I'm still not sure what your point even is, which is probably why my
 responses seem so strange to you. It still seems to me as if you are jumping
 back and forth between different positions, like I said at the start of this
 discussion.

 You didn't answer why you think "program space" does not represent a
 comprehensible concept. (I will drop the "full" if it helps...)

 My only conclusion can be that you are (at least implicitly) rejecting
 some classical mathematical principles and using your own very different
 notion of which proofs are valid, which concepts are well-defined, et
 cetera.

 (Or perhaps you just don't have a background in the formal theory of
 computation?)

 Also, not sure what difference you mean to say I'm papering over.

 Perhaps it *is* best that we drop it, since neither one of us is getting
 through to the other; but, I am genuinely trying to figure out what you are
 saying...

 --Abram

   On Sun, Jul 18, 2010 at 9:09 PM, Jim Bromer jimbro...@gmail.com wrote:

   Abram,
 I was going to drop the discussion, but then I thought I figured out why
 you kept trying to paper over the difference.  Of course, our personal
 disagreement is trivial; it isn't that important.  But the problem with
 Solomonoff Induction is that not only is the output hopelessly tangled and
 seriously infinite, but the input is as well.  The definition of "all
 possible programs," like the definition of "all possible mathematical
 functions," is not a proper mathematical problem that can be comprehended in
 an analytical way.  I think that is the part you haven't totally figured out
 yet (if you will excuse the pun).  "Total program space" does not represent
 a comprehensible computational concept.  When you try to find a way to work
 out feasible computable examples it is not enough to limit the output string
 space; you HAVE to limit the program space in the same way.  That second
 limitation makes the entire concept of "total program space" much too
 weak for our purposes.  You seem to know this at an intuitive operational
 level, but it seems to me that you haven't truly grasped the implications.
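
 To make the double limitation concrete, here is a toy Python sketch (the
 "universal machine" stand-in is deliberately silly, and only a real
 prefix-free machine would keep the sum below 1): both the program length
 AND the running time have to be capped before anything becomes computable.

 from itertools import product

 def run_program(bits, max_steps):
     # Stand-in for a fixed universal machine (an assumption of this
     # sketch): the toy semantics is that a program prints its own
     # bits, one per step; None means "did not finish in time".
     return bits[:max_steps] if len(bits) <= max_steps else None

 def approx_prior(x, max_len, max_steps):
     # Doubly-limited stand-in for the Solomonoff prior M(x): sum
     # 2^-|p| over every program p up to max_len whose output within
     # max_steps starts with x.  Note the TWO cutoffs: bounding the
     # output string and run time alone is not enough -- the program
     # space is capped as well.
     total = 0.0
     for n in range(1, max_len + 1):
         for bits in product("01", repeat=n):
             out = run_program("".join(bits), max_steps)
             if out is not None and out.startswith(x):
                 total += 2.0 ** (-n)
     return total

 print(approx_prior("01", max_len=8, max_steps=16))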

 I say that Solomonoff Induction is computational but I have to use a
 trick to justify that remark.  I think the trick may be acceptable, but I am
 not sure.  But the possibility that the concept of all possible programs,
 might be computational doesn't mean that that it is a sound mathematical
 concept.  This underlies the reason that I intuitively came to the
 conclusion that Solomonoff Induction was transfinite.  However, I wasn't
 able to prove it because the hypothetical concept of all possible program
 space, is so pretentious that it does not lend itself to mathematical
 analysis.

 I just wanted to point this detail out because your implied view that you
 agreed with me but that "total program space" was mathematically well-defined
 did not make any sense.
 Jim Bromer

Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Steve Richfield
Arthur,

Your call for an AGI roadmap is well targeted. I suspect that others here
have their own, somewhat different roadmaps. These should all be merged,
like decks of cards being shuffled together, maybe with percentages
attached, so that people could announce that, say, "I am 31% of the way to
having an AGI." At least this would provide SOME metric for progress.
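
A back-of-the-envelope version of the metric (milestone names and weights
are invented purely for illustration): give each merged milestone a weight,
and the percentage is just the weighted fraction marked done.

# All milestones and weights below are hypothetical placeholders.
roadmap = [
    ("perception",     0.20, True),
    ("language",       0.25, True),
    ("self-reference", 0.15, False),
    ("inference",      0.25, False),
    ("embodiment",     0.15, False),
]

done = sum(w for _, w, ok in roadmap if ok)
total = sum(w for _, w, _ in roadmap)
print(f"I am {100 * done / total:.0f}% of the way to having an AGI")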

This would apparently place Ben in an awkward position, because on the one
hand he is somewhat resistant to precisely defining his efforts, while on
the other hand he desperately needs to be able to demonstrate some progress
as he works toward something that is useful/salable.

"Is a" is too vague: e.g., in "A robot is a machine," it is unclear whether
robots and machines are simply two different words for the same thing, or
whether robots are members of the class known as machines. There are also
other more perverse potential meanings, e.g. that a single robot is a
machine, but that multiple robots are something different, e.g. a junk pile.
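
The two main readings are easy to make concrete. In the Python sketch below
(the structure is mine, not anything from Dr. Eliza), a knowledge base must
commit each "X is a Y" statement to either synonymy or class membership:

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parents: set = field(default_factory=set)   # "is a member of class Y"
    synonyms: set = field(default_factory=set)  # "is the same thing as Y"

kb = {name: Concept(name) for name in ("robot", "machine", "automaton")}

# Reading 1: class membership -- robots are a kind of machine.
kb["robot"].parents.add("machine")
# Reading 2: synonymy -- "robot" and "automaton" name the same thing.
kb["robot"].synonyms.add("automaton")

def is_a(kb, x, y):
    # True if x is y under either reading, following parent links.
    seen, frontier = set(), {x}
    while frontier:
        c = frontier.pop()
        if c == y or y in kb[c].synonyms:
            return True
        seen.add(c)
        frontier |= kb[c].parents - seen
    return False

print(is_a(kb, "robot", "machine"))    # True, via class membership
print(is_a(kb, "robot", "automaton"))  # True, via synonymy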

In Dr. Eliza, I (attempt to) deal with ambiguous statements by having the
final parser demand an unambiguous statement, and by utilizing my "idiom
resolver" to recognize common ambiguous statements and fill in the gaps
with clearer words. Hence, simple unambiguous statements and common gapping
work, but less common gapping fails, as do complex statements that can't be
split into 2 or more simple statements.

I suspect that you may be heading toward the common brick wall of paradigm
limitation, where you initially adopt an oversimplified paradigm to get
something to work, and then run into the limitations of that oversimplified
paradigm. For example, Dr. Eliza is up against its own paradigm limitations
that we have discussed here. Hence, it may be time for some paradigm
overhaul if your efforts are to continue smoothly ahead.

I hope this helps.

Steve
=
On Tue, Jul 20, 2010 at 7:20 AM, A. T. Murray menti...@scn.org wrote:

 Tues.20.JUL.2010 -- Seeking Is-a Functionality

 Recently our overall goal in coding MindForth
 has been to build up an ability for the AI to
 engage in self-referential thought. In fact,
 SelfReferentialThought is the Milestone
 next to be achieved on the RoadMap of the
 Google Code MindForth project. However, we are
 jumping ahead a little when we allow ourselves
 to take up the enticing challenge of coding
 Is-a functionality when we have work left over
 to perform on fleshing out question-word queries
 and pronominal gender assignments. Such tasks
 are the loathsome scutwork of coding an AI Mind,
 so we reinvigorate our sense of AI ambition by
 breaking new ground and by leaving old ground to
 be conquered more thoroughly as time goes by.

 We simply want our budding AI mind to think
 thoughts like the following.

 A robin is a bird.
 Birds have wings.

 Andru is a robot.
 A robot is a machine.

 We are not aiming directly at inference or
 logical thinking here. We want rather to
 increase the scope of self-referential AI
 conversations, so that the AI can discuss
 classes and categories of entities in the
 world. If people ask the AI what it is,
 and it responds that it is a robot and
 that a robot is a machine, we want the
 conversation to flow unimpeded and
 naturally in any direction that occurs
 to man or machine.

 We have already built in the underlying
 capabilities, such as the usage of articles
 like "a" or "the," and the usage of verbs
 of being. Teaching the AI how to use "am"
 or "is" or "are" was a major problem that
 we worried about solving during quite a
 few years of anticipation of encountering
 an impassable or at least difficult roadblock
 on our AGI Roadmap. Now we regard introducing
 Is-a functionality not so much as an
 insurmountable ordeal but as an enjoyable
 challenge that will vastly expand the
 self-referential wherewithal of the
 incipient AI.

 Arthur
 --
 http://robots.net/person/AI4U/diary/22.html







[agi] The Collective Brain

2010-07-20 Thread Mike Tintner
http://www.ted.com/talks/matt_ridley_when_ideas_have_sex.html?utm_source=newsletter_weekly_2010-07-20&utm_campaign=newsletter_weekly&utm_medium=email

Good lecture, worth looking at, about how trade - exchange of both goods and 
ideas - has fostered civilisation. Near the end it introduces a v. important 
idea - the "collective brain." In other words, our apparently individual 
intelligence is actually a collective intelligence. Nobody, he points out, 
actually knows how to make a computer mouse, although that may seem 
counterintuitive - it's an immensely complex piece of equipment, simple as it 
may appear, that engages the collective, interdependent intelligence and 
productive efforts of vast numbers of people.

When you start thinking like that, you realise that there is v. little we know 
how to do, esp of an intellectual nature, individually, without the implicit 
and explicit collaboration of vast numbers of people and sectors of society. 

The fantasy of a superAGI machine that can grow individually without a vast 
society supporting it, is another one of the wild fantasies of AGI-ers and 
Singularitarians that violate truly basic laws of nature. Individual brains 
cannot flourish individually in the real world, only societies of brains (and 
bodies) can. 

(And of course computers can do absolutely nothing, nor in any way survive, 
without their human masters - even if it may appear that way, if you don't look 
properly at their whole operation.)




Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Jan Klauck
Steve Richfield wrote

 maybe with percentages
 attached, so that people could announce that, say, "I am 31% of the
 way to having an AGI."

Not useful. AGI is still a hypothetical state and its true composition
remains unknown. At best you can measure how much of an AGI plan is
completed, but that's not necessarily equal to actually having an AGI.

Of course, you could use a human brain as an upper bound, but that's
still questionable, because--as I see it--most AGI designs aren't
intended to be isomorphic, and I don't know how well the brain is
understood today, such that we could use it as an invariant measure.

cu Jan


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Jim Bromer
The question was asked whether, given infinite resources, Solomonoff
Induction could work.  I made the assumption that it was computable and found
that it wouldn't work.  It is not computable, even with infinite resources,
for the kind of thing that it was claimed it would do (I believe that with a
governance program it might actually be programmable), but it could not be
used to predict (or compute the probability of) a subsequent string
given some prefix string.  Not only is the method impractical, it is
theoretically inane.  My conclusion suggests that the use of Solomonoff
Induction as an ideal for compression, or something like MDL, is not only
unsubstantiated but based on a massive inability to comprehend the idea of a
program that runs every possible program.
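
The governance program I have in mind is essentially a dovetailer. A toy
Python sketch (the per-step semantics is a dummy of my own; a real version
would drive a universal machine): it schedules every program, interleaved,
giving each admitted program one more step per round -- and it never
finishes, which is exactly the point:

from itertools import product

def dovetail(run_step, rounds):
    # The "governance" scheduler: on round k, admit the 2^k programs
    # of length k, then give every admitted program one more step.
    # run_step(program, step_index) performs a single step and returns
    # an output bit, or None.  Nothing here ever commits to a finite
    # program space -- but nothing here ever terminates, either.
    admitted, outputs = [], {}
    for k in range(1, rounds + 1):
        admitted += ["".join(b) for b in product("01", repeat=k)]
        for p in admitted:
            i = len(outputs.setdefault(p, []))
            bit = run_step(p, i)
            if bit is not None:
                outputs[p].append(bit)
    return outputs

# Toy per-step semantics (my dummy): program p emits its own bits.
toy = lambda p, i: p[i] if i < len(p) else None
print(dovetail(toy, rounds=3)["01"])  # ['0', '1']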

I am comfortable with the conclusion that the claim that Solomonoff
Induction is an ideal for compression or induction or anything else is
pretty shallow and not based on careful consideration.

There is a chance that I am wrong, but I am confident that there is nothing
in the definition of Solomonoff Induction that could be used to prove it.
Jim Bromer





Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-20 Thread Jim Bromer
I am not going in circles.  I probably should not express myself in
replies.  I made a lot of mistakes getting to the conclusion that I got to,
and I am a little uncertain as to whether the construction of the diagonal
set actually means that there would be uncountable sets for this
particular example, but that, for example, has nothing to do with anything
that you said.
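
For reference, the diagonal construction in question fits in a few lines
(my own toy framing): given any claimed enumeration of infinite binary
sequences, flipping the diagonal produces a sequence that appears nowhere
in the list -- which is why sequences (or total functions) outrun the
countable space of programs.

def diagonal(enumeration, n):
    # enumeration(i, j) -> the j-th bit of the i-th listed sequence.
    # The result disagrees with sequence i at position i, so it
    # occurs nowhere in the enumeration.
    return [1 - enumeration(i, i) for i in range(n)]

listed = lambda i, j: (i >> j) & 1  # one definite (toy) enumeration
print(diagonal(listed, 8))  # a prefix of a sequence missing from the list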
Jim Bromer

On Tue, Jul 20, 2010 at 5:07 PM, Abram Demski abramdem...@gmail.com wrote:

 Jim,

 *sigh* My response to that would just be to repeat certain questions I
 already asked you... I guess we should give it up after all. The best I can
 understand you is to assume that you simply don't understand the relevant
 mathematical constructions, and you've reached pretty much the same point
 with me. I'd continue in private if you're interested, but we should
 probably stop going in circles on a public list.

 --Abram





 --
 Abram Demski
 http://lo-tho.blogspot.com/
 http://groups.google.com/group/one-logic






Re: [agi] The Collective Brain

2010-07-20 Thread Mike Tintner
No, the collective brain is actually a somewhat distinctive idea.  It's 
saying a lot more than "the individual brain is embedded in society" - much 
more like "interdependently functioning with society" - that you can't, say, 
do maths or art or any subject, or produce products or perform most of our 
activities, except as part of a whole culture and society. Did you watch the 
talk? My Googlings show that this does seem to be a distinctive formulation 
by Ridley.


The evidence of the idea's newness is precisely the discussions of 
superAGIs and AGI futures by the groups here - show me how much of these 
discussions, if anything at all, raises the social dimension (i.e. the 
society-of-robots dimension), or considers what I am suggesting is the truth: 
that you will not be able to have an independent AGI system without a society 
of such systems.  If the collective brain idea were established culturally, 
AGI-ers would not talk as naively as they do.


Your last question is also an example of cocooned-AGI thinking. Which 
brains?  The only real AGI brains are those of living systems - animals and 
humans - living in the real world.  All machines to date are only extensions 
of humans, not living systems - though I'm not sure how many AGI-ers truly 
realise this.  And all those systems can and do only function in societies.


Why? Well, when you or y'all ever get around to dealing with AGI/creative 
problems, you will realise why.  The risk of failure and injury when dealing 
with the creative problems of the real world is so great that you need a 
social network a) to support you and b), by virtue of a collective, to 
increase the chances of at least some individuals successfully reaching 
difficult goals. Also, social division of labour massively amplifies the 
productive power of the individual.  Plus you get sexual benefits.

--
From: Jan Klauck jkla...@uni-osnabrueck.de
Sent: Tuesday, July 20, 2010 8:25 PM
To: agi agi@v2.listbox.com
Subject: Re: [agi] The Collective Brain


Mike Tintner wrote


Near the end introduces a v. important
idea - the collective brain. In other words, our apparently individual
intelligence is actually a collective intelligence.


That individuals are embedded into social networks of specialization
and exchange, care etc. is already known both in sociology and economics,
probably in philosophy and social psychology, too.


and productive efforts of vast numbers of people.


Already known to economists.


The fantasy of a superAGI machine that can grow individually without a
vast society supporting it, is another one of the wild fantasies of
AGI-ers and Singularitarians that violate truly basic laws of nature.


AGIers and Singularitarians say so?


Individual brains cannot flourish individually in the real world, only
societies of brains (and bodies) can.


What kind of brains? What kind of societies? And why?








Re: [agi] The Collective Brain

2010-07-20 Thread Mike Tintner
Ah, the collective brain is saying something else as well - wh. is another 
reason why I was hoping to get a discussion going. It's exemplified in the 
example of the mouse.


Actually, Ridley is saying, the complete knowledge to build a mouse does not 
reside in any individual brain, or indeed, by extension, in any group of 
individual brains.  That complete knowledge only effectively comes into 
being when you get all those brains, along with all their relevant 
technologies and libraries, working together.


Hence one talks of a "collective brain," which is of course a (useful) 
fiction. There is no such brain, nor is there any complete, locatable 
store of knowledge to perform the great majority of our activities. They are 
the result of societies of individuals working together.


And that - although no doubt I'm not expressing it well at all - is a rather 
magical idea and magical reality.


(Note this is something different from, but loosely related to, the crude, 
rather atavistic idea beloved by AGI-ers that the internet will somehow 
magically come alive and acquire an individual brain of its own.)







Re: [agi] The Collective Brain

2010-07-20 Thread Jan Klauck
Mike Tintner wrote

 No, the collective brain is actually a somewhat distinctive idea.

Just a way of looking at social support networks. Even social
philosophers centuries ago had similar ideas--they were lacking our
technical understanding and used analogies from biology (organicism)
instead.

 more like interdependently functioning with society

As I said, it's long known to economists and sociologists. There's even
an African proverb pointing at this: "It takes a village to raise a
child."
Systems researchers have been investigating those interdependencies for
decades.

 Did you watch the talk?

No Flash here. I'm just responding to what you write.

 The evidence of the idea's newness is precisely the discussions of
 superAGI's and AGI futures by the groups here

We have talked about the social dimensions at times. It's not the most
important topic around here, but that doesn't mean we're all ignorant.

In case you haven't noticed, I'm not building an AGI; I'm interested
in the stuff around it, e.g., tests, implementation strategies, etc., by
means of social simulation.

 Your last question is also an example of cocooned-AGI thinking? Which
 brains?  The only real AGI brains are those of living systems

The "A" stands for "Artificial." Living systems don't qualify for it.

My question was about certain attributes of brains (whether natural or
artificial). Societies are constrained by their members' capacities.
A higher individual capacity can lead to different dependencies and
to new ways for groups and societies to work.





Re: [agi] The Collective Brain

2010-07-20 Thread Mike Tintner
You partly illustrate my point - you talk of artificial brains as if they 
actually exist - there aren't any; there are only glorified, extremely 
complex calculators/computer programs - extensions/augmentations of 
individual faculties of human brains.  To obviously exaggerate, it's 
somewhat as if you were to talk of cameras as brains.


By implicitly pretending that artificial brains exist - in the form of 
computer programs - you (and most AGI-ers) deflect attention away from all 
the unsolved dimensions of what is required for an independent 
brain-cum-living system, natural or artificial. One of those dimensions is a 
society of brains/systems. Another is a body. And there are more, none of 
wh. are incorporated in computer programs - they only represent one 
dimension of what is needed for a brain.


Yes, you may know these things "at times," as you say, but most of the time 
they're forgotten.









Re: [agi] The Collective Brain

2010-07-20 Thread Michael Swan

On Wed, 2010-07-21 at 02:25 +0100, Mike Tintner wrote:

 By implicitly pretending that artificial brains exist - in the form of 
 computer programs -  you (and most AGI-ers), deflect attention away from all 
 the unsolved dimensions of what is required for an independent 
 brain-cum-living system,
I for one would like to see this brain-cum-living system. Its erotic
intelligence would be astronomical!

  natural or artificial. One of those dimensions is a 
 society of brains/systems. Another is a body. And there are more., none of 
 wh. are incorporated in computer programs - they only represent one 
 dimension of what is needed for a brain.






Re: [agi] The Collective Brain

2010-07-20 Thread Michael Swan

The most powerful concept in the universe is "working together."

If atoms didn't attract and repel each other, then we'd have a universe
where nothing ever happened.

So, the "Collective Brain" is a subset of the collective intelligence of
the universe.







Re: [agi] Of definitions and tests of AGI

2010-07-20 Thread Matt Mahoney
Mike, I think we all agree that we should not have to tell an AGI the steps to 
solving problems. It should learn and figure it out, the way that people 
figure it out.

The question is how to do that. We know that it is possible. For example, I 
could write a chess program that I could not win against. I could write the 
program in such a way that it learns to improve its game by playing against 
itself or other opponents. I could write it in such a way that it initially 
does not know the rules of chess, but instead learns the rules by being given 
examples of legal and illegal moves.
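
A minimal sketch of that last idea (the features and examples are invented
for illustration; this is a memorizing toy, not a real rule learner):
record, per piece, which board displacements have been labeled legal or
illegal, and predict from that.

from collections import defaultdict

# Labeled examples: (piece, (file delta, rank delta), legal?).
# The data is made up; a real learner would see real positions.
examples = [
    ("rook",   (0, 3), True),
    ("rook",   (2, 2), False),
    ("bishop", (2, 2), True),
    ("bishop", (0, 3), False),
]

legal, illegal = defaultdict(set), defaultdict(set)
for piece, delta, ok in examples:
    (legal if ok else illegal)[piece].add(delta)

def predict(piece, delta):
    # Predict legality from what has been observed; unseen moves stay
    # "unknown", and more examples shrink that set.
    if delta in illegal[piece]:
        return "illegal"
    if delta in legal[piece]:
        return "legal"
    return "unknown"

print(predict("rook", (0, 3)))   # legal
print(predict("rook", (1, 2)))   # unknown -- needs more data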

What we have not yet been able to do is scale this type of learning and problem 
solving up to general, human level intelligence. I believe it is possible, but 
it will require lots of training data and lots of computing power. It is not 
something you could do on a PC, and it won't be cheap.

 -- Matt Mahoney, matmaho...@yahoo.com





From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 9:07:53 PM
Subject: Re: [agi] Of definitions and tests of AGI


The issue isn't what a computer can do. The issue is how you structure the 
computer's or any agent's thinking about a problem. Programs/Turing machines 
are only one way of structuring thinking/problemsolving - by, among other 
things, giving the computer a method/process of solution. There is an 
alternative way of structuring a computer's thinking, which incl., among other 
things, not giving it a method/process of solution, but making it rather than 
a human programmer do the real problemsolving.  More of that another time.


From: Matt Mahoney 
Sent: Tuesday, July 20, 2010 1:38 AM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI

Creativity is the good feeling you get when you discover a clever solution to 
a hard problem without knowing the process you used to discover it.

I think a computer could do that.

 -- Matt Mahoney, matmaho...@yahoo.com 





 From: Mike Tintner tint...@blueyonder.co.uk
To: agi agi@v2.listbox.com
Sent: Mon, July 19, 2010 2:08:28 PM
Subject: Re: [agi] Of definitions and tests of AGI


Yes that's what people do, but it's not what programmed computers do.

The useful formulation that emerges here is:

narrow AI (and in fact all rational) problems have *a method of solution* (to 
be equated with a "general method") - and are programmable (a program is a 
method of solution)

AGI (and in fact all creative) problems do NOT have *a method of solution* (in 
the general sense) - rather, a one-off *way of solving the problem* has to be 
improvised each time.

AGI/creative problems do not in fact have a method of solution, period. There 
is no (general) method of solving either the toy box or the build-a-rock-wall 
problem - one essential feature which makes them AGI.

You can learn, as you indicate, from *parts* of any given AGI/creative 
solution, and apply the lessons to future problems - and indeed, with 
practice, should improve at solving any given kind of AGI/creative problem. 
But you can never apply a *whole* solution/way to further problems.

P.S. One should add that in terms of computers, we are talking here of 
*complete, step-by-step* methods of solution.


From: rob levy 
Sent: Monday, July 19, 2010 5:09 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI
  
 And are you happy with:

 AGI is about devising *one-off* methods of problemsolving (that only apply to
 the individual problem, and cannot be re-used - at least not in their
 totality)

 Yes exactly, isn't that what people do?  Also, I think that being able to
 recognize where past solutions can be generalized and where past solutions
 can be varied and reused is a detail of how intelligence works that is likely
 to be universal.

 vs

 narrow AI is about applying pre-existing *general* methods of problemsolving
 (applicable to whole classes of problems)?
 
 


From: rob levy 
Sent: Monday, July 19, 2010 4:45 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI

Well, solving ANY problem is a little too strong.  This is AGI, not AGH 
(artificial godhead), though AGH could be an unintended consequence ;).  So I 
would rephrase "solving any problem" as being able to come up with reasonable 
approaches and strategies to any problem (just as humans are able to do).


On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk 
wrote:

Whaddya mean by "solve the problem of how to solve problems"? Develop a 
universal approach to solving any problem?  Or find a method of solving a 
class of problems? Or what?


From: rob levy 
Sent: Monday, July 19, 2010 1:26 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


 
However, I see that there are no valid definitions of AGI that explain 
what AGI is generally, and why these