Re: [agi] Let's face it, this is just dumb.

2008-10-03 Thread Brad Paulsen
Wow, that's a pretty strong response there, Matt.  Friends of yours?

If I were in control of such things, I wouldn't DARE walk out of a lab and
announce results like that.  So I have no fear of being the one to bring
that type of criticism on myself.  But, I'm just as vulnerable as any of us
to having colleagues do it for (to) me.

So, yeah.  I have a problem with premature release, or announcement, of a
technology that's associated with an industry in which I work.  It's
irresponsible science when scientists do it.  It's irresponsible marketing
(now, there's a redundant phrase for you) when company management does it.

And, it's irresponsible for you to defend such practices.  That stuff
deserved to be mocked.  Get over it.

Cheers,
Brad


Matt Mahoney wrote:
 So here is another step toward AGI, a hard image classification problem
 solved with near human-level ability, and all I hear is criticism.
 Sheesh! I hope your own work is not attacked like this.
 
 I would understand if the researchers had proposed something stupid like
 using the software in court to distinguish adult and child pornography.
 Please try to distinguish between the research and the commentary by the
 reporters. A legitimate application could be estimating the average age
 plus or minus 2 months of a group of 1000 shoppers in a marketing study.
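 [A quick sanity check on that averaging claim - my own back-of-the-envelope
 sketch, not Matt's, assuming independent and roughly Gaussian per-person
 errors calibrated to the 50%-within-5-years figure quoted further down:

 from math import sqrt
 from statistics import NormalDist

 # If 50% of individual estimates fall within +/-5 years, a Gaussian error
 # model gives sigma = 5 / z(0.75), where z(0.75) is the 75th-percentile
 # z-score (~0.674).
 sigma_individual = 5 / NormalDist().inv_cdf(0.75)   # ~7.4 years per person
 n = 1000
 se_mean = sigma_individual / sqrt(n)                # standard error of the mean
 print(f"per-person sigma ~ {sigma_individual:.1f} years; "
       f"error on the mean of {n} people ~ {se_mean * 12:.1f} months")
 # ~2.8 months - the same order as the +/-2 months figure above

 So the marketing-study application is at least dimensionally plausible,
 granted the independence assumption.]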
 
 
 In any case, machine surveillance is here to stay. Get used to it.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 --- On Thu, 10/2/08, Bob Mottram [EMAIL PROTECTED] wrote:

 From: Bob Mottram [EMAIL PROTECTED]
 Subject: Re: [agi] Let's face it, this is just dumb.
 To: agi@v2.listbox.com
 Date: Thursday, October 2, 2008, 6:21 AM

 2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
 It boasts a 50% recognition accuracy rate +/-5 years and an 80% recognition
 accuracy rate +/-10 years.  Unless, of course, the subject is wearing a big
 floppy hat, makeup or has had Botox treatment recently.  Or found his dad's
 Ronald Reagan mask.  'Nuf said.
 
 
 Yes.  This kind of accuracy would not be good enough to enforce age-related
 rules surrounding the buying of certain products, nor does it seem likely to
 me that refinements of the technique will give the needed accuracy.  As you
 point out, people have been trying to fool others about their age for
 millennia, and this trend is only going to complicate matters further.  In
 future, if De Grey gets his way, this kind of recognition will be useless
 anyway.
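 [To put rough numbers on Bob's point - my own sketch, under an assumed
 Gaussian error model back-solved from the quoted 50%-within-5-years figure -
 here is how often under-age subjects would slip past an 18+ gate:

 from statistics import NormalDist

 # Assumed: per-person estimation error is Gaussian with sigma ~7.4 years,
 # derived from "50% of estimates within +/-5 years".
 sigma = 5 / NormalDist().inv_cdf(0.75)
 for true_age in (14, 16, 17):
     p_pass = 1 - NormalDist(mu=true_age, sigma=sigma).cdf(18)
     print(f"true age {true_age}: estimated as 18+ with p = {p_pass:.2f}")
 # roughly 0.29, 0.39 and 0.45 - far too leaky for enforcement]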
 
 
 P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
 violate anyone's privacy because it can't be used to identify individuals.
 Right.  They don't say who sponsored this research, but I sincerely doubt it
 was the vending machine companies or purveyors of Internet porn.
 
 
 It's good to question the true motives behind something like this, and
 where the funding comes from.  I do a lot of stuff with computer vision, and
 if someone came to me saying they wanted something to visually recognise the
 age of a person I'd tell them that they're probably wasting their time, and
 that indicators other than visual ones would be more likely to give a
 reliable result.
 
 
 
 




Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Ben Goertzel
Hi,


 CMR (my proposal) has no centralized control (global brain). It is a
 competitive market in which information has negative value. The environment
 is a peer-to-peer network where peers receive messages in natural language,
 cache a copy, and route them to appropriate experts based on content.
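 [For concreteness, a minimal sketch of the routing step described above -
 my own illustration, not code from the actual CMR proposal; the bag-of-words
 matching, the fixed peer list, and all names are assumptions:

 from collections import Counter

 def similarity(a: str, b: str) -> float:
     # Crude bag-of-words (Dice) overlap between two natural-language texts.
     wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
     shared = sum((wa & wb).values())
     return 2 * shared / max(1, sum(wa.values()) + sum(wb.values()))

 class Peer:
     def __init__(self, name: str, expertise: str):
         self.name = name
         self.expertise = expertise   # text advertising what this peer knows
         self.cache = []              # cached copies of messages seen

     def route(self, message: str, peers: list, top_k: int = 1) -> list:
         self.cache.append(message)   # keep a copy, as in the CMR description
         # forward to peers whose advertised expertise best matches content
         ranked = sorted(peers, key=lambda p: similarity(message, p.expertise),
                         reverse=True)
         return [p.name for p in ranked[:top_k]]

 peers = [Peer("vision-expert", "vision face age estimation from images"),
          Peer("market-expert", "prices auctions trading markets")]
 hub = Peer("hub", "general routing")
 print(hub.route("who can estimate a person's age from a face image?", peers))
 # -> ['vision-expert']]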


You seem to misunderstand the notion of a Global Brain; see

http://pespmc1.vub.ac.be/GBRAIFAQ.html

http://en.wikipedia.org/wiki/Global_brain

It does not require centralized control, but is in fact more focused on
emergent dynamical control mechanisms.



 I believe that CMR is initially friendly in the sense that a market is
 friendly.



Which is to say: dangerous, volatile, hard to predict ... and often not
friendly at all!!!


 A market is the most efficient way to satisfy the collective goals of its
 participants. It is fair, but not benevolent.


I believe this is an extremely oversimplistic and dangerous view of
economics ;-)

Traditional economic theory, which argues that free markets are optimally
efficient, is based on a patently false assumption of infinitely rational
economic actors.  This assumption is **particularly** poor when the
economic actors are largely **humans**, who are highly nonrational.

As a single isolated example, note that in the US right now, many people are
withdrawing their $$ from banks even if they have less than $100K in their
accounts ... even though the government insures bank accounts up to $100K.
What are they doing?  Insuring themselves against a total collapse of the US
economic system?  If so, they should be buying gold with their $$, but only a
few of them are doing that.  People are in large part emotional, not rational,
actors, and for this reason pure free markets involving humans are far from
the most efficient way to satisfy the collective goals of a set of humans.

Anyway a deep discussion of economics would likely be too big of a
digression, though it may be pertinent insofar as it's a metaphor for the
internal dynamics of an AGI ... (for instance Eric Baum, who is a fairly
hardcore libertarian politically, is in favor of free markets as a model for
credit assignment in AI systems ... and OpenCog/NCE contains an economic
attention allocation component...)
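[Since the Baum reference may be obscure to some readers: the idea is that
internal modules buy and sell the right to act, so credit flows backward
along successful chains. A toy sketch - my own simplification, not Baum's
actual system and not OpenCog code; the bids and numbers are invented:

import random

class Module:
    def __init__(self, name: str):
        self.name, self.wealth = name, 10.0
    def bid(self) -> float:
        # each module stakes a fraction of its wealth to take control
        return self.wealth * random.uniform(0.05, 0.15)

def episode(modules, steps: int = 5, reward: float = 3.0) -> None:
    prev = None
    for _ in range(steps):
        bids = {m: m.bid() for m in modules}
        winner = max(bids, key=bids.get)
        winner.wealth -= bids[winner]      # winner pays its bid...
        if prev is not None:
            prev.wealth += bids[winner]    # ...to the previous controller
        prev = winner
    prev.wealth += reward                  # environment rewards the final actor

random.seed(0)
modules = [Module(f"m{i}") for i in range(4)]
for _ in range(200):
    episode(modules)
print({m.name: round(m.wealth, 1) for m in modules})

Payments are conserved inside the system; only the terminal reward injects
new "money", which is what makes the scheme a credit-assignment mechanism.]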

ben g





Re: [agi] Let's face it, this is just dumb.

2008-10-03 Thread Gabriel Recchia
I remember reading a while back that certain Japanese vending machines
dispensing adult-only materials actually employed such age-estimation
software for a short time, but quickly pulled it after discovering that
teens were thwarting it by holding magazine covers up to the camera. No
floppy hat or Ronald Reagan mask necessary.

On Fri, Oct 3, 2008 at 6:00 AM, Brad Paulsen [EMAIL PROTECTED] wrote:







Re: [agi] Testing, and a question....

2008-10-03 Thread Colin Hales

Dear AGI folk,
I am testing my registration on the system, saying an inaugural 'hi', and
seeking guidance in respect of potential submissions for a presentation spot
at the next AGI conference.


It is time for me to become more visible in AGI after 5 years of research
and reprogramming my brain into the academic side of things... My plans as a
post-doc are to develop a novel chip technology. It will form the basis for
what I have called 'A-Fauna'. I call it A-Fauna because it will act like
biological organisms and take their place alongside natural fauna in a
chosen ecological niche. Like tending a field as a benign 'artificial
weed-killer'... they know and prefer their weeds... you get the idea. They
are AGI robots that learn (are coached) to operate in a specific role and
then are 'intellectually nobbled' (equivalent to biology), so their ability
to handle novelty is specifically and especially curtailed. They will also
be a whole bunch cheaper in that form... They are then deployed into that
specific role and will be happy little campers. These creatures are
different to typical mainstream AI fare because they cannot be taught how to
learn. They are like us: they learn how to learn. As a result they can
handle novelty better... a long story... Initially the A-Fauna is very
small, but potentially it could get to human level. The first part of the
development is the initial proof of specific physics, which requires a key
experiment. I can't wait to do this! The success of the experiment then
leads to development and miniaturisation and eventual application in a
prototype 'critter', which will then have to be proven to have
P-consciousness (using the test in 3 below)... anyway, that's the rough 'me'
of it.


I am in NICTA  www.nicta.com.au
Victoria Research Lab in the Life-Sciences theme.
Department of Electrical/Electronic Eng, University of Melbourne

So... the AGI-09 basic topics to choose from are:

1) Empirical refutation of computationalism
2) Another thought experiment refutation of computationalism. The 
Totally Blind Zombie Homunculus Room

3) An objective test for Phenomenal consciousness.
4) A novel backpropagation mechanism in an excitable cell 
membrane/syncytium context.


1) and 2) are interesting because the implication is that if anyone 
doing AGI lifts their finger over a keyboard thinking they can be 
directly involved in programming anything to do with the eventual 
knowledge of the creature...they have already failed. I don't know 
whether the community has internalised this yet. BTW that makes 4 ways 
that computationalism has been shot. How dead does it have to get? :-) I 
am less interested in these than the others.


3) Is a special test which can be used to empirically test for
P-consciousness in an embedded, embodied artificial agent. I need this test
framework for my future AGI developments... one day I need to be able to
point at my AGI robot and claim it is having experiences of a certain type
and to be believed. AGI needs a test like this to get scientific
credibility. 'So you claim it's conscious? Prove it!' This is problematic,
but I am reasonably sure I have worked out a way... So it needs some
attention (a paper is coming out sometime soon, I hope. They told me it was
accepted, anyway...). The test is double-blind/clinical style with
'wild-type' control and 'test subject'... BTW the computationalist contender
(1/2 above) is quite validly tested but would operate as a sham/placebo
control... because it is known it will always fail. Although anyone serious
enough can offer it as a full contender. Funnily enough it also proves
humans are conscious! In case you were wondering... humans are the wild-type
control.


4) Is my main PhD topic. I submit this time next year. (I'd prefer to do 
this because I can get funded to go to the conference!). It reveals a 
neural adaptation mechanism that is completely missing from present 
neural models. It's based on molecular electrodynamics of the neural 
membrane. The effect then operates in the syncytium as a regulatory 
(synchrony) bias operating in quadrature with (and roughly independent 
of) the normal synaptic adaptation.


I prefer 4) because of the funding but also because I'd much rather reveal
it to the AGI community - because that is my future... but I will defer to
the preferences of the group... I can always cover 1, 2, 3 informally when I
am there if there's any interest... so... which of these (any) is of
interest? I'm not sure of the kinds of things you folk want to hear about.
All comments are appreciated.


regards to all,

Colin Hales




Re: [agi] Testing, and a question....

2008-10-03 Thread Ben Goertzel
In terms of a paper submission to AGI-09, I think that your option 4 would
be of the most interest to the audience there.  By and large it's not a
'philosophy of AI' crowd so much as a 'how to build an AI' crowd...

I am also organizing a workshop on machine consciousness that will be in
Hong Kong in June 09, following the major consciousness conference there ...
for that workshop, your option 3 would be of great interest...

ben

On Fri, Oct 3, 2008 at 5:01 PM, Colin Hales [EMAIL PROTECTED] wrote:


Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Ben Goertzel
yah, I discuss this in chapter 2 of The Hidden Pattern ;-) ...

the short of it is: the self-model of such a mind will be radically
different than that of a current human, because we create our self-models
largely by analogy to our physical organisms ...

intelligences w/o fixed physical embodiment will still have self-models but
they will be less grounded in body metaphors ... hence radically different


we can explore this difference analytically, but it's hard for us to grok
empathically...

a hint of this is seen in the statement my son Zeb (who plays too many
videogames) made: "I don't like the real world as much as videogames because
in the real world I always have first-person view and can never switch to
third person"

one would suspect that minds w/o fixed embodiment would have more explicitly
contextualized inference, rather than so often positioning all their
inferences/ideas within one default context ... for starters...

ben

On Fri, Oct 3, 2008 at 8:43 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 The foundation of the human mind and system is that we can only be in one
 place at once, and can only be directly, fully conscious of that place. Our
 world picture, which we and, I think, AI/AGI tend to take for granted, is an
 extraordinary triumph over that limitation - our ability to conceive of the
 earth and universe around us, and of societies around us, projecting
 ourselves outward in space, and forward and backward in time. All animals
 are similarly based in the here and now.

 But, if only in principle, networked computers [or robots] offer the
 possibility for a conscious entity to be distributed and in several places
 at once, seeing and interacting with the world simultaneously from many
 POV's.

 Has anyone thought about how this would change the nature of identity and
 intelligence?







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: [agi] Testing, and a question....

2008-10-03 Thread Mike Tintner

Colin:

1) Empirical refutation of computationalism...

.. interesting because the implication is that if anyone
doing AGI lifts their finger over a keyboard thinking they can be
directly involved in programming anything to do with the eventual
knowledge of the creature...they have already failed. I don't know
whether the community has internalised this yet.

Colin,

I'm sure Ben is right, but I'd be interested to hear the essence of your 
empirical refutation. Please externalise it so we can internalise it :) 







Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Mike Tintner
I think either way - computers or robots - a distributed entity has to be
looking at the world from different POV's more or less simultaneously, even if
rapidly switching. My immediate intuitive response is that that would make the
entity much less self-ish - much more open to merging or uniting with others.

The idea of a distributed entity may well have the power to change our ideas
about God/the divine force/principle; I suspect those ideas are directly or
indirectly very 'located'. Even if we, say, think about God or the force being
everywhere, it's hard not to think of that as being the same force spread out.

But the idea of a distributed entity IMO opens up the possibility of an entity
with a highly multiple personality - and perhaps also might make it possible
to see all humans, say, and/or animals as one - an idea which has always given
me, personally, a headache.







Re: [agi] Testing, and a question....

2008-10-03 Thread Colin Hales

Hi Ben,

Excellent. #4 it is. I'll proceed on that basis. I can't get funding unless I
present... and the timing is perfect for my PhD, so I'll be working towards
that.

Hong Kong sounds good. I assume it's the 'Toward a Science of Consciousness
2009'... I'll chase it up. I didn't realise it was in Hong Kong. The last one
I went to was Tucson, 2006. It was a hoot. I wonder if Dave Chalmers will do
the 'end of consciousness' party and blues-slam. :-) We'll see. Consider me
'applied for' as a workshop. I'll do the applications ASAP.


regards,

Colin Hales


Ben Goertzel wrote:



Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Matt Mahoney
--- On Fri, 10/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 You seem to misunderstand the notion of a Global Brain; see

 http://pespmc1.vub.ac.be/GBRAIFAQ.html
 
 http://en.wikipedia.org/wiki/Global_brain

You are right. That is exactly what I am proposing.

 I believe that CMR is initially friendly in the sense that a market is
 friendly.

 Which is to say: dangerous, volatile, hard to predict ... and often not
 friendly at all!!!

I am open to alternative suggestions.

 A market is the most efficient way to satisfy the collective goals of its
 participants. It is fair, but not benevolent.

 I believe this is an extremely oversimplistic and dangerous view of economics
 ;-)

 Traditional economic theory, which argues that free markets are optimally
 efficient, is based on a patently false assumption of infinitely rational
 economic actors.  This assumption is **particularly** poor when the
 economic actors are largely **humans**, who are highly nonrational.

I think that CMR will make markets more rational. Humans will have more access 
to information, which will enable them to make more rational decisions. I 
believe that AGI will result in pervasive public surveillance of everyone. All 
of your movements, communication, and financial transactions will be public and 
instantly accessible to anyone. We will demand it, and AGI will make it cheap. 
Sure you could have secrets, but nobody will hire you, loan you money, or buy 
or sell you anything without knowing everything about you.

 Anyway a deep discussion of economics would likely be too big of a
 digression, though it may be pertinent insofar as it's a metaphor for the
 internal dynamics of an AGI ... (for instance Eric Baum, who is a fairly
 hardcore libertarian politically, is in favor of free markets as a model for
 credit assignment in AI systems ... and OpenCog/NCE contains an economic
 attention allocation component...)

Economics is not a metaphor, but is central to the design of distributed AGI. 
There are hard problems that need to be solved. Economic systems have positive 
feedback loops such as speculative investment that are unstable and can crash. 
AGI and instant communication can lead to events where most of the world's 
wealth can disappear in a wave of panic selling traveling at the speed of 
light. I don't believe that competition for resources and a market where 
information has negative value has positive feedback loops, but it is something 
that needs to be studied.

My concern is that trust networks are unstable. They may lead to monopolies, 
and rare but catastrophic failures when a peer with high reputation decides to 
cheat. This is not just a problem for CMR, but any AGI where knowledge comes 
from many people. How do you know which information to trust?
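[The worry can be made concrete with a standard beta-reputation model - my
own sketch, not part of CMR; trust is the mean of a Beta(good+1, bad+1)
posterior over the peer's honesty:

class BetaReputation:
    # Trust in a peer, modelled as the mean of Beta(good + 1, bad + 1).
    def __init__(self):
        self.good, self.bad = 0, 0

    def observe(self, honest: bool) -> None:
        if honest:
            self.good += 1
        else:
            self.bad += 1

    def trust(self) -> float:
        return (self.good + 1) / (self.good + self.bad + 2)

rep = BetaReputation()
for _ in range(1000):          # a peer behaves honestly for a long time...
    rep.observe(True)
rep.observe(False)             # ...then cheats once, catastrophically
print(f"trust after one defection: {rep.trust():.3f}")   # still ~0.998

This is exactly the failure mode described: a high-reputation peer retains
near-total trust immediately after defecting, unless old evidence is
discounted.]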

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] COMP = false

2008-10-03 Thread Colin Hales

Hi Mike,
I can give the highly abridged flow of the argument:

1) It refutes COMP, where COMP = Turing machine-style abstract symbol
manipulation. In particular the 'digital computer' as we know it.
2) The refutation happens in one highly specific circumstance. In being 
false in that circumstance it is false as a general claim.
3) The circumstances:  If COMP is true then it should be able to 
implement an artificial scientist with the following faculties:
   (a) scientific behaviour (goal-delivery of a 'law of nature', an 
abstraction BEHIND the appearances of the distal natural world, not 
merely the report of what is there),

   (b) scientific observation based on the visual scene,
   (c) scientific behaviour in an encounter with radical novelty. (This 
is what humans do)


The argument's empirical knowledge is:
1) The visual scene is visual phenomenal consciousness. A highly 
specified occipital lobe deliverable.
2) In the context of a scientific act, scientific evidence is 'contents 
of phenomenal consciousness'. You can't do science without it. In the 
context of this scientific act, visual P-consciousness and scientific 
evidence are identities. P-consciousness is necessary but on its own is 
not sufficient. Extra behaviours are needed, but these are a secondary 
consideration here.


NOTE: Do not confuse scientific observation  with the scientific 
measurement, which is a collection of causality located in the distal 
external natural world. (Scientific measurement is not the same thing as 
scientific evidence, in this context). The necessary feature of a visual 
scene is that it operate whilst faithfully inheriting the actual 
causality of the distal natural world. You cannot acquire a law of 
nature without this basic need being met.


3) Basic physics says that it is impossible for a brain to create a 
visual scene using only the inputs acquired by the peripheral stimulus 
received at the retina. This is due to fundamentals of quantum 
degeneracy. Basically there are an infinite number of distal external 
worlds that can deliver the exact same photon impact. The transduction 
that occurs in the retinal rod/cones is entirely a result of protein 
isomerisation. All information about distal origins is irretrievably
gone. An impacting photon could have come across the room or across the 
galaxy. There is no information about origins in the transduced data in 
the retina.


That established, you are then faced with a paradox:

(i) (3) says a visual scene is impossible.
(ii) Yet the brain makes one.
(iii) To make the scene some kind of access to distal spatial relations 
must be acquired as input data in addition to that from the retina.

(iv) There are only 2 places that it can come from...
   (a) via matter (which we already have - retinal impact at the 
boundary that is the agent periphery)
   (b) via space (at the boundary of the matter of the brain with 
space, the biggest boundary by far).
So, the conclusion is that the brain MUST acquire the necessary data via 
the spatial boundary route. You don't have to know how. You just have no 
other choice. There is no third party in there to add the necessary data 
and the distal world is unknown. There is literally nowhere else for the 
data to come from. Matter and Space exhaust the list of options. (There 
is always magical intervention ... but I leave that to the space cadets.)


That's probably the main novelty for the reader to encounter. But we are not
done yet.


Next empirical fact:
(v) When you create a Turing-COMP substrate the interface with space is
completely destroyed and replaced with the randomised machinations of 
the matter of the computer manipulating a model of the distal world. All 
actual relationships with the real distal external world are destroyed. 
In that circumstance the COMP substrate is implementing the science of 
an encounter with a model, not an encounter with the actual distal 
natural world.


No amount of computation can make up for that loss, because you are in a 
circumstance of an intrinsically unknown distal natural world, (the 
novelty of an act of scientific observation).

=> COMP is false.
OK.  There are subtleties here.
The refutation is, in effect, a result of saying you can't do it 
(replace a scientist with a computer) because you can't simulate inputs. 
It is just that the nature of 'inputs' has traditionally been impoverished
by assumptions born merely of cross-disciplinary blindness. Not enough
quantum mechanics or electrodynamics is done by those exposed to 'COMP'
principles.


This result, at first appearance, says you can't simulate a scientist. 
But you can! If you already know what is out there in the natural world 
then you can simulate a scientific act. But you don't - by definition  - 
you are doing science to find out! So it's not that you can't simulate a 
scientist, it is just that in order to do it you already have to know 
everything, so you don't want to ... it's 

Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Dr. Matthias Heger
1. We do not feel ourselves to be at exactly a single point in space.
Instead, we identify ourselves with our body, which consists of several parts
which are already at different points in space. Your eye is not at the same
place as your hand. I think this is proof that a distributed AGI will not
need to have a completely different conscious state for a model of its
position in space than we already have.

2. But to a certain degree you are of course right that we have a map of our
environment and we know our position (which is not a point, because of 1) in
this map. In the brain of a rat there are neurons which each represent a
position in the environment. Researchers could predict the position of the
rat just by looking into the rat's brain. (A toy sketch of this kind of
decoding appears after point 3 below.)

3. I think it is extremely important that we give an AGI no bias about space
and time such as we seem to have. Our intuitive understanding of space and
time is useful for our life on earth, but it is completely wrong, as we know
from the theory of relativity and quantum physics.
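[Regarding point 2, a minimal sketch of that kind of decoding - my own
illustration; the Gaussian tuning curves, the 50 simulated cells, and the
population-vector read-out are assumptions, not the actual rat studies:

import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(0.0, 1.0, size=(50, 2))  # each cell's preferred position

def firing_rates(pos, width=0.1):
    # Gaussian tuning: a cell fires most when the animal is near its center.
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))

true_pos = np.array([0.3, 0.7])
rates = np.clip(firing_rates(true_pos) + rng.normal(0, 0.01, 50), 0, None)

# Population-vector estimate: rate-weighted average of preferred positions.
decoded = (rates[:, None] * centers).sum(axis=0) / rates.sum()
print("true:", true_pos, "decoded:", decoded.round(3))]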

-Matthias Heger



-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, 4 October 2008 02:44
To: agi@v2.listbox.com
Subject: [agi] I Can't Be In Two Places At Once.







