Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Vladimir Nesov
On Wed, Oct 15, 2008 at 2:18 PM, David Hart [EMAIL PROTECTED] wrote:
 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales [EMAIL PROTECTED]
 wrote:

 So you'll just have to wait. Sorry. I also have patent/IP issues.

 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case there
 was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a completely
 mis-placed, undebatable and dead topic on the AGI list.

That'd be great.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Ben Goertzel
Actually, I think COMP=false is a perfectly valid subject for discussion on
this list.

However, I don't think discussions of the form "I have all the answers, but
they're top-secret and I'm not telling you, hahaha" are particularly useful.

So, speaking as a list participant, it seems to me this thread has probably
met its natural end, with this reference to proprietary weird-physics IP.

However, speaking as list moderator, I don't find this thread so off-topic
or unpleasant as to formally kill the thread.

-- Ben

On Wed, Oct 15, 2008 at 6:18 AM, David Hart [EMAIL PROTECTED] wrote:

 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales [EMAIL PROTECTED]
  wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case there
 was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a completely
 mis-placed, undebatable and dead topic on the AGI list. Maybe people who
 like Chinese Rooms will sign up for the new COMP=false list...

 -dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr Samuel Johnson





Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Jim Bromer
On Wed, Oct 15, 2008 at 10:14 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Actually, I think COMP=false is a perfectly valid subject for discussion on
 this list.

 However, I don't think discussions of the form "I have all the answers, but
 they're top-secret and I'm not telling you, hahaha" are particularly useful.

 So, speaking as a list participant, it seems to me this thread has probably
 met its natural end, with this reference to proprietary weird-physics IP.

 However, speaking as list moderator, I don't find this thread so off-topic
 or unpleasant as to formally kill the thread.

 -- Ben

If someone doesn't want to get into a conversation with Colin about
whatever it is that he is saying, then they should just exercise some
self-control and refrain from doing so.

I think Colin's ideas are pretty far out there. But that does not mean
that he has never said anything that might be useful.

My offbeat topic was never intended to be a discussion about the theory
itself. I believe that the Lord may have given me some direction about a
novel approach to logical satisfiability that I am working on, but I
don't want to discuss the details of the algorithms until I have had a
chance to see whether they work. What I wanted to discuss was whether or
not a good SAT solution would have a significant influence on AGI, and
whether the unlikely discovery of an unexpected breakthrough on SAT
would serve as rational evidence that the Lord helped me with the
theory.

Although I am skeptical about what I think Colin is claiming, there is
an obvious parallel between his case and mine.  There are relevant
issues which he wants to discuss even though his central claim seems
to be private, and these relevant issues may be interesting.

Colin's unusual reference to some solid path which cannot yet be
discussed is annoying partly because it is so obviously unfounded. If he
had the proof (or a method), then why isn't he writing it up (or working
it out)? A similar argument was made against me, by the way, but the
difference was that I never said that I had the proof or method. (I did
say that you should get used to a polynomial-time solution to SAT, but I
never said that I had a working algorithm.)

My point is that even though people may annoy you with what seem like
unsubstantiated claims, that does not disqualify everything they have
said. That rule could all too easily be applied to anyone who posts on
this list.

Jim Bromer




Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread David Hart
On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales
[EMAIL PROTECTED] wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


Exactly what qualia am I expected to feel when you say the words
'Intellectual Property'? (that's a rhetorical question, just in case there
was any doubt!)

I'd like to suggest that the COMP=false thread be considered a completely
mis-placed, undebatable and dead topic on the AGI list. Maybe people who
like Chinese Rooms will sign up for the new COMP=false list...

-dave





Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Eric Burton
"but I don't want to discuss the details about the
algorithms until I have gotten a chance to see if they work or not"

Hearing this makes my teeth gnash. GO AND IMPLEMENT THEM. THEN TELL US

On 10/15/08, Colin Hales [EMAIL PROTECTED] wrote:


 David Hart wrote:
 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales
 [EMAIL PROTECTED]
 wrote:


 So you'll just have to wait. Sorry. I also have patent/IP issues.


 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case
 there was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a
 completely mis-placed, undebatable and dead topic on the AGI list.
 Maybe people who like Chinese Rooms will sign up for the new
 COMP=false list...

 -dave
 Hi,
 The attendees here would like to be involved in the parenthood of real
 AGI, yes?

 I am being rather forthright in scientifically suggesting that an
 approach to that outcome focused entirely on COMP may not achieve that
 goal, and that a diversity of views is needed...and I have a non-COMP
 approach which is possibly a way to AGI.

 I know my claims have not been scientifically backed up. I will fix that.

 The fact is - COMP has already been refuted twice in print. I will be
 adding 2 more refutations. That is already 2 counts that make the term
 COMP-AGI an oxymoron. COMP was always a conjecture and has never been
 proven. The only recent assessment in the literature ends with the words
 "Computationalism is dead." Basic common sense dictates that if you are
 really keen on real AGI that is scientifically viable, then a diversity
 of approaches is advisable. According to Ben that seems to be the way of
 the group as a whole. I take some comfort from this. The necessary
 diversity requires that all manner of multidisciplinary scientists
 become interested and contribute. I intend to be one of those.

 So having 'shaken the tree' I'll leave it at that for now. I'll come
 back with publications to discuss and we can pick up the science of AGI
 from there. The first paper will be an objective test for
 P-consciousness in an artificial agent. A test I hope everyone's AGI
 candidates will be subjected to...so, back to work for me.

 enjoy.

 regards
 Colin Hales

 





COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Matt Mahoney
--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:

 The only reason for not connecting consciousness with AGI is a
 situation where one can see no mechanism or role for it. That inability
 is no proof there is noneand I have both to the point of having a
 patent in progress.  Yes, I know it's only my claim at the moment...but
 it's behind why I believe the links to machine consciousness are not
 optional, despite the cultural state/history of the field at the moment
 being less than perfect and folks cautiously sidling around
 consciousness like it was a bomb under their budgets.

Colin, I read your paper in publication that you were so kind to send me. For 
those who have not seen it, it is a well written, comprehensive survey of 
research in machine consciousness. It does not take a position on whether 
consciousness plays an essential role in AGI. (I understand that taking a 
controversial position probably would have resulted in rejection).

With regard to COMP, I assume you define COMP to be the position that 
everything the mind does is, in principle, computable. If I understand your 
position, consciousness does play a critical role in AGI. However, we don't 
know what it is. Therefore we need to find out by using scientific research, 
then duplicate that process (if possible) in a machine before it can achieve 
AGI.

Here and in your paper, you have not defined what consciousness is. Most 
philosophical arguments can be traced to disagreements about the meanings of 
words. In your paper you say that consciousness means having phenomenal states, 
but you don't define what a phenomenal state is.

Without a definition, we default to what we think it means. Everybody knows 
what consciousness is. It is something that all living humans have. We 
associate consciousness with properties of humans, such as having a name, a 
face, emotions, the ability to communicate in natural language, the ability to 
learn, to behave in ways we expect people to behave, to look like a human. 
Thus, we ascribe partial degrees of consciousness (with appropriate ethical 
treatment) to animals, video game characters, human shaped robots, and teddy 
bears.

To argue your position, you need to nail down a definition of consciousness. 
But that is hard. For example, you could define consciousness as having goals. 
So if a dog wants to go for a walk, it is conscious. But then a thermostat 
wants to keep the room at a set temperature, and a linear regression algorithm 
wants to find the best straight line fit to a set of points.
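Matt's reductio about the regression algorithm that "wants" the best fit can be made concrete with a small sketch (illustrative only; `fit_line` and its parameters are invented here, not part of any code discussed in the thread). Under a "having goals" definition of consciousness, even this loop relentlessly pursues an objective:

```python
# A least-squares line fit by gradient descent. Under a "having goals"
# definition of consciousness, this loop "wants" to minimize its error --
# which is exactly the reductio: goal pursuit alone is too weak a test.

def fit_line(points, lr=0.01, steps=2000):
    """Fit y = a*x + b to (x, y) pairs by minimizing mean squared error."""
    a, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to a and b.
        grad_a = sum(2 * (a * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in points) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

a, b = fit_line([(0, 1), (1, 3), (2, 5)])  # data lies on y = 2x + 1
```

Nobody would ascribe experience to `fit_line`, yet it satisfies the goal-seeking criterion as well as the dog or the thermostat does.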

You could define consciousness as the ability to experience pleasure and pain. 
But then you need a test to distinguish experience from mere reaction, or else 
I could argue that simple reinforcement learners like 
http://www.mattmahoney.net/autobliss.txt experience pain. It boils down to how 
you define "experience".
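The point about simple reinforcement learners can be sketched in a few lines (a toy in the spirit of the autobliss program linked above, not a reproduction of it; `train` and its reward scheme are invented for illustration). It learns a 2-input boolean function purely from reward and "pain" signals:

```python
import random

# A minimal reinforcement learner: it adjusts one estimate per input pair
# using only a reward of +1 or a "pain" signal of -1. Does it "experience"
# the -1, or merely react to it? The code cannot settle that question.

def train(target, episodes=500, seed=0):
    rng = random.Random(seed)
    # One preference value per input pair; positive means "answer 1 here".
    q = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}
    for _ in range(episodes):
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        guess = 1 if q[(a, b)] > 0 else 0
        reward = 1.0 if guess == target(a, b) else -1.0  # -1 is the "pain"
        # Nudge the preference toward whichever answer was rewarded.
        q[(a, b)] += 0.1 * reward * (1 if guess == 1 else -1)
    return q

q = train(lambda a, b: a and b)  # learns AND from reward/punishment alone
```

The learner reliably avoids the negative signal, which is all that "avoiding pain" operationally means here; that is why a definition resting on pleasure and pain needs a further test for experience.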

You could define consciousness as being aware of your own thoughts. But again, 
you must define "aware". We distinguish conscious or episodic memories, such as 
when I recalled yesterday something that happened last month, and unconscious 
or procedural memories, such as the learned skills in coordinating my leg 
muscles while walking. We can do studies to show that conscious memories are 
stored in the hippocampus and higher layers of the cerebral cortex, and 
unconscious memories are stored in the cerebellum. But that is not really 
helpful for AGI design. The important distinction is that we remember 
remembering conscious memories but not unconscious. Reading from conscious 
memory also writes into it. But I can simulate this process in simple programs, 
for example, a database that logs transactions.
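The transaction-log analogy can be written out directly (a toy sketch; the class and method names are invented for illustration). Each read is itself logged, so the store later "remembers remembering":

```python
# A toy store in which reading also writes: every recall is appended to a
# transaction log, giving the "remember remembering" property of episodic
# memory in a trivially simple program.

class LoggedMemory:
    def __init__(self):
        self.data = {}
        self.log = []  # transaction log of (operation, key) pairs

    def write(self, key, value):
        self.data[key] = value
        self.log.append(("write", key))

    def read(self, key):
        # The act of recall is itself recorded -- reading writes.
        self.log.append(("read", key))
        return self.data.get(key)

m = LoggedMemory()
m.write("yesterday", "recalled last month's event")
m.read("yesterday")
m.read("yesterday")
# m.log now holds one write followed by two reads of "yesterday".
```

A few lines of bookkeeping reproduce the distinguishing feature, which is why it is a weak criterion for consciousness.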

So if you can nail down a definition of consciousness without pointing to a 
human, I am willing to listen. Otherwise we default to the possibility of 
building AGI on COMP principles and then ascribing consciousness to it since it 
behaves just like a human.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-14 Thread Colin Hales



Matt Mahoney wrote:
 [...]


I am way past merely defining anything. I know what phenomenal fields 
are constructed of: virtual Nambu-Goldstone bosons. Brain material is 
best regarded as a radically anisotropic quasi-fluid undergoing massive 
phase changes on multiple time scales. The problem is one of 
thermodynamics, not abstract computation. Duplicating the boson 
generation inorganically and applying that process to regulatory 
mechanisms of learning is exactly what I plan for my AGI chips. The 
virtual particles were named Qualeons by some weird guy here that I was 
talking to one day. I forgot his name. I'd better find that out! I digress. :-)


It would take 3 PhD dissertations to cover everything from quantum 
mechanics to psychology. You have to be a polymath. And to see how they 
explain consciousness you need to internalise 'dual aspect science', 
from which perspective it's all obvious. I have to change the whole of 
science from single to dual aspect to make it understood.