[agi] Paper: Voodoo Correlations in Social Neuroscience

2009-01-15 Thread Mark Waser
http://machineslikeus.com/news/paper-voodoo-correlations-social-neuroscience

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf




Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Mark Waser

But how can it dequark the tachyon antimatter containment field?


Richard,

   You missed Mike Tintner's explanation . . . .

You're not thinking your argument through. Look carefully at my spontaneous 
chain:

COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL THINKING - WHAT A NICE 
DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME ON THIS etc. etc.



It can do this partly because
a) single ideas have multiple, often massively multiple, idea/domain 
connections in the human brain, and allow one to go off in any of multiple 
tangents/directions
b) humans have many things - and therefore multiple domains - on their 
mind at the same time - and can switch as above from the 
immediate subject to some other pressing subject domain (e.g. from 
economics/politics (local vs global) to the weather (what a nice day)).


So maybe it's worth taking 20 secs. of time - producing your own 
chain-of-free-association starting say with MAHONEY and going on for 
another 10 or so items - and trying to figure out how






- Original Message - 
From: Richard Loosemore r...@lightlink.com

To: agi@v2.listbox.com
Sent: Thursday, January 08, 2009 8:05 PM
Subject: Re: [agi] The Smushaby of Flatway.



Ronald C. Blue wrote:

[snip]
[snip] ... chaos stimulation because ... correlational wavelet opponent 
processing machine ... globally entangled ... Paul rf trap ... parallel

 modulating string pulses ... a relative zero energy value or

opponent process  ...   phase locked ... parallel opponent process
... reciprocal Eigenfunction ...  opponent process ... summation 
interference ... gaussian reference rf trap ...

 oscillon output picture ... locked into the forth harmonic ...
 ... entangled with its Eigenfunction ..

 [snip]
 That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore










Re: [agi] The Future of AGI

2008-11-26 Thread Mark Waser
- Original Message - 
From: Mike Tintner [EMAIL PROTECTED]

I should explain rationality


No Mike, you *really* shouldn't.  Repurposing words like you do merely leads 
to confusion, not clarity . . . .


Actual general intelligence in humans and animals is indisputably 
continuously screen-based.


You keep contending this with absolutely no evidence or proof.

You can have conscious intelligence without language, logic or maths. You 
can't have it without a screen - the continuous movie of consciousness. 
And that screen is not just vision but sound.


And how do you know this?

If you're smart,  I suggest, you'll acknowledge the truth here, which is 
that you know next to nothing about imaginative intelligence


I see, so if Ben is smart he'll acknowledge that you, with far less 
knowledge and experience, have the correct answer (despite being unable to 
explain it coherently enough to convince *anyone*).








RE: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser
Yeah.  Great headline -- Man beats dead horse beyond death!

I'm sure that there will be more details at 11.

Though I am curious . . . .  BillK, why did you think that this was worth 
posting?
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Thursday, November 20, 2008 9:43 AM
  Subject: **SPAM** RE: [agi] Professor Asim Roy Finally Publishes 
Controversial Brain Theory



  From the paper:
   
   This paper has proposed a new paradigm for the 
   internal mechanisms of the brain, one that postulates 
   that there are parts of the brain that control other parts. 
   
  Sometimes I despair.
   







Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser

???  Did you read the article?


Absolutely.  I don't comment on things without reading them (unlike some 
people on this list).  Not only that, I also read the paper that someone was 
nice enough to send the link for.


Now his 'new' theory may be old hat to you personally,  but apparently 
not to the majority of AI researchers, (according to the article).


The phrase "according to the article" is what is telling.  It is an improper 
(and incorrect) portrayal of the majority of AI researchers.


He must be saying something a bit unusual to have been fighting for ten 
years to get it published and accepted enough for him to now have been 
invited to do a workshop on his theory.


Something a bit unusual like Mike Tintner fighting us on this list for ten 
years and then finding someone to accept his theories and run a workshop? 
Note who is running the workshop . . . . not the normal BICA community for 
sure . . . .




- Original Message - 
From: BillK [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, November 20, 2008 10:37 AM
Subject: **SPAM** Re: [agi] Professor Asim Roy Finally Publishes 
Controversial Brain Theory



On Thu, Nov 20, 2008 at 3:06 PM, Mark Waser [EMAIL PROTECTED] wrote:

Yeah.  Great headline -- Man beats dead horse beyond death!

I'm sure that there will be more details at 11.

Though I am curious . . . .  BillK, why did you think that this was worth
posting?




???  Did you read the article?

---
Quote:
In the late '90s, Asim Roy, a professor of information systems at
Arizona State University, began to write a paper on a new brain
theory. Now, 10 years later and after several rejections and
resubmissions, the paper "Connectionism, Controllers, and a Brain
Theory" has finally been published in the November issue of IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and
Humans.

Roy's theory undermines the roots of connectionism, and that's why his
ideas have experienced a tremendous amount of resistance from the
cognitive science community. For the past 15 years, Roy has engaged
researchers in public debates, in which it's usually him arguing
against a dozen or so connectionist researchers. Roy says he wasn't
surprised at the resistance, though.

"I was attempting to take down their whole body of science," he
explained. "So I would probably have behaved the same way if I were in
their shoes."

No matter exactly where or what the brain controllers are, Roy hopes
that his theory will enable research on new kinds of learning
algorithms. Currently, restrictions such as local and memoryless
learning have limited AI designers, but these concepts are derived
directly from the idea that control is local, not high-level.
Possibly, a controller-based theory could lead to the development of
truly autonomous learning systems, and a next generation of
intelligent robots.

The sentiment that the science is stuck is becoming common to AI
researchers. In July 2007, the National Science Foundation (NSF)
hosted a workshop on the "Future Challenges for the Science and
Engineering of Learning." The NSF's summary of the "Open Questions in
Both Biological and Machine Learning" [see below] from the workshop
emphasizes the limitations in current approaches to machine learning,
especially when compared with biological learners' ability to learn
autonomously under their own self-supervision:

Virtually all current approaches to machine learning typically
require a human supervisor to design the learning architecture, select
the training examples, design the form of the representation of the
training examples, choose the learning algorithm, set the learning
parameters, decide when to stop learning, and choose the way in which
the performance of the learning algorithm is evaluated. This strong
dependence on human supervision is greatly retarding the development
and ubiquitous deployment of autonomous artificial learning systems.
Although we are beginning to understand some of the learning systems
used by brains, many aspects of autonomous learning have not yet been
identified.

Roy sees the NSF's call for a new science as an open door for a new
theory, and he plans to work hard to ensure that his colleagues
realize the potential of the controller model. Next April, he will
present a four-hour workshop on autonomous machine learning, having
been invited by the Program Committee of the International Joint
Conference on Neural Networks (IJCNN).
-


Now his 'new' theory may be old hat to you personally,  but apparently
not to the majority of AI researchers, (according to the article).  He
must be saying something a bit unusual to have been fighting for ten
years to get it published and accepted enough for him to now have been
invited to do a workshop on his theory.


BillK



Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser

I mean that people are free to decide if others feel pain.


Wow!  You are one sick puppy, dude.  Personally, you have just hit my "Do 
not bother debating with" list.


You can decide anything you like -- but that doesn't make it true.

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 4:44 PM
Subject: RE: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction




--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:

First, it is not clear people
are free to decide what makes pain real, at least
subjectively real.


I mean that people are free to decide if others feel pain. For example, a 
scientist may decide that a mouse does not feel pain when it is stuck in 
the eye with a needle (the standard way to draw blood) even though it 
squirms just like a human would. It is surprisingly easy to modify one's 
ethics to feel this way, as proven by the Milgram experiments and Nazi war 
crime trials.


If we have anything close to the advances in brain scanning and brain 
science

that Kurzweil predicts 1, we should come to understand the correlates of
consciousness quite well


No. I used autobliss ( http://www.mattmahoney.net/autobliss.txt ) and the 
roundworm C. elegans as examples of simple systems whose functions are 
completely understood, yet the question of whether such systems experience 
pain remains a philosophical question that cannot be answered by experiment.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Mark Waser

My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


Maybe I missed it, but why do you assume that, because qualia are atomic, 
they have no differentiable details?  Evolution is, quite correctly, going 
to give pain qualia higher priority and less ability to be shut down than 
red qualia.  In a good representation system, that means that searing hot is 
going to be *very* whatever and very tough to ignore.
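
To make "higher priority and less ability to be shut down" concrete, here is
a small illustrative sketch (Python, written for this archive; the signal
names, numbers, and the floor mechanism are invented for illustration and are
not drawn from anyone's actual architecture):

# Hypothetical illustration only: each quale-like signal carries a base
# priority plus a floor below which top-down suppression cannot push it.
# Pain-like signals get a high priority and a high floor; "red" gets neither.

def effective_salience(base_priority, floor, suppression):
    # Suppression lowers salience, but never below the signal's floor.
    return max(base_priority - suppression, floor)

signals = {
    "red patch":   {"base_priority": 0.3, "floor": 0.0},
    "searing hot": {"base_priority": 0.9, "floor": 0.7},
}

for name, s in signals.items():
    for suppression in (0.0, 0.5, 1.0):
        print(name, suppression,
              effective_salience(s["base_priority"], s["floor"], suppression))

# "red patch" can be suppressed all the way to zero; "searing hot" never
# drops below 0.7, i.e. it stays very tough to ignore.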




- Original Message - 
From: Harry Chesley [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 18, 2008 1:57 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem of 
consciousness




Richard Loosemore wrote:

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness
the other day.   It is intended for the AGI-09 conference, and it
can be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf



One other point: Although this is a possible explanation for our
subjective experience of qualia like red or soft, I don't see
it explaining pain or happy quite so easily. You can
hypothesize a sort of mechanism-level explanation of those by
relegating them to the older or lower parts of the brain (i.e.,
they're atomic at the conscious level, but have more effects at the
physiological level (like releasing chemicals into the system)),
but that doesn't satisfactorily cover the subjective side for me.


I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis
mechanism.  If there is a sharp boundary (as well there might be),
then this defines the point where the qualia kick in.  Pain receptors
are fairly easy:  they are primitive signal lines.  Emotions are, I
believe, caused by clusters of lower brain structures, so the
interface between lower brain and foreground is the place where
the foreground sees a limit to the analysis mechanisms.

More generally, the significance of the foreground is that it sets
a boundary on how far the analysis mechanisms can reach.

I am not sure why that would seem less satisfactory as an explanation
of the subjectivity.  It is a raw feel, and that is the key idea,
no?


My problem is if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? Red is relatively neutral, while searing hot is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it was just something like increased
activity (franticness) in response to searing hot, then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.











Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


I made up no rules.  I merely asked a question.  You are the one who makes a 
definition like this and then says that it is up to people to decide whether 
other humans feel pain or not.  That is hypocritical to an extreme.


I also believe that your definition is a total crock that was developed for 
no purpose other than to support your BS.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


I stated that *SOME* future machines will be able to feel pain.  I can 
define grounding, internal feedback and volition but feel no need to do so 
as properties of a Turing machine and decline to attempt to prove anything 
to you since you're so full of it that your mother couldn't prove to you 
that you were born.
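
For readers following the autobliss argument, here is a minimal sketch
(Python, written for this archive; it is *not* Matt's autobliss program) of
something that satisfies the bare letter of "negative reinforcement in a
system that learns" and nothing more -- the kind of trivial system that the
grounding/feedback/volition objection is aimed at:

import random

class TrivialLearner:
    """A bare-bones learner: one numeric preference per action."""

    def __init__(self, n_actions=2):
        self.weights = [0.0] * n_actions

    def act(self):
        # Choose the currently preferred action, breaking ties at random.
        best = max(self.weights)
        return random.choice([i for i, w in enumerate(self.weights) if w == best])

    def reinforce(self, action, reward):
        # A negative reward ("pain" under the disputed definition) lowers the
        # weight of the punished action; a positive reward raises it.
        self.weights[action] += reward

if __name__ == "__main__":
    learner = TrivialLearner()
    for _ in range(100):
        a = learner.act()
        # Hypothetical environment: action 0 is always punished, action 1 rewarded.
        learner.reinforce(a, -1.0 if a == 0 else 1.0)
    print(learner.weights)  # action 0 is avoided after the first punishment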



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 18, 2008 6:26 PM
Subject: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)




--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:


Autobliss has no grounding, no internal feedback, and no
volition.  By what definitions does it feel pain?


Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]











Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
 I am just trying to point out the contradictions in Mark's sweeping 
 generalizations about the treatment of intelligent machines

Huh?  That's what you're trying to do?  Normally people do that by pointing to 
two different statements and arguing that they contradict each other.  Not by 
creating new, really silly definitions and then trying to posit a universe 
where blue equals red so everybody is confused.

 But to be fair, such criticism is unwarranted. 

So exactly why are you persisting?

 Ethical beliefs are emotional, not rational,

Ethical beliefs are subconscious and deliberately obscured from the conscious 
mind so that defections can be explained away without triggering other 
primates' lie-detecting senses.  However, contrary to your antiquated beliefs, 
they are *purely* a survival trait with a very solid grounding.

 Ethical beliefs are also algorithmically complex

Absolutely not.  Ethical beliefs are actually pretty darn simple as far as the 
subconscious is concerned.  It's only when the conscious rational mind gets 
involved that ethics are twisted beyond recognition (just like all your 
arguments).

 so the result of this argument could only result in increasingly complex 
 rules to fit his model

Again, absolutely not.  You have no clue as to what my argument is yet you 
fantasize that you can predict its results.  BAH!

 For the record, I do have ethical beliefs like most other people

Yet you persist in arguing otherwise.  *Most* people would call that dishonest, 
deceitful, and time-wasting. 

 The question is not how should we interact with machines, but how will we? 

No, it isn't.  Study the results on ethical behavior when people are convinced 
that they don't have free will.

= = = = = 

BAH!  I should have quit answering you long ago.  No more.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 7:58 PM
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)


Just to clarify, I'm not really interested in whether machines feel 
pain. I am just trying to point out the contradictions in Mark's sweeping 
generalizations about the treatment of intelligent machines. But to be fair, 
such criticism is unwarranted. Mark is arguing about ethics. Everyone has 
ethical beliefs. Ethical beliefs are emotional, not rational, although we often 
forget this. Ethical beliefs are also algorithmically complex, so the result of 
this argument could only result in increasingly complex rules to fit his model. 
It would be unfair to bore the rest of this list with such a discussion.

For the record, I do have ethical beliefs like most other people, but 
they are irrelevant to the design of AGI. The question is not how should we 
interact with machines, but how will we? For example, when we develop the 
technology to simulate human minds in general, or to simulate specific humans 
who have died, common ethical models among humans will probably result in the 
granting of legal and property rights to these simulations. Since these 
simulations could reproduce, evolve, and acquire computing resources much 
faster than humans, the likely result will be human extinction, or viewed 
another way, our evolution into a non-DNA based life form. I won't offer an 
opinion on whether this is desirable or not, because my opinion would be based 
on my ethical beliefs.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

  From: Ben Goertzel [EMAIL PROTECTED]
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that 
actually does solve the problem of consciousness--correction)
  To: agi@v2.listbox.com
  Date: Tuesday, November 18, 2008, 6:29 PM





  On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] 
wrote:

--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

 Autobliss has no grounding, no internal feedback, and no
 volition.  By what definitions does it feel pain?

Now you are making up new rules to decide that autobliss doesn't 
feel pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.

You stated that machines can feel pain, and you stated that we 
don't get to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) 

  Clearly, this can be done, and has largely been done already ... 
though cutting and pasting or summarizing the relevant literature in emails 
would not be a productive use of time
   
and prove that these criteria are valid?


  That is a different issue, as it depends on the criteria of validity, 
of course...

  I think one can argue that these properties are necessary for a 
finite

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
 Seed AI is a myth.

Ah.  Now I get it.  You are on this list solely to try to slow down progress as 
much as possible . . . . (sorry that I've been so slow to realize this)

add-rule kill-file Matt Mahoney
  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 8:23 PM
  Subject: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other 
dangerous technologies...


Steve, what is the purpose of your political litmus test? If you are 
trying to assemble a team of seed-AI programmers with the correct ethics, 
forget it. Seed AI is a myth.
http://www.mattmahoney.net/agi2.html (section 2).

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Steve Richfield [EMAIL PROTECTED] wrote:

  From: Steve Richfield [EMAIL PROTECTED]
  Subject: Re: [agi] My prospective plan to neutralize AGI and other 
dangerous technologies...
  To: agi@v2.listbox.com
  Date: Tuesday, November 18, 2008, 6:39 PM


  Richard and Bill,


  On 11/18/08, BillK [EMAIL PROTECTED] wrote: 
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:
 I see how this would work:  crazy people never tell lies, so 
you'd be able
 to nail 'em when they gave the wrong answers.

Yup. That's how they pass lie detector tests as well.

They sincerely believe the garbage they spread around.

  In 1994 I was literally sold into servitude in Saudi Arabia as a sort 
of slave programmer (In COBOL on HP-3000 computers) to the Royal Saudi Air 
Force. I managed to escape that situation with the help of the same Wahhabist 
Sunni Muslims that are now causing so many problems. With that background, I 
think I understand them better than most people.

  As in all other societies, they are not given the whole truth, e.g. 
most have never heard of the slaughter at Medina, and believe that Mohamed 
never hurt anyone at all.

  My hope and expectation is that, by allowing people to research 
various issues as they work on their test, a LOT of people who might 
otherwise fail the test will instead reevaluate their beliefs, at least enough 
to come up with the right answers, whether or not they truly believe them. At 
least that level of understanding assures that they can carry on a reasoned 
conversation. This is a MAJOR problem now. Even here on this forum, many people 
still don't get reverse reductio ad absurdum.

  BTW, I place most of the blame for the middle east impasse on the 
West rather than on the East. The Koran says that most of the evil in the world 
is done by people who think they are doing good, which brings with it a good 
social mandate to publicly reconsider and defend any actions that others claim 
to be evil. The next step is to proclaim evil doers as unwitting agents of 
Satan. If there is still no good defense, then they drop the unwitting. Of 
course, us stupid uncivilized Westerners have fallen into this, and so 19 brave 
men sacrificed their lives just to get our attention, but even that failed to 
work as planned. Just what DOES it take to get our attention - a nuke in NYC? 
What the West has failed to realize is that they are playing a losing hand, but 
nonetheless, they just keep increasing the bet on the expectation that the 
other side will fold. They won't. I was as much intending my test for the sort 
of stupidity that nearly all Americans harbor as that carried by Al Queda. 
Neither side seems to be playing with a full deck.

  Steve Richfield


   







Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser

How do you propose grounding ethics?


Ethics is building and maintaining healthy relationships for the betterment 
of all.  Evolution has equipped us all with a good solid moral sense that 
frequently we don't/can't even override with our short-sighted selfish 
desires (that, more frequently than not, eventually end up screwing us over 
when we follow them).  It's pretty easy to ground ethics as long as you 
realize that there are some cases that are just too close to call with the 
information that you possess at the time you need to make a decision.  But 
then again, that's precisely what intelligence is -- making effective 
decisions under uncertainty.


I have a complex model that says some things are right and others are 
wrong.


That's nice -- but you've already pointed out that your model has numerous 
shortcomings such that you won't even stand behind it.  Why do you keep 
bringing it up?  It's like saying "I have an economic theory" when you 
clearly don't have the expertise to form a competent one.



So does everyone else. These models don't agree.


And lots of people have theories of creationism.  Do you want to use that to 
argue that evolution is incorrect?



How do you propose testing whether a model is correct or not?


By determining whether it is useful and predictive -- just like what we 
always do when we're practicing science (as opposed to spouting BS).


If everyone agreed that torturing people was wrong, then torture wouldn't 
exist.


Wrong.  People agree that things are wrong and then they go and do them 
anyway because they believe that it is beneficial for them.  Why do you 
spout obviously untrue BS?


How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition 
is that some entities are conscious and others are not, according to 
whichever definition you accept. But so what?


Wow!  You really do practice useless sophistry.  For definitions, correct 
simply means useful and predictive.  I'll go with whichever definition most 
accurately reflects the world.  Are you trying to propose that there is an 
absolute truth out there as far as definitions go?


Because people nevertheless make this arbitrary distinction in order to 
make ethical decisions.


So when lemmings go into the river you believe that they are correct and you 
should follow them?



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 9:35 AM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Sun, 11/16/08, Mark Waser [EMAIL PROTECTED] wrote:

I wrote:

 I think the reason that the hard question is
interesting at all is that it would presumably be OK to
torture a zombie because it doesn't actually experience
pain, even though it would react exactly like a human being
tortured. That's an ethical question. Ethics is a belief
system that exists in our minds about what we should or
should not do. There is no objective experiment you can do
that will tell you whether any act, such as inflicting pain
on a human, animal, or machine, is ethical or not. The only
thing you can measure is belief, for example, by taking a
poll.

What is the point to ethics?  The reason why you can't
do objective experiments is because *YOU* don't have a
grounded concept of ethics.  The second that you ground your
concepts in effects that can be seen in the real
world, there are numerous possible experiments.


How do you propose grounding ethics? I have a complex model that says some 
things are right and others are wrong. So does everyone else. These models 
don't agree. How do you propose testing whether a model is correct or not? 
If everyone agreed that torturing people was wrong, then torture wouldn't 
exist.



The same is true of consciousness.  The hard problem of
consciousness is hard because the question is ungrounded.
Define all of the arguments in terms of things that appear
and matter in the real world and the question goes away.
It's only because you invent ungrounded unprovable
distinctions that the so-called hard problem appears.


How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition 
is that some entities are conscious and others are not, according to 
whichever definition you accept. But so what?



Torturing a p-zombie is unethical because whether it feels
pain or not is 100% irrelevant in the real
world.  If it 100% acts as if it feels pain, then for
all purposes that matter it does feel pain.  Why invent this
mystical situation where it doesn't feel pain yet acts
as if it does?


Because people nevertheless make this arbitrary distinction in order to 
make ethical decisions. Torturing a p-zombie is only wrong according to 
some ethical models but not others. The same is true about doing animal 
experiments, or running

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
I have no doubt that if you did the experiments you describe, that the 
brains would be rearranged consistently with your predictions. But what 
does that say about consciousness?


What are you asking about consciousness?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:11 PM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Mon, 11/17/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Okay, let me phrase it like this:  I specifically say (or
rather I should have done... this is another thing I need to
make more explicit!) that the predictions are about making
alterations at EXACTLY the boundary of the analysis
mechanisms.

So, when we test the predictions, we must first understand
the mechanics of human (or AGI) cognition well enough to be
able to locate the exact scope of the analysis mechanisms.

Then, we make the tests by changing things around just
outside the reach of those mechanisms.

Then we ask subjects (human or AGI) what happened to their
subjective experiences.  If the subjects are ourselves -
which I strongly suggest must be the case - then we can ask
ourselves what happened to our subjective experiences.

My prediction is that if the swaps are made at that
boundary, then things will be as I state.  But if changes
are made within the scope of the analysis mechanisms, then
we will not see those changes in the qualia.

So the theory could be falsified if changes in the qualia
are NOT consistent with the theory, when changes are made at
different points in the system.  The theory is all about the
analysis mechanisms being the culprit, so in that sense it
is extremely falsifiable.

Now, correct me if I am wrong, but is there anywhere else
in the literature where you have you seen anyone make a
prediction that the qualia will be changed by the alteration
of a specific mechanism, but not by other, fairly similar
alterations?


Your predictions are not testable. How do you know if another person has 
experienced a change in qualia, or is simply saying that they do? If you 
do the experiment on yourself, how do you know if you really experience a 
change in qualia, or only believe that you do?


There is a difference, you know. Belief is only a rearrangement of your 
neurons. I have no doubt that if you did the experiments you describe, 
that the brains would be rearranged consistently with your predictions. 
But what does that say about consciousness?


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser

An excellent question from Harry . . . .

So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back the 
memories and analyze them, or (b) that I was actually not experiencing any 
qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just had, 
I would be able to appreciate the last few seconds of subjective 
experience.


So . . . . what if the *you* that you/we speak of is simply the attentional 
mechanism?  What if qualia are simply the way that other brain processes 
appear to you/the attentional mechanism?


Why would you be experiencing qualia when you were on autopilot?  It's 
quite clear from experiments that humans don't see things in their visual 
field when they are concentrating on other things in their visual field (for 
example, when you are told to concentrate on counting something that someone 
is doing in the foreground while a man in an ape suit walks by in the 
background).  Do you really have qualia from stuff that you don't sense 
(even though your sensory apparatus picked it up, it was clearly discarded 
at some level below the conscious/attentional level)?




- Original Message - 
From: Richard Loosemore [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 1:46 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem of 
consciousness




Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or on auto pilot. In one case, qualia manifest, 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an answer 
to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) as 
a Consciousness Holiday.


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we look at 
it.  It is not even logically possible to think about consciousness - any 
form of it, including *memories* of the consciousness that I had a few 
minutes ago, when I was driving along the road and talking to my companion 
without bothering to look at several large towns that we drove through - 
without applying the analysis mechanism to the consciousness episode.


So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back the 
memories and analyze them, or (b) that I was actually not experiencing any 
qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just had, 
I would be able to appreciate the last few seconds of subjective 
experience.


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably do 
not do any reflective, language-based philosophical thinking (like guinea 
pigs and crocodiles).  I want to say more, but will have to set it down in 
a longer form.


Does this seem to make sense so far, though?




Richard Loosemore










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mark Waser
I think the reason that the hard question is interesting at all is that 
it would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that 
exists in our minds about what we should or should not do. There is no 
objective experiment you can do that will tell you whether any act, such 
as inflicting pain on a human, animal, or machine, is ethical or not. The 
only thing you can measure is belief, for example, by taking a poll.


What is the point to ethics?  The reason why you can't do objective 
experiments is because *YOU* don't have a grounded concept of ethics.  The 
second that you ground your concepts in effects that can be seen in the 
real world, there are numerous possible experiments.


The same is true of consciousness.  The hard problem of consciousness is 
hard because the question is ungrounded.  Define all of the arguments in 
terms of things that appear and matter in the real world and the question 
goes away.  It's only because you invent ungrounded unprovable distinctions 
that the so-called hard problem appears.


Torturing a p-zombie is unethical because whether it feels pain or not is 
100% irrelevant in the real world.  If it 100% acts as if it feels pain, 
then for all purposes that matter it does feel pain.  Why invent this 
mystical situation where it doesn't feel pain yet acts as if it does?


Richard's paper attempts to solve the hard problem by grounding some of the 
silliness.  It's the best possible effort short of just ignoring the 
silliness and going on to something else that is actually relevant to the 
real world.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, November 15, 2008 10:02 PM
Subject: RE: [agi] A paper that actually does solve the problem of 
consciousness



--- On Sat, 11/15/08, Ed Porter [EMAIL PROTECTED] wrote:

With regard to the second notion,
that conscious phenomena are not subject to scientific explanation, there 
is

extensive evidence to the contrary. The prescient psychological writings of
William James, and Dr. Alexander Luria’s famous studies of the effects of
variously located bullet wounds on the minds of Russian soldiers after 
World

War II, both illustrate that human consciousness can be scientifically
studied. The effects of various drugs on consciousness have been
scientifically studied.


Richard's paper is only about the hard question of consciousness, that 
which distinguishes you from a P-zombie, not the easy question about mental 
states that distinguish between being awake or asleep.


I think the reason that the hard question is interesting at all is that it 
would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that exists 
in our minds about what we should or should not do. There is no objective 
experiment you can do that will tell you whether any act, such as inflicting 
pain on a human, animal, or machine, is ethical or not. The only thing you 
can measure is belief, for example, by taking a poll.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser

An understanding of what consciousness actually is, for
starters.

It is a belief.

No it is not.
And that statement (It is a belief) is a cop-out theory.


An understanding of what consciousness is requires a consensus definition 
of what it is.


For most people, it seems to be an undifferentiated mess that includes 
attentional components, intentional components, understanding components, 
and, frequently, experiential components (i.e. qualia).


If you only buy into the first three and do it in a very concrete fashion, 
consciousness (and ethics) isn't all that tough.


Or you can follow Alice and start debating the real meaning of the third 
and whether or not the fourth truly exists in anyone except yourself.


Personally, if something has a will (intentionality/goals) that it can focus 
effectively (attention and understanding), I figure that you'd better 
start treating it ethically for your own long-term self-interest.


Of course, that then begs the question of what ethics is . . . . but I think 
that that is pretty easy to solve as well . . . . 







Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
This does not mean that certain practices are good or bad. If there was 
such a thing, then there would be no debate about war, abortion, 
euthanasia, capital punishment, or animal rights, because these questions 
could be answered experimentally.


Given a goal and a context, there is absolutely such a thing as good or bad. 
The problem with the examples that you cited is that you're attempting to 
generalize to a universal answer across contexts (because I would argue that 
there is a useful universal goal) which is nonsensical.  All of this can be 
answered both logically and experimentally if you just ask the right 
question instead of engaging in vacuous hand-waving about how tough it all 
is after you've mindlessly expanded your problem beyond solution.



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 11, 2008 5:58 PM
Subject: **SPAM** Re: [agi] Ethics of computer-based cognitive 
experimentation




--- On Tue, 11/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:


Your 'belief' explanation is a cop-out because it
does not address any of the issues that need to be addressed
for something to count as a definition or an explanation of
the facts that need to be explained.


As I explained, animals that have no concept of death have nevertheless 
evolved to fear most of the things that can kill them. Humans have learned 
to associate these things with death, and invented the concept of 
consciousness as the large set of features which distinguishes living 
humans from dead humans. Thus, humans fear the loss or destruction of 
consciousness, which is equivalent to death.


Consciousness, free will, qualia, and good and bad are universal human 
beliefs. We should not confuse them with truth by asking the wrong 
questions. Thus, Turing sidestepped the question of "can machines think?" 
by asking instead "can machines appear to think?"  Since we can't (by 
definition) distinguish doing something from appearing to do something, it 
makes no sense for us to make this distinction.


Likewise, asking if it is ethical to inflict simulated pain on machines is 
asking the wrong question. Evolution favors the survival of tribes that 
practice altruism toward other tribe members and teach these ethical 
values to their children. This does not mean that certain practices are 
good or bad. If there was such a thing, then there would be no debate 
about war, abortion, euthanasia, capital punishment, or animal rights, 
because these questions could be answered experimentally.


The question is not "how should machines be treated?" The question is "how 
will we treat machines?"



My proposal is being written up now and will be available
at the end of tomorrow.  It does address all of the facts
that need to be explained.


I am looking forward to reading it.

-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Mark Waser

I've noticed lately that the paranoid fear of computers becoming
intelligent and taking over the world has almost entirely disappeared
from the common culture.


Is this sarcasm, irony, or are you that unaware of current popular culture 
(e.g. Terminator Chronicles on TV, a new Terminator movie in the works, I, 
Robot, etc.)?







Re: [agi] Occam's Razor and its abuse

2008-10-31 Thread Mark Waser

Let's try this . . . .

In "Universal Algorithmic Intelligence" on page 20, Hutter uses Occam's razor 
in the definition of ξ.

Then, at the bottom of the page, he merely claims that using ξ as an 
estimate for μ may be a reasonable thing to do


That's not a proof of Occam's Razor.
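
For readers without the paper at hand, the object under discussion has
roughly the following form (a from-memory sketch of the standard AIXI-style
mixture, not a verbatim copy of Hutter's definition):

\[
  \xi(x_{1:n}) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)} \, \nu(x_{1:n})
\]

where \mathcal{M} is the class of candidate environments and K(\nu) is the
length of the shortest program computing \nu.  The weight 2^{-K(\nu)} is
where the simplicity preference is built in by definition, and the step at
issue is then using \xi in place of the true (unknown) environment \mu.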

= = = = = =

He also references Occam's Razor on page 33 where he says:

"We believe the answer to be negative, which on the positive side would show 
the necessity of Occam's razor assumption, and the distinguishedness of 
AIXI."


That's calling Occam's razor a necessary assumption and basing it upon a 
*belief*.


= = = = = =

Where do you believe that he proves Occam's razor?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 29, 2008 10:46 PM
Subject: Re: [agi] Occam's Razor and its abuse



--- On Wed, 10/29/08, Mark Waser [EMAIL PROTECTED] wrote:


Hutter *defined* the measure of correctness using
simplicity as a component.
Of course, they're correlated when you do such a thing.
 That's not a proof,
that's an assumption.


Hutter defined the measure of correctness as the accumulated reward by the 
agent in AIXI.


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] Occam's Razor and its abuse

2008-10-31 Thread Mark Waser

I think Hutter is being modest.


Huh?

So . . . . are you going to continue claiming that Occam's Razor is proved 
or are you going to stop (or are you going to point me to the proof)?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 31, 2008 5:54 PM
Subject: Re: [agi] Occam's Razor and its abuse



I think Hutter is being modest.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Fri, 10/31/08, Mark Waser [EMAIL PROTECTED] wrote:


From: Mark Waser [EMAIL PROTECTED]
Subject: Re: [agi] Occam's Razor and its abuse
To: agi@v2.listbox.com
Date: Friday, October 31, 2008, 5:41 PM
Let's try this . . . .

In "Universal Algorithmic Intelligence" on page 20, Hutter
uses Occam's razor in the definition of ξ.

Then, at the bottom of the page, he merely claims that
using ξ as an estimate for μ may be a reasonable thing
to do

That's not a proof of Occam's Razor.

= = = = = =

He also references Occam's Razor on page 33 where he
says:

"We believe the answer to be negative, which on the
positive side would show the necessity of Occam's razor
assumption, and the distinguishedness of AIXI."

That's calling Occam's razor a necessary assumption
and basing it upon a *belief*.

= = = = = =

Where do you believe that he proves Occam's razor?


- Original Message - From: Matt Mahoney
[EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, October 29, 2008 10:46 PM
Subject: Re: [agi] Occam's Razor and its abuse


 --- On Wed, 10/29/08, Mark Waser
[EMAIL PROTECTED] wrote:

 Hutter *defined* the measure of correctness using
 simplicity as a component.
 Of course, they're correlated when you do such
a thing.
  That's not a proof,
 that's an assumption.

 Hutter defined the measure of correctness as the
accumulated reward by the agent in AIXI.

 -- Matt Mahoney, [EMAIL PROTECTED]



















Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
 However, it does seem clear that the integers (for instance) is not an 
 entity with *scientific* meaning, if you accept my formalization of science 
 in the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I would 
argue is well-defined and useful in science.  What is meaning if not 
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  "well-defined" is not well-defined in my view...

  However, it does seem clear that the integers (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have been WITH 
RESPECT TO THE DEFINITION OF NUMBERS since I was responding to "Numbers are not 
well-defined and can never be."  Further, I should not have said "information 
about numbers" when I meant "definition of numbers" -- two radically different 
things.  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question: "So my question is, do you 
interpret this as meaning 'Numbers are not well-defined and can never be' 
(constructivist), or do you interpret this as 'It is impossible to pack all 
true information about numbers into an axiom system' (classical)?"

Does the statement that a formal system is incomplete with respect to 
statements about numbers mean that "Numbers are not well-defined and can never 
be"?

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in 
Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
Ick!  I emphatically do not believe When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence.



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your argument 
as to why uncomputable entities are useless for science.  I'm going to need to 
go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  
It is not true that any formal system is doomed to be incomplete WITH RESPECT 
TO NUMBERS.

It is entirely possible (nay, almost certain) that there is a larger 
system where the information about numbers is complete but that the other 
things that the system describes are incomplete. 



  So my question is, do you interpret this as meaning Numbers are not
  well-defined and can never be (constructivist), or do you interpret
  this as It is impossible to pack all true information about numbers
  into an axiom system (classical)?



Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the can 
never be is incorrect).

Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-) 


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com

Sent: Tuesday, October 28, 2008 5:02 PM 

Subject: Re: [agi] constructivist issues



  Mark,

  That is thanks to Godel's incompleteness theorem. Any formal system

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser

(1) Simplicity (in conclusions, hypothesis, theories, etc.) is preferred.
(2) The preference to simplicity does not need a reason or justification.
(3) Simplicity is preferred because it is correlated with correctness.
I agree with (1), but not (2) and (3).


I concur but would add that (4) Simplicity is preferred because it is
correlated with correctness *of implementation* (or with ease of implementing
it correctly :-)



- Original Message - 
From: Pei Wang [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 10:15 PM
Subject: Re: [agi] Occam's Razor and its abuse



Eric,

I highly respect your work, though we clearly have different opinions
on what intelligence is, as well as on how to achieve it. For example,
though learning and generalization play central roles in my theory
about intelligence, I don't think PAC learning (or the other learning
algorithms proposed so far) provides a proper conceptual framework for
the typical situation of this process. Generally speaking, I'm not
building some system that learns about the world, in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future. I fully understand
that most people in this field probably consider this opinion wrong,
though I haven't been convinced yet by the arguments I've seen so far.

Instead of addressing all of the relevant issues, in this discussion I
have a very limited goal. To rephrase what I said initially, I see
that under the term Occam's Razor, currently there are three
different statements:

(1) Simplicity (in conclusions, hypothesis, theories, etc.) is preferred.

(2) The preference to simplicity does not need a reason or justification.

(3) Simplicity is preferred because it is correlated with correctness.

I agree with (1), but not (2) and (3). I know many people have
different opinions, and I don't attempt to argue with them here ---
these problems are too complicated to be settled by email exchanges.

However, I do hope to convince people in this discussion that the
three statements are not logically equivalent, and (2) and (3) are not
implied by (1), so to use Occam's Razor to refer to all of them is
not a good idea, because it is going to mix different issues.
Therefore, I suggest people to use Occam's Razor in its original and
basic sense, that is (1), and to use other terms to refer to (2) and
(3). Otherwise, when people talk about Occam's Razor, I just don't
know what to say.

Pei

On Tue, Oct 28, 2008 at 8:09 PM, Eric Baum [EMAIL PROTECTED] wrote:


Pei Triggered by several recent discussions, I'd like to make the
Pei following position statement, though won't commit myself to long
Pei debate on it. ;-)

Pei Occam's Razor, in its original form, goes like entities must not
Pei be multiplied beyond necessity, and it is often stated as All
Pei other things being equal, the simplest solution is the best or
Pei when multiple competing theories are equal in other respects,
Pei the principle recommends selecting the theory that introduces the
Pei fewest assumptions and postulates the fewest entities --- all
Pei from http://en.wikipedia.org/wiki/Occam's_razor

Pei I fully agree with all of the above statements.

Pei However, to me, there are two common misunderstandings associated
Pei with it in the context of AGI and philosophy of science.

Pei (1) To take this statement as self-evident or a stand-alone
Pei postulate

Pei To me, it is derived or implied by the insufficiency of
Pei resources. If a system has sufficient resources, it has no good
Pei reason to prefer a simpler theory.

With all due respect, this is mistaken.
Occam's Razor, in some form, is the heart of Generalization, which
is the essence (and G) of GI.

For example, if you study concept learning from examples,
say in the PAC learning context (related theorems
hold in some other contexts as well),
there are theorems to the effect that if you find
a hypothesis from a simple enough class of a hypotheses
it will with very high probability accurately classify new
examples chosen from the same distribution,

and conversely theorems that state (roughly speaking) that
any method that chooses a hypothesis from too expressive a class
of hypotheses will have a probability that can be bounded below
by some reasonable number like 1/7,
of having large error in its predictions on new examples--
in other words it is impossible to PAC learn without respecting
Occam's Razor.

For discussion of the above paragraphs, I'd refer you to
Chapter 4 of What is Thought? (MIT Press, 2004).
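
[A standard form of the first kind of theorem -- the Occam bound of Blumer,
Ehrenfeucht, Haussler, and Warmuth, sketched here for concreteness rather than
quoted from the book -- says: if a learner outputs any hypothesis h from a
finite class H that is consistent with m i.i.d. training examples, then with
probability at least 1 - \delta over the sample,

    \mathrm{err}(h) \le \frac{1}{m}\left( \ln|H| + \ln\frac{1}{\delta} \right)

so the smaller (simpler) the hypothesis class, the fewer examples are needed
for the same guarantee, and too-expressive classes admit no such bound.]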

In other words, if you are building some system that learns
about the world, it had better respect Occam's razor if you
want whatever it learns to apply to new experience.
(I use the term Occam's razor loosely; using
hypotheses that are highly constrained in 

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
 but we never need arbitrarily large integers in any particular case, we only 
 need integers going up to the size of the universe ;-)

But measured in which units?  For any given integer, I can come up with (invent 
:-) a unit of measurement that requires a larger/greater number than that 
integer to describe the size of the universe.



;-)  Nice try, but . . . .  :-p

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 9:48 AM
  Subject: Re: [agi] constructivist issues



  but we never need arbitrarily large integers in any particular case, we only 
need integers going up to the size of the universe ;-)


  On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser [EMAIL PROTECTED] wrote:

 However, it does seem clear that the integers (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I would
argue is well-defined and useful in science.  What is meaning if not
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  well-defined is not well-defined in my view...

  However, it does seem clear that the integers (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  "WITH RESPECT TO NUMBERS" should have been "WITH
RESPECT TO THE DEFINITION OF NUMBERS" since I was responding to "Numbers are not
well-defined and can never be."  Further, I should not have said "information
about numbers" when I meant "definition of numbers" -- two radically different
things.  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question So my question is, do 
you interpret this as meaning Numbers are not well-defined and can never be 
(constructivist), or do you interpret this as It is impossible to pack all 
true information about numbers into an axiom system (classical)?

Does the statement that a formal system is incomplete with respect to 
statements about numbers mean that Numbers are not well-defined and can never 
be.

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as 
in Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
Ick!  I emphatically do not believe When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence.



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  
:-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your 
argument as to why uncomputable entities are useless for science.  I'm going to 
need to go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

  That is thanks to Godel's incompleteness theorem. Any formal 
system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, 
NO!  It is not true that any formal system is doomed to be incomplete WITH 
RESPECT TO NUMBERS.

It is entirely possible (nay, almost certain) that there is a 
larger system where the information about numbers is complete but that the 
other things that the system describes are incomplete. 



  So my question is, do you interpret this as meaning Numbers are 
not
  well-defined and can never be (constructivist), or do you 
interpret

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser

Here's another slant . . . .

I really liked Pei's phrasing (which I consider to be the heart of 
Constructivism: The Epistemology :-)

Generally speaking, I'm not
building some system that learns about the world, in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future.


Classicists (to me) seem to want one and only one truth that must be accurate,
complete, and not only provable but such that proofs of all of its implications
exist (which is obviously thwarted by Tarski and Gödel).


So . . . . is it true that light is a particle?  Is it true that light is a
wave?


That's why Ben and I are stuck answering many of your questions with 
requests for clarification -- Which question -- pi or cat?  Which subset of 
what *might* be considered mathematics/arithmetic?  Why are you asking the 
question?


Certain statements appear obviously untrue (read inconsistent with the 
empirical world or our assumed extensions of it) in the vast majority of 
cases/contexts but many others are just/simply context-dependent.




- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 29, 2008 10:08 AM
Subject: Re: [agi] constructivist issues



Ben,

Thanks, that writeup did help me understand your viewpoint. However, I
don't completely understand/agree with the argument (one of the two,
not both!). My comments to that effect are posted on your blog.

About the earlier question...

(Mark) So Ben, how would you answer Abram's question So my question
is, do you interpret this as meaning Numbers are not well-defined and
can never be (constructivist), or do you interpret this as It is
impossible to pack all true information about numbers into an axiom
system (classical)?
(Ben) well-defined is not well-defined in my view...

To rephrase. Do you think there is a truth of the matter concerning
formally undecidable statements about numbers?

--Abram

On Tue, Oct 28, 2008 at 5:26 PM, Ben Goertzel [EMAIL PROTECTED] wrote:


Hi guys,

I took a couple hours on a red-eye flight last night to write up in more
detail my
argument as to why uncomputable entities are useless for science:

http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

Of course, I had to assume a specific formal model of science which may 
be

controversial.  But at any rate, I think I did succeed in writing down my
argument in a more
clear way than I'd been able to do in scattershot emails.

The only real AGI relevance here is some comments on Penrose's nasty AI
theories, e.g.
in the last paragraph and near the intro...

-- Ben G


On Tue, Oct 28, 2008 at 2:02 PM, Abram Demski [EMAIL PROTECTED] 
wrote:


Mark,

That is thanks to Godel's incompleteness theorem. Any formal system
that describes numbers is doomed to be incomplete, meaning there will
be statements that can be constructed purely by reference to numbers
(no red cats!) that the system will fail to prove either true or
false.

So my question is, do you interpret this as meaning Numbers are not
well-defined and can never be (constructivist), or do you interpret
this as It is impossible to pack all true information about numbers
into an axiom system (classical)?

Hmm By the way, I might not be using the term constructivist in
a way that all constructivists would agree with. I think
intuitionist (a specific type of constructivist) would be a better
term for the view I'm referring to.

--Abram Demski

On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] 
wrote:

 Numbers can be fully defined in the classical sense, but not in the

 constructivist sense. So, when you say fully defined question, do
 you mean a question for which all answers are stipulated by logical
 necessity (classical), or logical deduction (constructivist)?

 How (or why) are numbers not fully defined in a constructivist sense?

 (I was about to ask you whether or not you had answered your own
 question
 until that caught my eye on the second or third read-through).








--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, 
butcher

a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch 
manure,

program a computer, cook a tasty meal, fight efficiently, die gallantly

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
 sorry, I should have been more precise.   There is some K so that we never 
 need integers with algorithmic information exceeding K.

Ah . . . . but is K predictable?  Or do we need all the integers above it as 
a safety margin?   :-)

(What is the meaning of "need"?  :-)

The inductive proof to show that all integers are necessary as a safety margin 
is pretty obvious . . . .
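
(For what it's worth, the counting behind Ben's K is standard -- my gloss,
not his: there are fewer than 2^K binary programs shorter than K bits, so

    |\{\, n \in \mathbb{N} : K(n) < K \,\}| < 2^{K} ,

i.e. for any fixed K only finitely many integers have algorithmic information
below K -- which is exactly what the safety-margin induction (if every integer
up to n is needed, the margin demands n+1, so by induction they all are, as I
read the joke) keeps bumping into.)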

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 10:38 AM
  Subject: Re: [agi] constructivist issues



  sorry, I should have been more precise.   There is some K so that we never 
need integers with algorithmic information exceeding K.


  On Wed, Oct 29, 2008 at 10:32 AM, Mark Waser [EMAIL PROTECTED] wrote:

 but we never need arbitrarily large integers in any particular case, we 
only need integers going up to the size of the universe ;-)

But measured in which units?  For any given integer, I can come up with 
(invent :-) a unit of measurement that requires a larger/greater number than 
that integer to describe the size of the universe.



;-)  Nice try, but . . . .  :-p

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 9:48 AM
  Subject: Re: [agi] constructivist issues



  but we never need arbitrarily large integers in any particular case, we 
only need integers going up to the size of the universe ;-)


  On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser [EMAIL PROTECTED] wrote:

 However, it does seem clear that the integers (for instance) is 
not an entity with *scientific* meaning, if you accept my formalization of 
science in the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I would
argue is well-defined and useful in science.  What is meaning if not
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  well-defined is not well-defined in my view...

  However, it does seem clear that the integers (for instance) is not 
an entity with *scientific* meaning, if you accept my formalization of science 
in the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  "WITH RESPECT TO NUMBERS" should have been "WITH
RESPECT TO THE DEFINITION OF NUMBERS" since I was responding to "Numbers are not
well-defined and can never be."  Further, I should not have said "information
about numbers" when I meant "definition of numbers" -- two radically different
things.  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question So my question is, 
do you interpret this as meaning Numbers are not well-defined and can never 
be (constructivist), or do you interpret this as It is impossible to pack all 
true information about numbers into an axiom system (classical)?

Does the statement that a formal system is incomplete with respect 
to statements about numbers mean that Numbers are not well-defined and can 
never be.

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about 
constructivism as in Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
Ick!  I emphatically do not believe When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence.



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper 
hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your 
argument as to why uncomputable entities are useless for science.  I'm going to 
need to go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser
Hutter proved (3), although as a general principle it was already a well 
established practice in machine learning. Also, I agree with (4) but this 
is not the primary reason to prefer simplicity.


Hutter *defined* the measure of correctness using simplicity as a component. 
Of course, they're correlated when you do such a thing.  That's not a proof, 
that's an assumption.


Regarding (4), I was deliberately ambiguous as to whether I meant the
implementation of the thinking system or the implementation of thought itself.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 29, 2008 11:11 AM
Subject: Re: [agi] Occam's Razor and its abuse



--- On Wed, 10/29/08, Mark Waser [EMAIL PROTECTED] wrote:


 (1) Simplicity (in conclusions, hypothesis, theories,
 etc.) is preferred.
 (2) The preference to simplicity does not need a
 reason or justification.
 (3) Simplicity is preferred because it is correlated
 with correctness.
 I agree with (1), but not (2) and (3).

I concur but would add that (4) Simplicity is preferred
because it is
correlated with correctness *of implementation* (or ease of
implementation correctly :-)


Occam said (1) but had no proof. Hutter proved (3), although as a general 
principle it was already a well established practice in machine learning. 
Also, I agree with (4) but this is not the primary reason to prefer 
simplicity.


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

*That* is what I was asking about when I asked which side you fell on.

Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

The extensions are clearly judged on whether or not they accurately reflect 
the empirical world *as currently known* -- so they aren't arbitrary in that 
sense.


On the other hand, there may not be just a single set of extensions that
accurately reflects the world, so I guess that you could say that choosing
among sets of extensions that each accurately reflect the world is
(necessarily) an arbitrary process, since there is no additional information
to go on (though there are certainly heuristics like Occam's razor -- but
those are more about getting a theory that is usable, or more likely to hold
up under future observations, or more likely to be easily modified to match
future observations . . . .).


The world is real.  Our explanations and theories are constructed.  For any 
complete system, you can take the classical approach but incompleteness (of 
current information which then causes undecidability) ever forces you into 
constructivism to create an ever-expanding series of shells of stronger 
systems to explain those systems contained by them.


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 5:43 PM
Subject: Re: [agi] constructivist issues


Mark,

Sorry, I accidentally called you Mike in the previous email!

Anyway, you said:

Also, you seem to be ascribing arbitrariness to constructivism which
is emphatically not the case.

I didn't mean to ascribe arbitrariness to constructivism-- what I
meant was that constructivists would (as I understand it) ascribe
arbitrariness to extensions of arithmetic. A constructivist sees the
fact of the matter as undefined for undecidable statements, so adding
axioms that make them decidable is necessarily an arbitrary process.
The classical view, on the other hand, sees it as an attempt to
increase the amount of true information contained in the axioms-- so
there is a right and wrong.

*That* is what I was asking about when I asked which side you fell on.
Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

--Abram

On Mon, Oct 27, 2008 at 3:33 PM, Mark Waser [EMAIL PROTECTED] wrote:

The number of possible descriptions is countable


I disagree.


if we were able to randomly pick a real number between 1 and 0, it would
be indescribable with probability 1.


If we were able to randomly pick a real number between 1 and 0, it would 
be

indescribable with probability *approaching* 1.


Which side do you fall on?


I still say that the sides are parts of the same coin.

In other words, we're proving arithmetic consistent only by adding to 
its
definition, which hardly counts. The classical viewpoint, of course, is 
that

the stronger system is actually correct. Its additional axioms are not
arbitrary. So, the proof reflects the truth.


What is the stronger system other than an addition?  And the viewpoint 
that

the stronger system is actually correct -- is that an assumption? a truth?
what?  (And how do you know?)

Also, you seem to be ascribing arbitrariness to constructivism which is
emphatically not the case.


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 2:53 PM
Subject: Re: [agi] constructivist issues


Mark,

The number of possible descriptions is countable, while the number of
possible real numbers is uncountable. So, there are infinitely many
more real numbers that are individually indescribable, then
describable; so much so that if we were able to randomly pick a real
number between 1 and 0, it would be indescribable with probability 1.
I am getting this from Chaitin's book Meta Math!.
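
[Spelled out -- a standard sketch, not Chaitin's exact wording: descriptions
are finite strings over some finite alphabet \Sigma, so there are only
countably many of them,

    \bigl| \bigcup_{n \ge 1} \Sigma^{n} \bigr| = \aleph_0 ,

while Cantor's diagonal argument gives |[0,1]| = 2^{\aleph_0}.  And since any
countable subset of [0,1] has Lebesgue measure zero, a uniformly random real
in [0,1] lands outside it with probability exactly 1.]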

I believe that arithmetic is a formal and complete system.  I'm not a
constructivist where formal and complete systems are concerned (since
there is nothing more to construct).

Oh, I believe there is some confusion here because of my use of the
word arithmetic. I don't mean grade-school
addition/subtraction/multiplication/division. What I mean is the
axiomatic theory of numbers, which Godel showed to be incomplete if it
is consistent. Godel also proved that one of the incompletenesses in
arithmetic was that it could not prove its own consistency. Stronger
logical systems can and have proven its consistency, but any
particular logical system cannot prove its own consistency. It seems
to me that the constructivist viewpoint says, The so-called stronger
system merely defines truth in more cases; but, we could just as
easily take the opposite definitions. In other words, we're proving
arithmetic consistent only by adding to its definition, which hardly
counts. The classical viewpoint, of course, is that the stronger
system is actually correct. Its additional axioms are not arbitrary.
So, the proof reflects the truth.

Which side do you

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
Abram,

I could agree with the statement that there are uncountably many *potential* 
numbers but I'm going to argue that any number that actually exists is 
eminently describable.

Take the set of all numbers that are defined far enough after the decimal point 
that they never accurately describe anything manifest in the physical universe 
and are never described or invoked by any entity in the physical universe 
(specifically including a method for the generation of that number).

Pi is clearly not in the set since a) it describes all sorts of ratios in the 
physical universe and b) there is a clear formula for generating successive 
approximations of it.
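
A minimal sketch of point (b), for concreteness -- any convergent series
would do; this one uses the Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...,
which converges slowly but shows that pi is invoked by a short, finite rule
(Python):

    def pi_approximations(n_terms):
        # Partial sums of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...,
        # scaled by 4, give successive approximations of pi.
        total = 0.0
        for k in range(n_terms):
            total += (-1) ** k / (2 * k + 1)
            yield 4 * total

    for i, approx in enumerate(pi_approximations(10), start=1):
        print(i, approx)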

My question is -- do these numbers really exist?  And, if so, by what 
definition of exist since my definition is meant to rule out any form of 
manifestation whether physical or as a concept.

Clearly these numbers have the potential to exist -- but it should be equally 
clear that they do not actually exist (i.e. they are never individuated out 
of the class).

Any number which truly exists has at least one description, either of the form
a) "the number which is manifest as . . ." or b) "the number which is generated
by . . . ."

Classicists seem to want to insist that all of these potential numbers actually
do exist -- so they can make statements like "There are uncountably many real
numbers that no one can ever describe in any manner."

I ask of them (and you) -- Show me just one.  :-)






Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

Hi,

   We keep going around and around because you keep dropping my distinction 
between two different cases . . . .


   The statement that "The cat is red" is undecidable by arithmetic because
it can't even be defined in terms of the axioms of arithmetic (i.e. it has 
*meaning* outside of arithmetic).  You need to construct 
additions/extensions to arithmetic to even start to deal with it.


   The statement that "Pi is a normal number" is decidable by arithmetic
because each of the terms has meaning in arithmetic (so it certainly can be 
disproved by counter-example).  It may not be deducible from the axioms but 
the meaning of the statement is contained within the axioms.
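
(Unpacking "Pi is a normal number" in the standard way -- my sketch of the
definition, not a quote: pi is (absolutely) normal if, for every base b >= 2
and every string s of k base-b digits,

    \lim_{N \to \infty} \frac{\#\{\, i \le N : d_i d_{i+1} \cdots d_{i+k-1} = s \,\}}{N} = b^{-k}

where d_1 d_2 d_3 \ldots is the base-b expansion of pi.  Unwound, the limit
quantifies only over natural numbers and rationals, which is the sense in
which the statement stays inside arithmetic.)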


   The first example is what you call a constructivist view.  The second 
example is what you call a classical view.  Which one I take is eminently 
context-dependent and you keep dropping the context.  If the meaning of the 
statement is contained within the system, it is decidable even if it is not 
deducible.  If the meaning is beyond the system, then it is not decidable 
because you can't even express what you're deciding.


   Mark


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 9:32 AM
Subject: Re: [agi] constructivist issues



Mark,

You assert that the extensions are judged on how well they reflect the 
world.


The extension currently under discussion is one that allows us to
prove the consistency of Arithmetic. So, it seems, you count that as
something observable in the world-- no mathematician has ever proved a
contradiction from the axioms of arithmetic, so they seem consistent.
If this is indeed what you are saying, then you are in line with the
classical view in this respect (and with my opinion).

But, if this is your view, I don't see how you can maintain the
constructivist assertion that Godelian statements are undecidable
because they are undefined by the axioms. It seems that, instead, you
are agreeing with the classical notion that there is in fact a truth
of the matter concerning Godelian statements, we're just unable to
deduce that truth from the axioms.

--Abram

On Tue, Oct 28, 2008 at 7:21 AM, Mark Waser [EMAIL PROTECTED] wrote:

*That* is what I was asking about when I asked which side you fell on.


Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

The extensions are clearly judged on whether or not they accurately 
reflect
the empirical world *as currently known* -- so they aren't arbitrary in 
that

sense.

On the other hand, there may not be just a single set of extensions that
accurately reflect the world so I guess that you could say that choosing
among sets of extensions that both accurately reflect the world is
(necessarily) an arbitrary process since there is no additional 
information

to go on (though there are certainly heuristics like Occam's razor -- but
they are more about getting a usable or more likely to hold up under
future observations or more likely to be easily modified to match future
observations theory . . . .).

The world is real.  Our explanations and theories are constructed.  For 
any
complete system, you can take the classical approach but incompleteness 
(of
current information which then causes undecidability) ever forces you 
into

constructivism to create an ever-expanding series of shells of stronger
systems to explain those systems contained by them.











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
 The question that is puzzling, though, is: how can it be that these 
 uncomputable, inexpressible entities are so bloody useful ;-)  ... for 
 instance in differential calculus ...

Differential calculus doesn't use those individual entities . . . . 

 Also, to say that uncomputable entities don't exist because they can't be 
 finitely described, is basically just to *define* existence as finite 
 describability.

I never said any such thing.  I referenced a class of numbers that I defined as 
never physically manifesting and never being conceptually distinct and then 
asked if they existed.  Clearly some portion of your liver that I can't define 
finitely still exists because it is physically manifest.

 So this is more a philosophical position on what exists  means than an 
 argument that could convince anyone.

Yes, in that I basically defined my version of "exists" as "physically manifest
and/or described or invoked" and then asked if that matched Abram's definition.
No, in that you're now coming in with half (or less) of my definition and
arguing that I'm unconvincing.  :-)


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 11:44 AM
  Subject: Re: [agi] constructivist issues



  Mark,

  The question that is puzzling, though, is: how can it be that these 
uncomputable, inexpressible entities are so bloody useful ;-)  ... for instance 
in differential calculus ...

  Also, to say that uncomputable entities don't exist because they can't be 
finitely described, is basically just to *define* existence as finite 
describability.  So this is more a philosophical position on what exists  
means than an argument that could convince anyone.

  I have some more detailed thoughts on these issues that I'll write down 
sometime soon when I get the time.   My position is fairly close to yours but I 
think that with these sorts of issues, the devil is in the details.

  ben


  On Tue, Oct 28, 2008 at 6:53 AM, Mark Waser [EMAIL PROTECTED] wrote:

Abram,

I could agree with the statement that there are uncountably many 
*potential* numbers but I'm going to argue that any number that actually exists 
is eminently describable.

Take the set of all numbers that are defined far enough after the decimal 
point that they never accurately describe anything manifest in the physical 
universe and are never described or invoked by any entity in the physical 
universe (specifically including a method for the generation of that number).

Pi is clearly not in the set since a) it describes all sorts of ratios in 
the physical universe and b) there is a clear formula for generating successive 
approximations of it.

My question is -- do these numbers really exist?  And, if so, by what 
definition of exist since my definition is meant to rule out any form of 
manifestation whether physical or as a concept.

Clearly these numbers have the potential to exist -- but it should be 
equally clear that they do not actually exist (i.e. they are never 
individuated out of the class).

Any number which truly exists has at least one description either of the 
type of a) the number which is manifest as or b) the number which is generated 
by. 

Classicists seem to want to insist that all of these potential numbers 
actually do exist -- so they can make statements like There are uncountably 
many real numbers that no one can ever describe in any manner.  

I ask of them (and you) -- Show me just one.:-)








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects.  -- Robert Heinlein









Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?


It depends.  Are you asking me a fully defined question within the current
axioms of what you call mathematical systems (i.e. a "pi" question) or a "cat"
question (which could *eventually* be defined by some massive extensions to
your mathematical systems but which isn't currently defined in what you're
calling mathematical systems)?


Saying that Gödel is about mathematical systems is not saying that it's not 
about cat-including systems.


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 12:06 PM
Subject: Re: [agi] constructivist issues



Mark,

Yes, I do keep dropping the context. This is because I am concerned
only with mathematical knowledge at the moment. I should have been
more specific.

So, if I understand you right, you are saying that you take the
classical view when it comes to mathematics. In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?

--Abram

On Tue, Oct 28, 2008 at 10:20 AM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  We keep going around and around because you keep dropping my 
distinction

between two different cases . . . .

  The statement that The cat is red is undecidable by arithmetic 
because
it can't even be defined in terms of the axioms of arithmetic (i.e. it 
has

*meaning* outside of arithmetic).  You need to construct
additions/extensions to arithmetic to even start to deal with it.

  The statement that Pi is a normal number is decidable by arithmetic
because each of the terms has meaning in arithmetic (so it certainly can 
be
disproved by counter-example).  It may not be deducible from the axioms 
but

the meaning of the statement is contained within the axioms.

  The first example is what you call a constructivist view.  The second
example is what you call a classical view.  Which one I take is eminently
context-dependent and you keep dropping the context.  If the meaning of 
the
statement is contained within the system, it is decidable even if it is 
not

deducible.  If the meaning is beyond the system, then it is not decidable
because you can't even express what you're deciding.

  Mark


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 9:32 AM
Subject: Re: [agi] constructivist issues











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

Numbers can be fully defined in the classical sense, but not in the

constructivist sense. So, when you say fully defined question, do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

How (or why) are numbers not fully defined in a constructivist sense?

(I was about to ask you whether or not you had answered your own question 
until that caught my eye on the second or third read-through).



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 3:47 PM
Subject: Re: [agi] constructivist issues


Mark,

Thank you, that clarifies somewhat.

But, *my* answer to *your* question would seem to depend on what you
mean when you say fully defined. Under the classical interpretation,
yes: the question is fully defined, so it is a pi question. Under
the constructivist interpretation, no: the question is not fully
defined, so it is a cat question.

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say fully defined question, do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

--Abram Demski

On Tue, Oct 28, 2008 at 3:28 PM, Mark Waser [EMAIL PROTECTED] wrote:

In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?


It depends.  Are you asking me a fully defined question within the current
axioms of what you call mathematical systems (i.e. a pi question) or a cat
question (which could *eventually* be defined by some massive extensions 
to

your mathematical systems but which isn't currently defined in what you're
calling mathematical systems)?

Saying that Gödel is about mathematical systems is not saying that it's 
not

about cat-including systems.

- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 28, 2008 12:06 PM
Subject: Re: [agi] constructivist issues











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
 Any formal system that contains some basic arithmetic apparatus equivalent 
 to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with 
 respect to statements about numbers... that is what Godel originally 
 showed...

Oh.  Ick!  My bad phrasing.  "WITH RESPECT TO NUMBERS" should have been "WITH
RESPECT TO THE DEFINITION OF NUMBERS" since I was responding to "Numbers are not
well-defined and can never be."  Further, I should not have said "information
about numbers" when I meant "definition of numbers" -- two radically different
things.  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question So my question is, do you 
interpret this as meaning Numbers are not well-defined and can never be 
(constructivist), or do you interpret this as It is impossible to pack all 
true information about numbers into an axiom system (classical)?

Does the statement that a formal system is incomplete with respect to
statements about numbers mean that "Numbers are not well-defined and can never
be"?

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in 
Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me
(http://en.wikipedia.org/wiki/Constructivism_(mathematics)).  All I can say is
Ick!  I emphatically do not believe "When one assumes that an object does not
exist and derives a contradiction from that assumption, one still has not found
the object and therefore not proved its existence."



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your argument as 
to why uncomputable entities are useless for science.  I'm going to need to go 
back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus equivalent to 
http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with 
respect to statements about numbers... that is what Godel originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser [EMAIL PROTECTED] wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  It 
is not true that any formal system is doomed to be incomplete WITH RESPECT TO 
NUMBERS.

It is entirely possible (nay, almost certain) that there is a larger system 
where the information about numbers is complete but that the other things that 
the system describes are incomplete.



  So my question is, do you interpret this as meaning Numbers are not
  well-defined and can never be (constructivist), or do you interpret
  this as It is impossible to pack all true information about numbers
  into an axiom system (classical)?



Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the can 
never be is incorrect).

Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-)


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com

Sent: Tuesday, October 28, 2008 5:02 PM

Subject: Re: [agi] constructivist issues



  Mark,

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete, meaning there will
  be statements that can be constructed purely by reference to numbers
  (no red cats!) that the system will fail to prove either true or
  false.

  So my question is, do you interpret this as meaning Numbers are not
  well-defined and can never be (constructivist), or do you interpret
  this as It is impossible to pack all true information about numbers
  into an axiom system (classical)?

  Hmm By the way, I might not be using the term constructivist in
  a way that all constructivists would agree with. I think
  intuitionist (a specific type of constructivist) would be a better
  term for the view I'm referring to.

  --Abram Demski

  On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser [EMAIL PROTECTED] wrote:

Numbers can be fully defined in the classical sense, but not in the


constructivist sense. So, when you say fully defined question, do
you mean a question for which all answers are stipulated by logical

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
Or, in other words, you can't even start to draw a clear distinction in a small 
number of words.  That would argue that maybe those equalities aren't so silly 
after all.

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 7:38 PM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  Sorry, I'm just going to have to choose to be ignored on this topic ;-) ... I 
have too much AGI stuff to do to be spending so much time chatting on mailing 
lists ... and I've already published my thoughts on philosophy of science in 
The Hidden Pattern and online...

  ben g


  On Sun, Oct 26, 2008 at 9:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

 These equations seem silly to me ... obviously science is much more than 
that, as Mark should know as he has studied philosophy of science extensively

Mark is looking for well-defined distinctions.  Claiming that science is 
obviously much more than learning is a non sequitur.  What does science
include that learning does not?  Please be specific or you *should* be ignored.

The transmission or communication of results (or, as Matt puts it, 
language) is one necessary addition.  Do you wish to provide another or do you 
just want to say that there must be one without being able to come up with one?

Mark can still think of at least one other thing (which may be multiples 
depending upon how you look at it) but isn't comfortable that he has an optimal 
view of it so he's looking for other viewpoints/phrasings.

 Cognitively, the precursor for science seems to be Piaget's formal stage 
of cognitive development.  If you have a community of minds that have reached 
the formal stage, then potentially they can develop the mental and social 
patterns corresponding to the practice of science.

So how is science different from optimal formalized group learning?  What's 
the distinction?



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 11:14 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is 
no AGI




  These equations seem silly to me ... obviously science is much more than 
that, as Mark should know as he has studied philosophy of science extensively

  Cognitively, the precursor for science seems to be Piaget's formal stage 
of cognitive development.  If you have a community of minds that have reached 
the formal stage, then potentially they can develop the mental and social 
patterns corresponding to the practice of science.

  -- Ben


  On Sun, Oct 26, 2008 at 8:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:

 Would it then be accurate to say "SCIENCE = LEARNING + TRANSMISSION"?

 Or, how about, "SCIENCE = GROUP LEARNING"?


Science = learning + language.

-- Matt Mahoney, [EMAIL PROTECTED]








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, 
butcher a hog, conn a ship, design a building, write a sonnet, balance 
accounts, build a wall, set a bone, comfort the dying, take orders, give 
orders, cooperate, act alone, solve equations, analyze a new problem, pitch 
manure, program a computer, cook a tasty meal, fight efficiently, die 
gallantly. Specialization is for insects.  -- Robert Heinlein











  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects.  -- Robert Heinlein








Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
Hmmm.  I think that some of our miscommunication might have been due to the
fact that you seem to be talking about two things while I think that I'm
talking about a third . . . .


I believe that *meaning* is constructed.
I believe that truth is absolute (within a given context) and is a proper 
subset of meaning.
I believe that proof is constructed and is a proper subset of truth (and 
therefore a proper subset of meaning as well).


So, fundamentally, I *am* a constructivist as far as meaning is concerned 
and take Gödel's theorem to say that meaning is not completely defined or 
definable.


Since I'm being a constructivist about meaning, it would seem that your
statement that

"A constructivist would be justified in asserting the equivalence of
Gödel's incompleteness theorem and Tarski's undefinability theorem"

would mean that I was correct (or, at least, not wrong) in using Gödel's
theorem but probably not as clear as I could have been if I'd used Tarski,
since an additional condition/assumption (constructivism) was required.
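
(To keep my own bookkeeping straight -- the standard statements, sketched
from memory: the diagonal lemma says that for any arithmetic formula
\varphi(x) there is a sentence \psi with T \vdash \psi \leftrightarrow
\varphi(\ulcorner\psi\urcorner).  Tarski: no arithmetic formula True(x) can
satisfy T \vdash \psi \leftrightarrow True(\ulcorner\psi\urcorner) for every
sentence \psi, since applying the lemma to \neg True(x) would yield a liar
sentence.  Gödel: the provability predicate Prov_T(x) *is* definable, and
applying the lemma to \neg Prov_T(x) yields a sentence G that a consistent T
cannot prove (and, with \omega-consistency or Rosser's refinement, cannot
refute) -- so provability cannot coincide with truth.)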



So, interchanging the two theorems is fully justifiable in some
intellectual circles! Just don't do it when non-constructivists are
around :).


I guess the question is . . . . How many people *aren't* constructivists 
when it comes to meaning?  Actually, I get the impression that this mailing 
list is seriously split . . . .


Where do you fall on the constructivism of meaning?

- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 26, 2008 10:00 PM
Subject: Re: [agi] constructivist issues



Mark,

After some thought...

A constructivist would be justified in asserting the equivalence of
Godel's incompleteness theorem and Tarski's undefinability theorem,
based on the idea that truth is constructable truth. Where classical
logicians take Godels theorem to prove that provability cannot equal
truth, constructivists can take it to show that provability is not
completely defined or definable (and neither is truth, since they are
the same).

So, interchanging the two theorems is fully justifiable in some
intellectual circles! Just don't do it when non-constructivists are
around :).

--Abram

On Sat, Oct 25, 2008 at 6:18 PM, Mark Waser [EMAIL PROTECTED] wrote:
OK.  A good explanation and I stand corrected and more educated.  Thank 
you.


- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 6:06 PM
Subject: Re: [agi] constructivist issues



Mark,

Yes.

I wouldn't normally be so picky, but Godel's theorem *really* gets
misused.

Using Godel's theorem the way you did made it sound (to me) as if you had a
very fundamental confusion. You were using a theorem about the
incompleteness of proof to talk about the incompleteness of truth, so
it sounded like you thought "logically true" and "logically provable"
were equivalent, which is of course the *opposite* of what Godel
proved.

Intuitively, Godel's theorem says "If a logic can talk about number
theory, it can't have a complete system of proof."  Tarski's says, "If
a logic can talk about number theory, it can't talk about its own
notion of truth."  Both theorems rely on the Diagonal Lemma, which
states "If a logic can talk about number theory, it can talk about its
own proof method."  So, Tarski's theorem immediately implies Godel's
theorem: if a logic can talk about its own notion of proof, but not
its own notion of truth, then the two can't be equivalent!

So, since Godel's theorem follows so closely from Tarski's (even
though Tarski's came later), it is better to invoke Tarski's by
default if you aren't sure which one applies.
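
A compact way to put the standard statements (a sketch, glossing over the
details of arithmetization, and not a quote from any particular textbook):

  Diagonal Lemma: for any formula $\varphi(x)$ there is a sentence $G$ with
    $\mathrm{PA} \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner)$.
  Tarski: no arithmetic formula $\mathrm{True}(x)$ satisfies
    $\mathbb{N} \models \mathrm{True}(\ulcorner \sigma \urcorner) \leftrightarrow \sigma$ for every sentence $\sigma$.
  Godel I: if $\mathrm{PA}$ is consistent, diagonalizing $\neg\mathrm{Prov}(x)$ gives a $G$ with
    $\mathrm{PA} \nvdash G$ (and, assuming $\omega$-consistency, $\mathrm{PA} \nvdash \neg G$).

Since $\mathrm{Prov}(x)$ *is* arithmetically definable while (by Tarski)
$\mathrm{True}(x)$ is not, provability and truth cannot coincide -- which is
exactly the implication described above.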

--Abram

On Sat, Oct 25, 2008 at 4:22 PM, Mark Waser [EMAIL PROTECTED] 
wrote:


So you're saying that if I switch to using Tarski's theory (which I
believe
is fundamentally just a very slightly different aspect of the same
critical
concept -- but unfortunately much less well-known and therefore less
powerful as an explanation) that you'll agree with me?

That seems akin to picayune arguments over phrasing when trying to 
simply

reach general broad agreement . . . . (or am I misinterpreting?)

- Original Message - From: Abram Demski 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 5:29 PM
Subject: Re: [agi] constructivist issues






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
 You've now changed your statement to science = optimal formalized group 
 learning ... I'm not sure if this is intended as descriptive or prescriptive

Our previous e-mails about the sociology of science should have made it quite 
clear that it's not descriptive  ;-)  Of course it was intended to be 
prescriptive.  (Though, on second thought, if you removed the optimal, maybe 
it could be descriptive -- what do you think?)

And yes, I'm constantly changing the phrasing of my statement in an attempt to 
get my intended meaning across.  This is eventually going to loop back to my 
belief that the degree to which you are a general intelligence is the degree to 
which you're a(n optimal) scientist.  So I haven't really changed my basic 
point at all (although admittedly, I've certainly refined it some -- which is 
my whole purpose in having this discussion :-)

 Also, learning could be learning about mathematics, which we don't normally 
 think of as being science ...

True.  But I would argue that that is a shortcoming of our thinking.  This is 
similar to your previous cosmology example.  I'm including both under the 
umbrella of what you'd clearly be happier phrasing as a system of thought 
intended to guide a group in learning about . . . .

What would you say if I defined science as a system of thought intended to 
guide a group in learning about the empirical world and a scientist simply as 
someone who employs science (i.e. that system of thought).

I would also tend to think of system of thought as being interchangeable with 
process and/or method.

 If you said A scientific research programme is a system of thought intended 
 to guide a group in learning about some aspect of the empirical world (as 
 understood by this group) and formalizing their conclusions and methods I 
 wouldn't complain as much...

So you like SCIENCE PROGRAM = SYSTEM FOR GROUP LEARNING + FORMALIZATION OF 
RESULTS but you don't like SCIENCE = LEARNING + TRANSMISSION (which is the 
individual case) OR SCIENCE = GROUP LEARNING (which probably should have + 
CODIFICATION added to assist the learning of future group members).

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 27, 2008 10:55 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  No, it's really just that I've been spending too much time on this mailing 
list.  I've got an AGI to build, as well as too many other responsibilities ;-p

  You've now changed your statement to science = optimal formalized group 
learning ... I'm not sure if this is intended as descriptive or prescriptive

  Obviously, science as practiced is not optimal and has many cultural 
properties besides those implied by being group learning

  Also, learning could be learning about mathematics, which we don't normally 
think of as being science ...

  If you said A scientific research programme is a system of thought intended 
to guide a group in learning about some aspect of the empirical world (as 
understood by this group) and formalizing their conclusions and methods I 
wouldn't complain as much...

  ben



  On Mon, Oct 27, 2008 at 3:13 AM, Mark Waser [EMAIL PROTECTED] wrote:

Or, in other words, you can't even start to draw a clear distinction in a 
small number of words.  That would argue that maybe those equalities aren't so 
silly after all.

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 7:38 PM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is 
no AGI



  Sorry, I'm just going to have to choose to be ignored on this topic ;-) 
... I have too much AGI stuff to do to be spending so much time chatting on 
mailing lists ... and I've already published my thoughts on philosophy of 
science in The Hidden Pattern and online...

  ben g


  On Sun, Oct 26, 2008 at 9:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

 These equations seem silly to me ... obviously science is much more 
than that, as Mark should know as he has studied philosophy of science 
extensively

Mark is looking for well-defined distinctions.  Claiming that science 
is obviously much more than learning is a non sequitur.  What does science 
include that learning does not?  Please be specific or you *should* be ignored.

The transmission or communication of results (or, as Matt puts it, 
language) is one necessary addition.  Do you wish to provide another or do you 
just want to say that there must be one without being able to come up with one?

Mark can still think of at least one other thing (which may be 
multiples depending upon how you look at it) but isn't comfortable that he has 
an optimal view of it so he's looking for other viewpoints/phrasings.

 Cognitively, the precursor for science seems to be Piaget's formal 
stage of cognitive development.  If you have a community

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser

Hi,

   It's interesting (and useful) that you didn't use the word meaning until 
your last paragraph.



I'm not sure what you mean when you say that meaning is constructed,
yet truth is absolute. Could you clarify?


   Hmmm.  What if I say that meaning is your domain model and that truth is 
whether that domain model (or rather, a given proposition phrased in the 
semantics of the domain model) accurately represents the empirical world?
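
(If it helps to pin that down: what I'm gesturing at is close to the Tarskian
T-schema -- a sketch, not anyone's exact wording -- $\mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi$,
i.e. "snow is white" is true iff snow is white, with truth evaluated relative
to a fixed interpretation; here the interpretation is the domain model and the
evaluation is against the empirical world.)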


= = = = = = = =
I'm a classicalist in the sense that I think classical mathematics needs 
to be accounted for in a theory of meaning.


Would *anyone* argue with this?  Is there anyone (with a clue ;-) who isn't 
a classicist in this sense?


 I am also a classicalist in the sense that I think that the 
mathematically provable is a proper subset of the mathematically true, so 
that Gödelian truths are not undefined, just unprovable.


OK.  But that is talking about a formal (and complete -- though still 
infinite) system.


I might be called a constructivist in the sense that I think there needs 
to be a tight, well-defined connection between syntax and semantics...


Agreed but you seem to be overlooking the question of Syntax and semantics 
of what?


The semantics of an AGI's internal logic needs to follow from its 
manipulation rules.


Absolutely.


But, partly because I accept the

implementability of super-recursive algorithms, I think there is a
chance to allow at least *some* classical mathematics into the
picture. And, since I believe in the computational nature of the mind,
I think that any classical mathematics that *can't* fit into the
picture is literally nonsense! So, since I don't feel like much of
math is nonsense, I won't be satisfied until I've fit most of it in.

OK.  But I'm not sure where this is going . . . . I agree with all that 
you're saying but can't see where/how it's supposed to address/go back into 
my domain model ;-)




- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 11:05 AM
Subject: Re: [agi] constructivist issues


Mark,

I'm a classicalist in the sense that I think classical mathematics
needs to be accounted for in a theory of meaning. (Ben seems to think
that a constructivist can do this by equating classical mathematics
with axiom-systems-of-classical-mathematics, but I am unconvinced.) I
am also a classicalist in the sense that I think that the
mathematically provable is a proper subset of the mathematically true,
so that Godelian truths are not undefined, just unprovable.

I might be called a constructivist in the sense that I think there
needs to be a tight, well-defined connection between syntax and
semantics... The semantics of an AGI's internal logic needs to follow
from its manipulation rules. But, partly because I accept the
implementability of super-recursive algorithms, I think there is a
chance to allow at least *some* classical mathematics into the
picture. And, since I believe in the computational nature of the mind,
I think that any classical mathematics that *can't* fit into the
picture is literally nonsense! So, since I don't feel like much of
math is nonsense, I won't be satisfied until I've fit most of it in.

I'm not sure what you mean when you say that meaning is constructed,
yet truth is absolute. Could you clarify?

--Abram

On Mon, Oct 27, 2008 at 10:27 AM, Mark Waser [EMAIL PROTECTED] wrote:
Hmmm.  I think that some of our miscommunication might have been due to 
the

fact that you seem to be talking about two things while I think that I'm
talking about a third . . . .

I believe that *meaning* is constructed.
I believe that truth is absolute (within a given context) and is a proper
subset of meaning.
I believe that proof is constructed and is a proper subset of truth (and
therefore a proper subset of meaning as well).

So, fundamentally, I *am* a constructivist as far as meaning is concerned
and take Gödel's theorem to say that meaning is not completely defined or
definable.

Since I'm being a constructivist about meaning, it would seem that your
statement that


A constructivist would be justified in asserting the equivalence of
Gödel's incompleteness theorem and Tarski's undefinability theorem,


would mean that I was correct (or, at least, not wrong) in using Gödel's
theorem but probably not as clear as I could have been if I'd used Tarski
since an additional condition/assumption (constructivism) was required.


So, interchanging the two theorems is fully justifiable in some
intellectual circles! Just don't do it when non-constructivists are
around :).


I guess the question is . . . . How many people *aren't* constructivists
when it comes to meaning?  Actually, I get the impression that this 
mailing

list is seriously split . . . .

Where do you fall on the constructivism of meaning?

- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, October 26, 2008 10:00 PM
Subject: Re

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
Cool.  Thank you for the assist.

I think that math has the distinction that it is a closed formal system and 
that therefore people segregate it from the open mess that science has to deal 
with (though arguably the scientific method applies).

Art seems to be that which deals with an even bigger open mess (since it always 
includes humans in the system ;-) and which is even less codified though it 
seems to frequently want to migrate to be science.

= = = = = = 

Actually, in a way, it almost seems as if you want a spectrum running from  
MATH  through  SCIENCE  continuing through  ART  to  ??DISORDER??
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 27, 2008 12:07 PM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  I think you're converging on better and better wording ... however, I think 
somehow you do need to account for the differences between

  -- science

  on the one hand and

  -- math
  -- art

  etc. on the other hand, which also involve group learning and codification 
and communication of results, etc. ... but are different from science.  I'm not 
sure the best way to formalize the difference in general, in a way that 
encompasses all the cases of science and is descriptive rather than normative 
... but I haven't thought about it much and have other stuff to do...

  ben


  On Mon, Oct 27, 2008 at 8:40 AM, Mark Waser [EMAIL PROTECTED] wrote:

 You've now changed your statement to science = optimal formalized group 
learning ... I'm not sure if this is intended as descriptive or prescriptive

Our previous e-mails about the sociology of science should have made it 
quite clear that it's not descriptive  ;-)  Of course it was intended to be 
prescriptive.  (Though, on second thought, if you removed the optimal, maybe 
it could be descriptive -- what do you think?)

And yes, I'm constantly changing the phrasing of my statement in an attempt 
to get my intended meaning across.  This is eventually going to loop back to 
my belief that the degree to which you are a general intelligence is the degree 
to which you're a(n optimal) scientist.  So I haven't really changed my basic 
point at all (although admittedly, I've certainly refined it some -- which is 
my whole purpose in having this discussion :-)

 Also, learning could be learning about mathematics, which we don't 
normally think of as being science ...

True.  But I would argue that that is a shortcoming of our thinking.  This 
is similar to your previous cosmology example.  I'm including both under the 
umbrella of what you'd clearly be happier phrasing as a system of thought 
intended to guide a group in learning about . . . .

What would you say if I defined science as a system of thought intended to 
guide a group in learning about the empirical world and a scientist simply as 
someone who employs science (i.e. that system of thought).

I would also tend to think of system of thought as being interchangeable 
with process and/or method.

 If you said A scientific research programme is a system of thought 
intended to guide a group in learning about some aspect of the empirical world 
(as understood by this group) and formalizing their conclusions and methods I 
wouldn't complain as much...

So you like SCIENCE PROGRAM = SYSTEM FOR GROUP LEARNING + FORMALIZATION OF 
RESULTS but you don't like SCIENCE = LEARNING + TRANSMISSION (which is the 
individual case) OR SCIENCE = GROUP LEARNING (which probably should have + 
CODIFICATION added to assist the learning of future group members).

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 27, 2008 10:55 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is 
no AGI



  No, it's really just that I've been spending too much time on this 
mailing list.  I've got an AGI to build, as well as too many other 
responsibilities ;-p

  You've now changed your statement to science = optimal formalized group 
learning ... I'm not sure if this is intended as descriptive or prescriptive

  Obviously, science as practiced is not optimal and has many cultural 
properties besides those implied by being group learning

  Also, learning could be learning about mathematics, which we don't 
normally think of as being science ...

  If you said A scientific research programme is a system of thought 
intended to guide a group in learning about some aspect of the empirical world 
(as understood by this group) and formalizing their conclusions and methods I 
wouldn't complain as much...

  ben



  On Mon, Oct 27, 2008 at 3:13 AM, Mark Waser [EMAIL PROTECTED] wrote:

Or, in other words, you can't even start to draw a clear distinction in 
a small number of words.  That would argue that maybe those equalities aren't 
so silly after all.

- Original

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
I, being of the classical persuasion, believe that arithmetic is either 
consistent or inconsistent. You, to the extent that you are a 
constructivist, should say that the matter is undecidable and therefore 
undefined.


I believe that arithmetic is a formal and complete system.  I'm not a 
constructivist where formal and complete systems are concerned (since there 
is nothing more to construct).


On the other hand, if you want to try to get into the meaning of 
arithmetic . . . .


= = = = = = =

since the infinity of real numbers is larger than the infinity of 
possible names/descriptions.


Huh?  The constructivist in me points out that via compound constructions 
the infinity of possible names/descriptions is exponentially larger than the 
infinity of real numbers.  You can reference *any* real number to the extent 
that you can define it.  And yes, that is both a trick statement AND also 
the crux of the matter at the same time -- you can't name pi as a sequence 
of numbers but you certainly can define it by a description of what it is 
and what it does and any description can also be said to be a name (or a 
true name if you will :-).


If the Gödelian truths are unreachable because they are undefined, then 
there is something *wrong* with the classical insistence that they are 
true or false but we just don't know which.


They are undefined unless they are part of a formal and complete system.  If 
they are part of a formal and complete system, then they are defined but may 
be indeterminable.  There is nothing *wrong* with the classical insistence 
as long as it is applied to a limited domain (i.e. that of closed systems) 
which is what you are doing.



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 12:29 PM
Subject: Re: [agi] constructivist issues


Mark,

An example of people who would argue with the meaningfulness of
classical mathematics: there are some people who contest the concept
of real numbers. The cite things like that the vast majority of real
numbers cannot even be named or referenced in any way as individuals,
since the infinity of real numbers is larger than the infinity of
possible names/descriptions.

OK.  But I'm not sure where this is going . . . . I agree with all
that you're saying but can't see where/how it's supposed to address/go
back into my domain model ;-)

Well, you already agreed that classical mathematics is meaningful.
But, you also asserted that you are a constructivist where meaning is
concerned, and therefore collapse Godel's and Tarski's theorems. I do
not think you can consistently assert both! If the Godelian truths are
unreachable because they are undefined, then there is something
*wrong* with the classical insistence that they are true or false but
we just don't know which.

To take a concrete example: One of these truths that suffers from
Godelian incompleteness is the consistency of arithmetic. I, being of
the classical persuasion, believe that arithmetic is either consistent
or inconsistent. You, to the extent that you are a constructivist,
should say that the matter is undecidable and therefore undefined.
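
(In symbols, and only as a sketch of the standard result rather than a quote:
Godel's second incompleteness theorem says that if $\mathrm{PA}$ is consistent
then $\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})$, where $\mathrm{Con}(\mathrm{PA})$
is the arithmetized statement that no contradiction is derivable. So "arithmetic
is consistent" is exactly the kind of sentence that classical logic treats as
true or false while the system itself cannot settle it.)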

--Abram

On Mon, Oct 27, 2008 at 12:04 PM, Mark Waser [EMAIL PROTECTED] wrote:

Hi,

  It's interesting (and useful) that you didn't use the word meaning until
your last paragraph.


I'm not sure what you mean when you say that meaning is constructed,
yet truth is absolute. Could you clarify?


  Hmmm.  What if I say that meaning is your domain model and that truth is
whether that domain model (or rather, a given proposition phrased in the
semantics of the domain model) accurately represents the empirical world?

= = = = = = = =


I'm a classicalist in the sense that I think classical mathematics needs
to be accounted for in a theory of meaning.


Would *anyone* argue with this?  Is there anyone (with a clue ;-) who 
isn't

a classicist in this sense?


 I am also a classicalist in the sense that I think that the
mathematically provable is a proper subset of the mathematically true, so
that Gödelian truths are not undefined, just unprovable.


OK.  But that is talking about a formal (and complete -- though still
infinite) system.


I might be called a constructivist in the sense that I think there needs
to be a tight, well-defined connection between syntax and semantics...


Agreed but you seem to be overlooking the question of Syntax and 
semantics

of what?


The semantics of an AGI's internal logic needs to follow from its
manipulation rules.


Absolutely.


But, partly because I accept the


implementability of super-recursive algorithms, I think there is a
chance to allow at least *some* classical mathematics into the
picture. And, since I believe in the computational nature of the mind,
I think that any classical mathematics that *can't* fit into the
picture is literally nonsense! So, since I don't feel like much of
math is nonsense, I won't be satisfied until I've

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Mark Waser
 These equations seem silly to me ... obviously science is much more than 
 that, as Mark should know as he has studied philosophy of science extensively

Mark is looking for well-defined distinctions.  Claiming that science is 
obviously much more than learning is a non sequitur.  What does science 
include that learning does not?  Please be specific or you *should* be ignored.

The transmission or communication of results (or, as Matt puts it, language) is 
one necessary addition.  Do you wish to provide another or do you just want to 
say that there must be one without being able to come up with one?

Mark can still think of at least one other thing (which may be multiples 
depending upon how you look at it) but isn't comfortable that he has an optimal 
view of it so he's looking for other viewpoints/phrasings.

 Cognitively, the precursor for science seems to be Piaget's formal stage of 
 cognitive development.  If you have a community of minds that have reached 
 the formal stage, then potentially they can develop the mental and social 
 patterns corresponding to the practice of science.

So how is science different from optimal formalized group learning?  What's the 
distinction?



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 11:14 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  These equations seem silly to me ... obviously science is much more than 
that, as Mark should know as he has studied philosophy of science extensively

  Cognitively, the precursor for science seems to be Piaget's formal stage of 
cognitive development.  If you have a community of minds that have reached the 
formal stage, then potentially they can develop the mental and social patterns 
corresponding to the practice of science.

  -- Ben


  On Sun, Oct 26, 2008 at 8:08 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:

 Would it then be accurate to saySCIENCE = LEARNING +
 TRANSMISSION?

 Or, how about,SCIENCE = GROUP LEARNING?


Science = learning + language.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
AIXI says that a perfect solution is not computable. However, a very 
general principle of both scientific research and machine learning is to 
favor simple hypotheses over complex ones. AIXI justifies these practices 
in a formal way. It also says we can stop looking for a universal 
solution, which I think is important. It justifies our current ad-hoc 
approach to problem solving -- we have no choice.


Excellent.  Thank you.  Another good point to be pinned (since a number of 
people frequently go around and around on it).


Is there anything else that it tells us that is useful and not a 
distraction?


- - - - - - - - - - - - - -
Also, since you invoked the two in the same sentence as if they were 
different things . . . .


What is the distinction between scientific research and machine learning 
(other than who performs it, of course).  Or, re-phrased, what is the 
difference between a machine doing scientific research and a machine that is 
simply learning?


I'd love to hear everybody chiming in on that last question



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 4:54 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote:


Cool.  And you're saying that intelligence is not
computable.  So why else
are we constantly invoking AIXI?  Does it tell us anything
else about
general intelligence?


AIXI says that a perfect solution is not computable. However, a very 
general principle of both scientific research and machine learning is to 
favor simple hypotheses over complex ones. AIXI justifies these practices 
in a formal way. It also says we can stop looking for a universal 
solution, which I think is important. It justifies our current ad-hoc 
approach to problem solving -- we have no choice.
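
Schematically, in Hutter's standard formulation (a sketch only, with the
horizon details omitted), AIXI picks actions by

  $a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} (r_t + \cdots + r_m) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$

where the innermost sum ranges over *all* programs $q$ for a universal machine
$U$ that reproduce the interaction history, weighted by $2^{-\ell(q)}$ so that
shorter programs count for more (Occam's Razor built in). That sum over all
programs is what makes the exact agent incomputable, and the $2^{-\ell(q)}$
weighting is the formal justification for favoring simple hypotheses.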


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
 Anyway language issues are just not the main problem in creating AGI.  
 Getting the algorithms and structures and cognitive architecture right are 
 dramatically more important.

Strong agreement with what you say, but then effective rejection of it as a valid 
point, because language issues frequently are a total barrier to entry for 
people who might otherwise have been able to do the algorithms and structures and 
cognitive architecture.

I'll even go so far as to use myself as an example.  I can easily do C++ (since 
I've done so in the past) but all the baggage around it makes me consider it not 
worth my while.  I certainly won't hesitate to use what is learned on that 
architecture but I'll be totally shocked if you aren't massively leap-frogged 
because of the inherent shortcomings of what you're trying to work with.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 7:40 PM
  Subject: **SPAM** Re: [agi] On programming languages



  Mark,

  In OpenCog we use all sorts of libraries for all sorts of things, of course.  
 Like everyone else we try to avoid reinventing the wheel.  We nearly always 
avoid coding our own data structures, using either Boost or STL stuff, or 
third-party stuff such as the vtree library that is the basis of PLN and MOSES 
libraries (soon to be replaced with Moshe's superior variant treetree, though 
;-).

  The peeve you have seems to be with the Atomspace, which is custom code for 
managing the Atom knowledge base ... but this is one piece of code that was 
written in 2001 and works and has not consumed a significant percentage of the 
time of the project.   This particular object seemed so central to the system 
and so performance and memory-usage critical that it seemed worthwhile to 
create it in a custom way.  But even if this judgment was wrong (and I'm not 
saying it was) it does not represent a particularly large impact on the project.

  The main problem I have seen with using C++ for OpenCog is the large barrier 
to entry.  Not that many programmers are really good at C++.  But LISP has the 
same problem.  For ease of entry I'd probably choose Java, I guess ... or C# if 
Mono were better.

  Of course, C++ being a complex language there are plusses and minuses to 
various choices within it.  We've made really good use of the power afforded by 
templates, but it's also true that debugging complex template constructs can be 
a bitch.

  Anyway language issues are just not the main problem in creating AGI.  
Getting the algorithms and structures and cognitive architecture right are 
dramatically more important.

  Ben G




  On Fri, Oct 24, 2008 at 3:51 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Relatively a small amount of code is my own creation, and the libraries 
I used, e.g. Sesame, Cglib, are well maintained.

Steve is a man after my own heart.  Grab the available solid 
infrastructure/libraries and build on top of it/them.

To me, it's all a question of the size and coherence of the communities 
building and maintaining the infrastructure.  My personal *best guess* is that 
the Windows community is more cohesive and therefore the rate of interoperable 
infrastructure is growing faster.  It's even clearer that *nix started with a 
big lead.  Currently I'd still say that which is best to use for any given 
project depends upon the project timeline, your comfort factor, whether or not 
you're willing to re-write and/or port, etc., etc. -- but I'm also increasingly 
of the *opinion* that the balance is starting to swing and swing hard . . . . 
(but I'm not really willing to defend that *opinion* against entrenched 
resistance -- merely to suggest and educate to those who don't know all of the 
things that are now available out-of-the-box).

The only people that I mean to criticize are those who are attempting to do 
everything themselves and are re-inventing the same things that many others are 
doing and continue to do . . . 
  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 1:42 PM
  Subject: **SPAM** Re: [agi] On programming languages


  Hi Mark,

  I readily concede that .Net is superior to Java out-of-the box with 
respect to reflection and metadata support as you say.  I spent my first 
project year creating three successive versions of a Java persistence framework 
for an RDF quad store using third party libraries for these features.  Now I am 
completely satisfied with respect to these capabilities.  Relatively a small 
amount of code is my own creation, and the libraries I used, e.g. Sesame, 
Cglib, are well maintained.

  -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860 




  - Original Message 
  From: Mark Waser [EMAIL PROTECTED

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
The fact that Occam's Razor works in the real world suggests that the 
physics of the universe is computable. Otherwise AIXI would not apply.


Hmmm.  I don't get this.  Occam's razor simply says go with the simplest 
explanation until forced to expand it and then only expand it as necessary.


How does this suggest that the physics of the universe is computable?

Or conversely, why and how would Occam's razor *not* work in a universe 
where the physics aren't computable.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 1:41 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:


 AIXI says that a perfect solution is not computable. However, a very
 general principle of both scientific research and machine learning is 
 to
 favor simple hypotheses over complex ones. AIXI justifies these 
 practices

 in a formal way. It also says we can stop looking for a universal
 solution, which I think is important. It justifies our current ad-hoc
 approach to problem solving -- we have no choice.

Excellent.  Thank you.  Another good point to be pinned
(since a number of  people frequently go around and around on it).

Is there anything else that it tells us that is useful and
not a  distraction?


The fact that Occam's Razor works in the real world suggests that the 
physics of the universe is computable. Otherwise AIXI would not apply.
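
One way to see the connection (sketched in the usual Solomonoff-induction
notation, not proved): the universal prior is

  $M(x) = \sum_{p\,:\,U(p)\ \text{begins with}\ x} 2^{-\ell(p)}$

summing over programs $p$ whose output on a universal machine $U$ starts with
the observed data $x$. Under $M$, data with short generating programs -- simple
hypotheses -- get exponentially more weight, which is Occam's Razor. But $M$
only assigns weight to environments that have *some* generating program, so the
empirical success of Occam's Razor is evidence for (though of course not a
proof of) a computable environment.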



- - - - - - - - - - - - - -
Also, since you invoked the two in the same sentence as if
they were  different things . . . .

What is the distinction between scientific research and machine learning
(other than who performs it, of course).  Or, re-phrased, what is the
difference between a machine doing scientific research and
a machine that is  simply learning?

I'd love to hear everybody chiming in on that last question


Scientists choose experiments to maximize information gain. There is no 
reason that machine learning algorithms couldn't do this, but often they 
don't.
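
A toy sketch of what "choose the experiment that maximizes information gain"
can look like in code (my own illustration -- the function names and data
layout are made up for the example, not taken from any package):

import math

def entropy(dist):
    # Shannon entropy (bits) of a {hypothesis: probability} dictionary
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_information_gain(prior, likelihood, experiment):
    # likelihood[(h, experiment)] is {outcome: P(outcome | h, experiment)}
    outcomes = set()
    for h in prior:
        outcomes |= set(likelihood[(h, experiment)])
    gain = entropy(prior)
    for o in outcomes:
        p_o = sum(prior[h] * likelihood[(h, experiment)].get(o, 0.0) for h in prior)
        if p_o == 0.0:
            continue
        posterior = {h: prior[h] * likelihood[(h, experiment)].get(o, 0.0) / p_o
                     for h in prior}
        gain -= p_o * entropy(posterior)   # subtract expected remaining uncertainty
    return gain

def choose_experiment(prior, likelihood, experiments):
    # pick whichever candidate experiment most reduces expected uncertainty
    return max(experiments, key=lambda e: expected_information_gain(prior, likelihood, e))

With a uniform prior over two hypotheses, an experiment that discriminates them
perfectly scores a full bit of expected gain, and one whose outcome distribution
is the same under both hypotheses scores zero -- which is the sense in which an
experiment is or isn't worth running.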


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Scientists choose experiments to maximize information gain. There is no 
reason that machine learning algorithms couldn't do this, but often they 
don't.


Heh.  I would say that scientists attempt to do this and machine learning 
algorithms should do it.


So where is the difference other than in the quality of implementation (i.e. 
other than who performs it, of course).


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 1:41 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:


 AIXI says that a perfect solution is not computable. However, a very
 general principle of both scientific research and machine learning is 
 to
 favor simple hypotheses over complex ones. AIXI justifies these 
 practices

 in a formal way. It also says we can stop looking for a universal
 solution, which I think is important. It justifies our current ad-hoc
 approach to problem solving -- we have no choice.

Excellent.  Thank you.  Another good point to be pinned
(since a number of  people frequently go around and around on it).

Is there anything else that it tells us that is useful and
not a  distraction?


The fact that Occam's Razor works in the real world suggests that the 
physics of the universe is computable. Otherwise AIXI would not apply.



- - - - - - - - - - - - - -
Also, since you invoked the two in the same sentence as if
they were  different things . . . .

What is the distinction between scientific research and machine learning
(other than who performs it, of course).  Or, re-phrased, what is the
difference between a machine doing scientific research and
a machine that is  simply learning?

I'd love to hear everybody chiming in on that last question


Scientists choose experiments to maximize information gain. There is no 
reason that machine learning algorithms couldn't do this, but often they 
don't.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Vladimir said  I pointed out only that it doesn't follow from AIXI that 
ad-hoc is justified.


Matt used a chain of logic that went as follows:


AIXI says that a perfect solution is not computable. However, a very
general principle of both scientific research and machine learning is
to favor simple hypotheses over complex ones. AIXI justifies these
 practices in a formal way. It also says we can stop looking for a 
universal

solution, which I think is important. It justifies our current ad-hoc
approach to problem solving -- we have no choice.


Or, in summary, ad hoc is justified because we have no choice.

You claimed that we had a choice *BECAUSE* optimal approximation is an 
alternative to ad hoc.


I then asked
So what is an optimal approximation under uncertainty?  How do you know when 
you've gotten there?

and said:
If you don't believe in ad-hoc then you must have an algorithmic solution.


You are now apparently declining to provide an algorithmic solution without 
arguing that not doing so is a disproof of your statement.
Or, in other words, you are declining to prove that Matt is incorrect in 
saying that we have no choice -- You're just simply repeating your 
insistence that your now-unsupported point is valid.







Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

Which faulty reasoning step are you talking about?

You said that there is an alternative to ad hoc in optimal approximation.

My request is that you show that the optimal approximation isn't going to 
just be determined in an ad hoc fashion.


Your absurd strawman example of *using* a bad solution instead of a good one 
doesn't address Matt's point of how you *arrive at* a solution at all.


Are you sure that you know the meaning of the term ad hoc?

- Original Message - 
From: Vladimir Nesov [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 5:32 PM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no 
AGI




On Sun, Oct 26, 2008 at 1:19 AM, Mark Waser [EMAIL PROTECTED] wrote:


You are now apparently declining to provide an algorithmic solution 
without

arguing that not doing so is a disproof of your statement.
Or, in other words, you are declining to prove that Matt is incorrect in
saying that we have no choice -- You're just simply repeating your
insistence that your now-unsupported point is valid.



This is tedious. I didn't try to prove that the conclusion is wrong, I
pointed to a faulty reasoning step by showing that in general that
reasoning step is wrong. If you need to find the best solution to
x*3=7, but you can only use integers, the perfect solution is
impossible, but it doesn't mean that we are justified in using x=3
that looks good enough, as x=2 is the best solution given limitations.

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-25 Thread Mark Waser

Surely a coherent reply to this assertion would involve the phrases
superstitious, ignorant and FUD


So why don't you try to generate one to prove your guess?

Are you claiming that I'm superstitious and ignorant?  That I'm fearful and 
uncertain or trying to generate fearfulness and uncertainty?


Or are you just trying to win a perceived argument by innuendo since you 
don't have any competent response that you can defend?


This is an example of the worst of this mailing list.  Hey Ben, can you at 
least speak out against garbage like this?



- Original Message - 
From: Eric Burton [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 5:41 PM
Subject: **SPAM** Re: [agi] On programming languages



I'll even go so far as to use myself as an example.  I can easily do C++
(since I've done so in the past) but all the baggage around it makes me
consider it not worth my while.  I certainly won't hesitate to use what 
is

learned on that architecture but I'll be totally shocked if you aren't
massively leap-frogged because of the inherent shortcomings of what 
you're

trying to work with.


Surely a coherent reply to this assertion would involve the phrases
superstitious, ignorant and FUD

Eric B




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Ummm.  It seems like you were/are saying then that because AIXI makes an 
assumption limiting its own applicability/proof (that it requires that the 
environment be computable) and because AIXI can make some valid conclusions, 
that that suggests that AIXI's limiting assumptions are true of the 
universe.  That simply doesn't work, dude, unless you have a very loose 
inductive-type definition of "suggests" that is more suited for inference 
control than anything like a logical proof.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 5:51 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:

 The fact that Occam's Razor works in the real world
 suggests that the
 physics of the universe is computable. Otherwise AIXI
 would not apply.

Hmmm.  I don't get this.  Occam's razor simply says
go with the simplest
explanation until forced to expand it and then only expand
it as necessary.

How does this suggest that the physics of the universe is
computable?

Or conversely, why and how would Occam's razor *not*
work in a universe
where the physics aren't computable.


The proof of AIXI assumes the environment is computable by a Turing 
machine (possibly with a random element). I realize this is not a proof 
that the universe is computable.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

So where is the difference

There is no difference.


Cool.  That's one vote.

Anyone else want to take up the issue of whether there is a distinction 
between competent scientific research and competent learning (whether or not 
both are being done by a machine) and, if so, what that distinction is?


Or how about if I'm bold and follow up with the question of whether there is 
a distinction between a machine (or other entity) that is capable of 
competent scientific research/competent generic learning and a general 
intelligence?


That's an interesting definition of general intelligence that could have an 
awful lot of power if it's acceptable . . . . .



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 5:59 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:


 Scientists choose experiments to maximize information
 gain. There is no
 reason that machine learning algorithms couldn't
 do this, but often they don't.

Heh.  I would say that scientists attempt to do this and
machine learning
algorithms should do it.

So where is the difference other than in the quality of
implementation (i.e.
other than who performs it, of course).


There is no difference. I originally distinguished machine learning 
because all of the usual algorithms depend on minimizing the complexity of 
the hypothesis space. For example, we use neural networks with the minimum 
number of connections to learn the training data because we want to avoid 
over fitting. Likewise, decision trees and rule learning algorithms like 
RIPPER try to find the minimum number of rules that fit the data. I knew 
about clustering algorithms, but not why they worked. I learned all these 
different strategies for various algorithms in a machine learning course I 
took, but was unaware of the general principle and the reasoning behind it 
until I learned about AIXI.
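
For concreteness, a toy version of that principle (my own sketch, not anything
from the course or from RIPPER): fit polynomials of increasing degree and score
each by its squared error plus a per-parameter penalty, then keep the lowest score.

import numpy as np

def pick_model(x, y, max_degree=8, penalty_per_param=2.0):
    # Occam-flavored model selection: goodness of fit plus a cost for complexity
    best_score, best_degree, best_coeffs = None, None, None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)                        # least-squares fit, d+1 parameters
        err = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        score = err + penalty_per_param * (d + 1)           # simpler models pay a smaller penalty
        if best_score is None or score < best_score:
            best_score, best_degree, best_coeffs = score, d, coeffs
    return best_degree, best_coeffs

The penalty term is doing the same job as limiting the number of connections or
rules: a higher-degree fit only wins if its error reduction more than pays for
the extra parameters.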


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

Ah.  An excellent distinction . . . .Thank you.  Very helpful.

Would it then be accurate to saySCIENCE = LEARNING + TRANSMISSION?

Or, how about,SCIENCE = GROUP LEARNING?

- Original Message - 
From: Russell Wallace [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 6:27 PM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no 
AGI




On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser [EMAIL PROTECTED] wrote:

Anyone else want to take up the issue of whether there is a distinction
between competent scientific research and competent learning (whether or 
not

both are being done by a machine) and, if so, what that distinction is?


Science is about public knowledge. I can learn from personal
experience, but it's only science if I publish my results in such a
way that other people can repeat them.




Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
 People seem to debate programming languages and OS's endlessly, and this 
 list is no exception.

Yes.  And like all other debates there are good points and bad points.:-)

 To make progress on AGI, you  just gotta make *some* reasonable choice and 
 start building

Strongly agree.  Otherwise it's just empty theorizing.  But sometimes it's 
worth gathering up your learning from what you have and making a fresh start (a 
la Eric Raymond).  You may not be at that point yet . . . . you may be past 
that point since a lot of the stuff that OpenCog is depending upon seems to 
have been lost in the mists of time (to judge by some of the e-mails among team 
members).  The OpenCog documents are a great start (though it's too bad that 
some important stuff seems to have been lost before they were created and now 
remains to be rediscovered -- though that's pretty typical of *any* large 
long-term project)

 there's no choice that's going to please everyone, since this stuff is so 
 contentious...

On the other hand, there are projects where most of the people are pleased with 
them (or, at least, not displeased), and there are horrible projects.  You seem to be 
pretty much on the correct side, with many of your naysayers being more of the 
keeping-you-honest variety than really actively disagreeing with you.

Actually I don't debate language and OS endlessly -- indeed, I generally don't 
argue them at all if you truly understand what I'm arguing (i.e. platform, which 
is distinct from either though it may be dependent upon or include both -- to 
its detriment)  -- but I do bring them up periodically (or rather respond to 
others bringing them up) just to keep people abreast of changing circumstances 
(admittedly, as I see/evaluate them).  I'd debate your "coding on Windows" 
comment (since I don't code on Windows even though Windows is the operating 
system that my computer is running) but I think we've reached the point where 
continuing to agree to disagree pending further developments is best.  :-)

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Saturday, October 25, 2008 6:38 PM
  Subject: **SPAM** Re: [agi] On programming languages






Strong agreement with what you say but then effective rejection as a valid 
point because language issues frequently are a total barrier to entry for 
people who might have been able to do the algorithms and structures and 
cognitive architecture.

I'll even go so far as to use myself as an example.  I can easily do C++ 
(since I've done so in the past) but all the baggage around it makes me consider 
it not worth my while.  I certainly won't hesitate to use what is learned on 
that architecture but I'll be totally shocked if you aren't massively 
leap-frogged because of the inherent shortcomings of what you're trying to work 
with.


  Somewhat similarly, I've done coding on Windows before, but I dislike the 
operating system quite a lot, so in general I try to avoid any projects where I 
have to use it.   

  However, if I found some AGI project that I thought were more promising than 
OpenCog/Novamente on the level of algorithms, philosophy-of-mind and structures 
... and, egads, this project ran only on Windows ... I would certainly not 
hesitate to join that project, even though my feeling is that any serious 
large-scale software project based exclusively on Windows is going to be 
seriously impaired by its OS choice...

  In short, I just don't think these issues are **all that** important.  
They're important, but having the right AGI design is far, far more so.

  People seem to debate programming languages and OS's endlessly, and this list 
is no exception.  There are smart people on multiple sides of these debates.  
To make progress on AGI, you  just gotta make *some* reasonable choice and 
start building ... there's no choice that's going to please everyone, since 
this stuff is so contentious...

  -- Ben G






Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser

I am arguing by induction, not deduction:

If the universe is computable, then Occam's Razor holds.
Occam's Razor holds.
Therefore the universe is computable.

Of course, I have proved no such thing.


Yep.  That's a better summation of what I was trying to say . . . .

Except that I'd like to bring back my point that induction really is only 
suited for inference control and determining what should be 
evaluated/examined/proved . . . . NOT actually doing the evaluation/proving 
*with*.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 7:21 PM
Subject: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no 
AGI)




--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:


Ummm.  It seems like you were/are saying then that because
AIXI makes an
assumption limiting its own applicability/proof (that
it requires that the
environment be computable) and because AIXI can make some
valid conclusions,
that that suggests that AIXI's limiting
assumptions are true of the
universe.  That simply doesn't work, dude, unless you
have a very loose
inductive-type definition of suggests that is
more suited for inference
control than anything like a logical proof.


I am arguing by induction, not deduction:

If the universe is computable, then Occam's Razor holds.
Occam's Razor holds.
Therefore the universe is computable.

Of course, I have proved no such thing.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser
 -- truly general AI, even assuming the universe is computable, is impossible 
 for any finite system

Excellent.  Unfortunately, I personally missed (or have forgotten) how AIXI 
shows or proves this (as opposed to invoking some other form of incompleteness) 
unless it is merely because the universe itself is assumed to be infinite 
(which I do understand, but which then makes the argument rather pedestrian and 
less interesting).

 The computability of the universe is something that can't really be proved, 
 but I argue that it's an implicit assumption underlying the whole scientific 
 method.

It seems to me (and I certainly can be wrong about this) that computability is 
frequently improperly conflated with consistency (though maybe you want to 
argue that such a conflation isn't improper) and that the (actually explicit) 
assumption underlying the whole scientific method is that the same causes 
produce the same results.  Comments?

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Saturday, October 25, 2008 7:48 PM
  Subject: **SPAM** Re: AIXI (was Re: [agi] If your AGI can't learn to play 
chess it is no AGI)



  AIXI shows a couple interesting things...

  -- truly general AI, even assuming the universe is computable, is impossible 
for any finite system

  -- given any finite level L of general intelligence that one desires, there 
are some finite R, M so that you can create a computer with less than R 
processing speed and M memory capacity, so that the computer can achieve level 
L of general intelligence

  This doesn't tell you *anything* about how to make AGI in practice.  It does 
tell you that, in principle, creating AGI is a matter of *computational 
efficiency* ... assuming the universe is computable.

  The computability of the universe is something that can't really be proved, 
but I argue that it's an implicit assumption underlying the whole scientific 
method.  If the universe can't be usefully modelable as computable then the 
whole methodology of gathering finite datasets of finite-precision data is 
fundamentally limited in what it can tell us about the universe ... which would 
really suck...

  -- Ben G

  -- Ben G


  On Sat, Oct 25, 2008 at 7:21 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:

 Ummm.  It seems like you were/are saying then that because
 AIXI makes an
 assumption limiting its own applicability/proof (that
 it requires that the
 environment be computable) and because AIXI can make some
 valid conclusions,
 that that suggests that AIXI's limiting
 assumptions are true of the
 universe.  That simply doesn't work, dude, unless you
 have a very loose
 inductive-type definition of suggests that is
 more suited for inference
 control than anything like a logical proof.

I am arguing by induction, not deduction:

If the universe is computable, then Occam's Razor holds.
Occam's Razor holds.
Therefore the universe is computable.

Of course, I have proved no such thing.

-- Matt Mahoney, [EMAIL PROTECTED]







  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects.  -- Robert Heinlein






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser

No Mike. AGI must be able to discover regularities of all kind in all
domains.


Must it be able to *discover* regularities, or must it merely be able to be 
taught regularities and subsequently use them effectively?  I would argue the latter. 
(Can we get a show of hands of those who believe the former?  I think that 
it's a small minority but . . . )



If you can find a single domain where your AGI fails, it is no AGI.


Failure is an interesting evaluation.  Ben's made it quite clear that 
advanced science is a domain that stupid (if not non-exceptional) humans 
fail at.  Does that mean that most humans aren't general intelligences?



Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans can solve problems of this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.


Chess is a good milestone because of its very difficulty.  The reason humans 
learn chess so easily (and that is a relative term) is that they 
already have an excellent spatial domain model in place, a ton of strategy 
knowledge available from other learned domains, and the immense array of 
mental tools that we're going to need to bootstrap an AI.  Chess as a GI 
task (or, via a GI approach) is emphatically NOT easily programmable.



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 4:09 AM
Subject: **SPAM** AW: [agi] If your AGI can't learn to play chess it is no 
AGI





No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.

Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans can solve problems of this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.

Of course it is not sufficient for AGI. But before you think about
sufficient features, necessary abilities are good milestones to verify
whether your roadmap towards AGI will not go into a dead end after a long
stretch of vague hope that future embodied experience will solve the problems
which you cannot solve today.

- Matthias



Mike wrote
P.S. Matthias seems to be cheerfully cutting his own throat here. The idea
of a single domain AGI  or pre-AGI is a contradiction in terms every which
way - not just in terms of domains/subjects or fields, but also sensory
domains.






Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
The limitations of Godelian completeness/incompleteness are a subset of 
the much stronger limitations of finite automata.


Can we get a listing of what you believe these limitations are and whether 
or not you believe that they apply to humans?


I believe that humans are constrained by *all* the limits of finite automata 
yet are general intelligences so I'm not sure of your point.


- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 4:09 AM
Subject: AW: [agi] constructivist issues


The limitations of Godelian completeness/incompleteness are a subset of the
much stronger limitations of finite automata.

If you want to build a spaceship to go to mars it is of no practical
relevance to think whether it is theoretically possible to move through
wormholes in the universe.

I think, this comparison is adequate to evaluate the role of Gödel's theorem
for AGI.

- Matthias




Abram Demski [mailto:[EMAIL PROTECTED] wrote


I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
This does not imply that people usually do not use visual patterns to 
solve

chess.
It only implies that visual patterns are not necessary.


So . . . wouldn't dolphins and bats use sonar patterns to play chess?

So . . . is it *vision* or is it the most developed (for the individual), 
highest bandwidth sensory modality that allows the creation and update of a 
competent domain model?


Humans usually do use vision . . . . Sonar may prove to be more easily 
implemented for AGI.



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 4:30 AM
Subject: **SPAM** AW: [agi] If your AGI can't learn to play chess it is no 
AGI



This does not imply that people usually do not use visual patterns to 
solve

chess.
It only implies that visual patterns are not necessary.

Since I do not know any good blind chess player I would suspect that 
visual

patterns are better for chess
than those patterns which are used by blind people.

http://www.psych.utoronto.ca/users/reingold/publications/Reingold_Charness_P
omplun__Stampe_press/


http://www.psychology.gatech.edu/create/pubs/reingoldcharness_perception-in
-chess_2005_underwood.pdf


Von: Trent Waddington [mailto:[EMAIL PROTECTED] wrote

http://www.eyeway.org/inform/sp-chess.htm

Trent






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
 E.g. according to this, AIXI (with infinite computational power) but not 
 AIXItl
 would have general intelligence, because the latter can only find 
 regularities
 expressible using programs of length bounded by l and runtime bounded
 by t

rant

I hate AIXI because not only does it have infinite computational power but 
people also unconsciously assume that it has infinite data (or, at least, 
sufficient data to determine *everything*).

AIXI is *not* a general intelligence by any definition that I would use.  It is 
omniscient and need only be a GLUT (giant look-up table) and I argue that that 
is emphatically *NOT* intelligence.  

AIXI may have the problem-solving capabilities of general intelligence but does 
not operate under the constraints that *DEFINE* a general intelligence.  If it 
had to operate under those constraints, it would fail, fail, fail.

AIXI is useful for determining limits but horrible for drawing other types of 
conclusions about GI.

/rant
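
(For anyone who wants the formal object behind this argument: the AIXI 
action-selection rule, as I recall it from Hutter's papers -- treat the exact 
notation below as a sketch rather than an authoritative statement.  At cycle k 
the agent picks the action maximizing expected future reward, mixing over every 
program q for the universal machine U that is consistent with the interaction 
history so far, weighted by 2^-length(q):

\[
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left( r_k + \cdots + r_m \right)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

The alternating max/sum is an expectimax over all possible futures, and the 
innermost sum ranges over *all* programs -- which is exactly where the infinite 
computational power, and the uncomputability, come from.)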


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 5:02 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI





  On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:


No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.


  According to this definition **no finite computational system can be an AGI**,
  so this is definition obviously overly strong for any practical purposes

  E.g. according to this, AIXI (with infinite computational power) but not 
AIXItl
  would have general intelligence, because the latter can only find regularities
  expressible using programs of length bounded by l and runtime bounded
  by t

  Unfortunately, the pragmatic notion of AGI we need to use as researchers is
  not as simple as the above ... but fortunately, it's more achievable ;-)

  One could view the pragmatic task of AGI as being able to discover all 
regularities
  expressible as programs with length bounded by l and runtime bounded by t ...
  [and one can add a restriction about the resources used to make this
  discovery], but the thing is, this depends highly on the underlying 
computational model,
  which then can be used to encode some significant domain bias.

  -- Ben G
   






Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

Abram,

   Would you agree that this thread is analogous to our debate?

- Original Message - 
From: Vladimir Nesov [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 6:49 AM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov [EMAIL PROTECTED] 
wrote:



Needing many different
features just doesn't look like a natural thing for AI-generated
programs.


No, it doesn't, does it? And then you run into this requirement that
wasn't obvious on day one, and you cater for that, and then you run
into another requirement, that has to be dealt with in a different
way, and then you run into another... and you end up realizing you've
wasted a great deal of irreplaceable time for no good reason
whatsoever.

So I figure I might as well document the mistake, in case it saves
someone having to repeat it.



Well, my point was that maybe the mistake is use of additional
language constructions and not their absence? You yourself should be
able to emulate anything in lambda-calculus (you can add interpreter
for any extension as a part of a program), and so should your AI, if
it's to ever learn open-ended models.
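
(A concrete illustration of the "emulate anything in lambda calculus" point -- 
a minimal Python sketch of Church numerals, building numbers and addition out 
of nothing but one-argument functions:)

# Church numerals: natural numbers encoded purely as single-argument functions.
zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # apply f one more time
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral back to an ordinary int for display.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5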

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
But I do not agree that most humans can be scientists. If this is 
necessary

for general intelligence then most humans are not general intelligences.


Soften "be scientists" to "generally use the scientific method".  Does this 
change your opinion?


- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 10:27 AM
Subject: AW: [agi] constructivist issues



Mark Waser wrote:



Can we get a listing of what you believe these limitations are and whether
or not you believe that they apply to humans?

I believe that humans are constrained by *all* the limits of finite 
automata


yet are general intelligences so I'm not sure of your point.


It is also my opinion that humans are constrained by *all* the limits of
finite automata.
But I do not agree that most humans can be scientists. If this is 
necessary

for general intelligence then most humans are not general intelligences.

It depends on your definition of general intelligence.

Surely there are rules (=algorithms) to be a scientist. If not, AGI would
not be possible and there would not be any scientist at all.

But you cannot separate the rules (algorithm) from the evaluation whether 
a
human or a machine is intelligent. Intelligence comes essentially from 
these

rules and from a lot of data.

The mere ability to use arbitrary rules does not imply general 
intelligence.

Your computer has this ability but without the rules it is not intelligent
at all.

- Matthias







Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

Instead of arguing language, why don't you argue platform?

Name a language and there's probably a .Net version.  They are all 
interoperable, so you can use whatever is most appropriate.  Personally, the 
fact that you can now easily embed functional-language statements in 
procedural languages (e.g. using F#-style calls in your C# code and vice 
versa) means that it is silly to use languages and platforms that lack the 
wide variety of features and the interoperability of one single, common 
low-level architecture supporting all the variety that people need and want.


(but, then again, I'm just a voice in the wilderness on this list ;-)


And as for Python?  Great for getting reasonably small projects up quickly 
and easily.  The cost is trade-offs on extensibility and maintenance --  
which means that, for a large, complex system, some day you're either going 
to rewrite and replace it (not necessarily a bad thing) or you're going to 
rue the day that you used it.



- Original Message - 
From: Russell Wallace [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 10:41 AM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 3:37 PM, Eric Burton [EMAIL PROTECTED] wrote:

Due to a characteristic paucity of datatypes, all powerful, and a
terse, readable syntax, I usually recommend Python for any project
that is just out the gate. It's my favourite way by far at present to
mangle huge tables. By far!


Python is definitely a very good language. Unless this has changed
since I last looked at it, though, it doesn't expose the parse tree,
so isn't suitable for representing AI content?
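
(For reference: CPython does expose its own syntax tree through the standard 
ast module -- added around Python 2.6 -- though whether that is rich enough for 
representing AI content is a separate question.  A minimal sketch, in Python 3 
syntax:)

import ast

source = "def double(x):\n    return x * 2\n"

tree = ast.parse(source)   # parse source text into an abstract syntax tree
print(ast.dump(tree))      # show the raw node structure

# Walk the tree and report every function definition found.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print("function:", node.name, "args:", [a.arg for a in node.args.args])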




Re: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
 The value of AIXI is not that it tells us how to solve AGI. The value is 
 that it tells us intelligence is not computable

Define "not computable."  Too many people are incorrectly interpreting it to 
mean "not implementable on a computer."

  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 10:49 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI


The value of AIXI is not that it tells us how to solve AGI. The value 
is that it tells us intelligence is not computable.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote:

  From: Mark Waser [EMAIL PROTECTED]
  Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI
  To: agi@v2.listbox.com
  Date: Friday, October 24, 2008, 9:51 AM


   E.g. according to this, AIXI (with infinite computational power) 
but not AIXItl
   would have general intelligence, because the latter can only find 
regularities
   expressible using programs of length bounded by l and runtime 
bounded
   by t

  rant

  I hate AIXI because not only does it have infinite computational 
power but people also unconsciously assume that it has infinite data (or, at 
least, sufficient data to determine *everything*).

  AIXI is *not* a general intelligence by any definition that I would 
use.  It is omniscient and need only be a GLUT (giant look-up table) and I 
argue that that is emphatically *NOT* intelligence.  

  AIXI may have the problem-solving capabilities of general 
intelligence but does not operate under the constraints that *DEFINE* a general 
intelligence.  If it had to operate under those constraints, it would fail, 
fail, fail.

  AIXI is useful for determining limits but horrible for drawing other 
types of conclusions about GI.

  /rant


- Original Message - 
From: Ben Goertzel 
To: agi@v2.listbox.com 
Sent: Friday, October 24, 2008 5:02 AM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess 
it is no AGI





On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger [EMAIL 
PROTECTED] wrote:


  No Mike. AGI must be able to discover regularities of all kind in 
all
  domains.
  If you can find a single domain where your AGI fails, it is no 
AGI.


According to this definition **no finite computational system can 
be an AGI**,
so this is definition obviously overly strong for any practical 
purposes

E.g. according to this, AIXI (with infinite computational power) 
but not AIXItl
would have general intelligence, because the latter can only find 
regularities
expressible using programs of length bounded by l and runtime 
bounded
by t

Unfortunately, the pragmatic notion of AGI we need to use as 
researchers is
not as simple as the above ... but fortunately, it's more 
achievable ;-)

One could view the pragmatic task of AGI as being able to discover 
all regularities
expressible as programs with length bounded by l and runtime 
bounded by t ...
[and one can add a restriction about the resources used to make this
discover], but the thing is, this depends highly on the underlying 
computational model,
which then can be used to encode some significant domain bias.

-- Ben G
 







Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser

I'm making the point natural language is incompletely defined for
you, but *not* the point natural language suffers from Godelian
incompleteness, unless you specify what concept of proof applies to
natural language.


I'm back to being lost, I think.  You agree that natural language is 
incompletely defined.  Cool.  My saying that natural language suffers from 
Godelian incompleteness merely adds that it *can't* be defined.  Do you mean 
to say that natural languages *can* be completely defined?  Or are you 
arguing that I can't *prove* that they can't be defined?  If it is the last, 
then that's like saying that Godel's theorem can't prove itself -- which is 
exactly the point of what Godel's theorem says . . . .



Have you heard of Tarski's undefinability theorem? It is relevant to
this discussion.
http://en.wikipedia.org/wiki/Indefinability_theory_of_truth


Yes.  In fact, the restatement of Tarski's theorem as "no sufficiently 
powerful language is strongly-semantically-self-representational" also 
fundamentally says that I can't prove in natural language what you're asking 
me to prove about natural language.


Personally, I always have trouble separating out Godel and Tarski as they 
are obviously both facets of the same underlying principles.


I'm still not sure of what you're getting at.  If it's a proof, then Godel 
says I can't give it to you.  If it's something else, then I'm not getting 
it.



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 11:31 AM
Subject: Re: [agi] constructivist issues



Mark,

It makes sense but I'm arguing that you're making my point for me . . . 
.


I'm making the point natural language is incompletely defined for
you, but *not* the point natural language suffers from Godelian
incompleteness, unless you specify what concept of proof applies to
natural language.

It emphatically does *not* tell us anything about any approach that
can be implemented on normal computers and this is where all the
people who insist that because computers operate algorithmically,
they will never achieve true general intelligence are going wrong.

It tells us that any approach that is implementable on a normal
computer will not always be able to come up with correct answers to
all halting-problem questions (along with other problems that suffer
from incompleteness).

You are correct in saying that Godel's theory has been improperly
overused and abused over the years but my point was merely that AGI is
Godellian Incomplete, natural language is Godellian Incomplete, 

Specify truth and proof in these domains before applying the
theorem, please. For agi I am OK, since X is provable would mean
the AGI will come to believe X, and X is true would mean something
close to what it intuitively means. But for natural language? Natural
language will come to believe X makes no sense, so it can't be our
definition of proof...

Really, it is a small objection, and I'm only making it because I
don't want the theorem abused. You could fix your statement just by
saying any proof system we might want to provide will be incomplete
for any well-defined subset of natural language semantics that is
large enough to talk fully about numbers. Doing this just seems
pointless, because the real point you are trying to make is that the
semantics is ill-defined in general, *not* that some hypothetical
proof system is incomplete.

and effectively AGI-Complete most probably pretty much exactly means
Godellian-Incomplete. (Yes, that is a radically new phrasing and not
necessarily quite what I mean/meant but . . . . ).

I used to agree that Godelian incompleteness was enough to show that
the semantics of a knowledge representation was strong enough for AGI.
But, that alone doesn't seem to guarantee that a knowledge
representation can faithfully reflect concepts like continuous
differentiable function (which gets back to the whole discussion with
Ben).

Have you heard of Tarski's undefinability theorem? It is relevant to
this discussion.
http://en.wikipedia.org/wiki/Indefinability_theory_of_truth

--Abram

On Fri, Oct 24, 2008 at 9:19 AM, Mark Waser [EMAIL PROTECTED] wrote:

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of provably true and
semantically true for natural language). Does that make sense, or am
I still confusing?


It makes sense but I'm arguing that you're making my point for me . . . .


agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...


Godel's incompleteness theorem tells us important limitations of all 
formal

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

But I thought I'd mention that for OpenCog we are planning on a
cross-language approach.  The core system is C++, for scalability and
efficiency reasons, but the MindAgent objects that do the actual AI
algorithms should be creatable in various languages, including Scheme or
LISP.


*nods* As you know, I'm of the opinion that C++ is literally the worst
possible choice in this context. However...


ROTFL.  OpenCog is dead-set on reinventing the wheel while developing the 
car.  They may eventually create a better product for doing so -- but many 
of us software engineers contend that the car could be more quickly and 
easily developed without going that far back (while the OpenCog folk contend 
that the current wheel is insufficient).


(To be clear, the specific wheels in this case are things like memory 
management, garbage collection, etc. -- all those things that need to be 
written in C++ and are baked into more modern languages and platforms).


Note:  You could even create AGI in machine code -- I just wouldn't want to 
try (unless, of course, it's simply by creating it in a real, competent set of 
languages and compiling it down).


But then again, this is an argument that Ben and I have been having for 
years (and he, admittedly has the dollars and the programmers ;-).



- Original Message - 
From: Russell Wallace [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 12:45 PM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 5:30 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

Interesting!  I have a good friend who is also an AGI enthusiast who
followed the same path as you ... a lot of time burned making his own
superior, stripped-down, AGI-customized variant of LISP, followed by a
decision to just go with LISP ;-)


I'm not surprised :-)


But I thought I'd mention that for OpenCog we are planning on a
cross-language approach.  The core system is C++, for scalability and
efficiency reasons, but the MindAgent objects that do the actual AI
algorithms should be creatable in various languages, including Scheme or
LISP.


*nods* As you know, I'm of the opinion that C++ is literally the worst
possible choice in this context. However...


We can do self-modification of components of the system by coding these
components in LISP or other highly manipulable languages.


This is good, and for what it's worth I think the best approach for
OpenCog at this stage to aim to stabilize the C++ core as soon as
possible, and try to write AI code at the higher level in Lisp, Combo
or whatever.




Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
AGI *really* needs an environment that comes with reflection and metadata 
support (including persistence, accessibility, etc.) baked right in.

http://msdn.microsoft.com/en-us/magazine/cc301780.aspx

(And note that the referenced article is six years old and several major 
releases back)

This isn't your father's programming *language* . . . .
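
(To make the reflection point concrete with a quick sketch -- in Python here 
purely for brevity; the .NET APIs in the linked article are richer, and the 
"MindAgent" name below is just borrowed as a toy class, not OpenCog's actual 
type.  Runtime reflection means a program can discover and invoke its own 
structure by name:)

import inspect

class MindAgent:
    """A toy component whose structure we discover at runtime."""
    def step(self, cycles: int = 1) -> str:
        return f"ran {cycles} cycle(s)"

agent = MindAgent()

# Enumerate the object's methods without knowing them in advance.
for name, member in inspect.getmembers(agent, predicate=inspect.ismethod):
    print(name, inspect.signature(member))   # e.g.  step (cycles: int = 1) -> str

# Invoke a method chosen by name at runtime -- the heart of "reflection".
print(getattr(agent, "step")(cycles=3))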

  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 12:55 PM
  Subject: **SPAM** Re: [agi] On programming languages


  Russell asked:

  But if it can't read the syntax tree, how will it know what the main body 
actually does?


  My line of thinking arose while considering how to reason over syntax trees.  
I came to realize that source code composition is somewhat analogous to program 
compilation in this way:  When a source code program is compiled into 
executable machine instructions, much of the conceptual intent of the 
programmer is lost, but the computer can none the less execute the program.  
Humans cannot read compiled binary code; they cannot reason about it.  We need 
source code for reasoning about programs.  Accordingly, I thought about the 
program composition process.  Exactly what is lost, i.e. not explicitly 
recorded, when a human programmer writes a correct source code program from 
high-level specifications.  This lost information is what I model as the 
nested composition framework.  When a programmer tries to understand a source 
code program written by someone else, the programmer must reverse-engineer the 
deductive chain that leads from the observed source code back to the perhaps 
only partially known original specifications.

  I will not have a worked out example until next year, but a sketch would be 
as follows.  In Java, a main body could be a method or a block within a method. 
 For a method, I do not persist simply the syntax tree for the method, but 
rather the nested composition operations that when subsequently processed 
generate the method source code.   For a composed method I would persist:

- composed preconditions with respect to the method parameters and 
possibly other scoped variables such as class variables
- composed invariant conditions
- composed postconditions
- composed method comment
- composed method type
- composed method access modifiers (i.e. public, private, abstract etc.)
- composed method parameter type, comment, modifier (e.g. final)
- composed statements
  Composed statements generate Java statements such as an assignment statement, 
block statement and so forth.  You can see that there is a tree structure that 
can be navigated when performing a deductive composition operation like is 
ArrayList imported into the containing class? - if not then compose that import 
in the right place. 

  Persisted composition instances are KB terms that can be linked to the 
justifying algorithmic and domain knowledge.  I hypothesize this is cleaner and 
more flexible than directly tying lower-level persisted syntax trees to their 
justifications. 
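
  (A drastically simplified, hypothetical sketch of the idea -- not Texai's 
actual code -- in which the persisted object records compositional intent and 
merely *generates* the Java method text on demand:)

# Hypothetical, much-simplified model of a persisted "composed method":
# the stored object records compositional intent; Java text is only an output.
class ComposedMethod:
    def __init__(self, name, comment, return_type="void", params=None, statements=None):
        self.name = name
        self.comment = comment              # composed method comment
        self.return_type = return_type      # composed method type
        self.params = params or []          # (type, name) pairs
        self.statements = statements or []  # composed statements, already rendered

    def generate_java(self):
        args = ", ".join(f"{t} {n}" for t, n in self.params)
        body = "\n".join(f"        {s}" for s in self.statements)
        return (f"    /** {self.comment} */\n"
                f"    public {self.return_type} {self.name}({args}) {{\n"
                f"{body}\n"
                f"    }}")

m = ComposedMethod(
    name="scaleAll",
    comment="Multiplies every element of the list by the given factor.",
    params=[("java.util.List<Double>", "values"), ("double", "factor")],
    statements=["for (int i = 0; i < values.size(); i++) {",
                "    values.set(i, values.get(i) * factor);",
                "}"])
print(m.generate_java())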


   -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860




  - Original Message 
  From: Russell Wallace [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Friday, October 24, 2008 10:28:39 AM
  Subject: Re: [agi] On programming languages

  On Fri, Oct 24, 2008 at 4:10 PM, Stephen Reed [EMAIL PROTECTED] wrote:
   Hi Russell,
   Although I've already chosen an implementation language for my Texai project
   - Java, I believe that my experience may interest you.

  Very much so, thank you.

   I moved up one level of procedural abstraction to view program composition
   as the key intelligent activity.  Supporting this abstraction level is the
   capability to perform source code editing for the desired target language -
   in my case Java.  In my paradigm, its not the program syntax tree that gets
   persisted in the knowledge base but rather the nested composition framework
   that bottoms out in primitives that generate Java program elements.  The
   nested composition framework is my attempt to model the conceptual aspects
   of program composition.  For example a procedure may have an initialization
   section, a main body, and a finalization section.  I desire Texai to be able
   to figure out for itself where to insert a new required variable in the
   source code so that it has the appropriate scope, and so forth.

  But if it can't read the syntax tree, how will it know what the main
  body actually does?







Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
Cool.  And you're saying that intelligence is not computable.  So why else 
are we constantly invoking AIXI?  Does it tell us anything else about 
general intelligence?


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 12:59 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI


--- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote:


The value of AIXI is not that it tells us how to solve AGI.
The value is that it tells us intelligence is not computable



Define not computable Too many people are
incorrectly interpreting it to mean not implementable on a
computer.


Not implementable by a Turing machine. AIXI says the optimal solution is to 
find the shortest program consistent with observation so far. This implies 
the ability to compute Kolmogorov complexity.

http://en.wikipedia.org/wiki/Kolmogorov_complexity
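
(An aside to make "not computable" concrete: the true Kolmogorov complexity 
K(x) -- the length of the shortest program that outputs x -- cannot be 
computed, because deciding whether an arbitrary candidate program ever halts 
and prints x runs into the halting problem.  What *is* computable is an upper 
bound, e.g. via any real compressor.  A minimal sketch using only the Python 
standard library:)

import os
import zlib

def k_upper_bound(data):
    # Compressed length is a computable *upper bound* on Kolmogorov complexity;
    # the exact value would require searching over all programs, and that
    # search stalls on candidates that never halt.
    return len(zlib.compress(data, 9))

regular = b"abracadabra" * 500     # highly regular data
noise = os.urandom(5500)           # incompressible with overwhelming probability

print(len(regular), k_upper_bound(regular))   # bound far below the raw length
print(len(noise), k_upper_bound(noise))       # bound close to the raw length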

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

The obvious fly in the ointment is that a lot of technical work is
done on Unix, so an AI project really wants to keep that option open
if at all possible. Is Mono ready for prime time yet?


No.  Unfortunately not.  But I would argue that most work done on *nix is 
not easily accessible to or usable by other work.



Why do you say that? Python code is concise and very readable, both of
which are positive attributes for extensibility and maintenance.


Yes, but not markedly more so than most other choices.

The problem is that Python does not enforce, promote, or even facilitate any 
number of practices and procedures that are necessary to make a large, complex 
project extensible and maintainable.  A person who knew all of these 
practices and procedures could laboriously follow and/or re-implement them 
but in many ways it's like trying to program in assembly language.  Quick 
and dirty (and simple) is always a trade-off for complex and lasting and 
extensible.


I always hate discussions about languages because to me the most advanced 
versions of Basic, Pascal, Object-Oriented C (whether ObjectiveC, C++, or 
C#), Java, etc. are basically the same language with slightly different 
syntax from case to case.  The *real* difference between the languages is 
the infrastructure/functionality that is immediately available.  I sneered 
at C++ in an earlier message not because of its syntax but because you are 
*always* burdened with memory management, garbage collection, etc.  This 
makes C++ code much more expensive (and slow) to develop, maintain, and 
extend.  Python does not have much of the rich infrastructure that other 
languages frequently have and all the really creative Python work seems to 
be migrating on to Ruby . . . .


As you say, I really don't want to choose either language or platform.  What 
I want is the most flexibility so that I can get access to the widest 
variety of already created and tested infrastructure.  Language is far more 
splintered than platform and each language has a rational place  (i.e. a set 
of trade-offs) where it is best.  .Net is a great platform because it 
provides the foundations for all languages to co-exist, work together, and 
even intermingle.  It's even better because it's building up more and more 
and more useful infrastructure while *nix development continues to fragment 
into the flavor of the month.  Take a look, for example, at the lambda 
closure and LINQ stuff that is now part of the platform and available to 
*all* of the supported languages.  Now look at all the people who are 
re-implementing all that stuff in their project.
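
(For anyone who hasn't seen the lambda/query style being referred to, a rough 
analogue -- sketched in Python rather than actual C#/LINQ syntax, so treat it 
only as an illustration of declaring *what* you want instead of hand-writing 
the loop:)

orders = [
    {"customer": "acme", "total": 120.0},
    {"customer": "acme", "total": 80.0},
    {"customer": "zenith", "total": 310.0},
]

# Imperative version: explicit loop, mutation, bookkeeping.
big_spenders = []
for o in orders:
    if o["total"] > 100.0:
        big_spenders.append(o["customer"])

# Declarative version: one query-like expression with an inline closure.
threshold = 100.0
big_spenders_q = sorted({o["customer"] for o in orders if o["total"] > threshold})

print(big_spenders, big_spenders_q)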


The bad point of .Net is that it is now an absolute, stone-cold b*tch to 
learn because there is so much infrastructure available and it's not always 
clear what is most effective when . . . . but once you start using it, 
you'll find that due to the available infrastructure you'll need an order of 
magnitude less code (literally) to produce the same functionality.


But I'm going to quit here.  Language is politics and I find myself tiring 
easily of that these days  :-)


/language *opinion*


- Original Message - 
From: Russell Wallace [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 12:56 PM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 5:37 PM, Mark Waser [EMAIL PROTECTED] wrote:

Instead of arguing language, why don't you argue platform?


Platform is certainly an interesting question. I take the view that
Common Lisp has the advantage of allowing me to defer the choice of
platform. You take the view that .Net has the advantage of allowing
you to defer the choice of language, which is not unreasonable. As far
as I know, there isn't a version of Common Lisp for .Net, but there is
a Scheme, which would be suitable for writing things that the AI needs
to understand, and still allow interoperability with other chunks of
code written in C# or whatever.

The obvious fly in the ointment is that a lot of technical work is
done on Unix, so an AI project really wants to keep that option open
if at all possible. Is Mono ready for prime time yet?

And as for Python?  Great for getting reasonably small projects up 
quickly

and easily.  The cost is trade-offs on extensibility and maintenance --
 which means that, for a large, complex system, some day you're either 
going
to rewrite and replace it (not necessarily a bad thing) or you're going 
to

rue the day that you used it.


Why do you say that? Python code is concise and very readable, both of
which are positive attributes for extensibility and maintenance.



Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
 Relatively a small amount of code is my own creation, and the libraries I 
 used, e.g. Sesame, Cglib, are well maintained.

Steve is a man after my own heart.  Grab the available solid 
infrastructure/libraries and build on top of it/them.

To me, it's all a question of the size and coherence of the communities 
building and maintaining the infrastructure.  My personal *best guess* is that 
the Windows community is more cohesive and therefore the rate of interoperable 
infrastructure is growing faster.  It's even clearer that *nix started with a 
big lead.  Currently I'd still say that which is best to use for any given 
project depends upon the project timeline, your comfort factor, whether or not 
you're willing to re-write and/or port, etc., etc. -- but I'm also increasingly 
of the *opinion* that the balance is starting to swing and swing hard . . . . 
(but I'm not really willing to defend that *opinion* against entrenched 
resistance -- merely to suggest and educate to those who don't know all of the 
things that are now available out-of-the-box).

The only people that I mean to criticize are those who are attempting to do 
everything themselves and are re-inventing the same things that many others are 
doing and continue to do . . . 
  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 1:42 PM
  Subject: **SPAM** Re: [agi] On programming languages


  Hi Mark,

  I readily concede that .Net is superior to Java out-of-the box with respect 
to reflection and metadata support as you say.  I spent my first project year 
creating three successive versions of a Java persistence framework for an RDF 
quad store using third party libraries for these features.  Now I am completely 
satisfied with respect to these capabilities.  Relatively a small amount of 
code is my own creation, and the libraries I used, e.g. Sesame, Cglib, are well 
maintained.

  -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860




  - Original Message 
  From: Mark Waser [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Friday, October 24, 2008 12:28:36 PM
  Subject: Re: [agi] On programming languages


  AGI *really* needs an environment that comes with reflection and metadata 
support (including persistence, accessibility, etc.) baked right in.

  http://msdn.microsoft.com/en-us/magazine/cc301780.aspx

  (And note that the referenced article is six years old and several major 
releases back)

  This isn't your father's programming *language* . . . .

- Original Message - 
From: Stephen Reed 
To: agi@v2.listbox.com 
Sent: Friday, October 24, 2008 12:55 PM
Subject: **SPAM** Re: [agi] On programming languages


Russell asked:

But if it can't read the syntax tree, how will it know what the main body 
actually does?


My line of thinking arose while considering how to reason over syntax 
trees.  I came to realize that source code composition is somewhat analogous to 
program compilation in this way:  When a source code program is compiled into 
executable machine instructions, much of the conceptual intent of the 
programmer is lost, but the computer can none the less execute the program.  
Humans cannot read compiled binary code; they cannot reason about it.  We need 
source code for reasoning about programs.  Accordingly, I thought about the 
program composition process.  Exactly what is lost, i.e. not explicitly 
recorded, when a human programmer writes a correct source code program from 
high-level specifications.  This lost information is what I model as the 
nested composition framework.  When a programmer tries to understand a source 
code program written by someone else, the programmer must reverse-engineer the 
deductive chain that leads from the observed source code back to the perhaps 
only partially known original specifications.

I will not have a worked out example until next year, but a sketch would be 
as follows.  In Java, a main body could be a method or a block within a method. 
 For a method, I do not persist simply the syntax tree for the method, but 
rather the nested composition operations that when subsequently processed 
generate the method source code.   For a composed method I would persist:

  - composed preconditions with respect to the method parameters and 
possibly other scoped variables such as class variables
  - composed invariant conditions 
  - composed postconditions 
  - composed method comment 
  - composed method type 
  - composed method access modifiers (i.e. public, private, abstract etc.) 
  - composed method parameter type, comment, modifier (e.g. final) 
  - composed statements
Composed statements generate Java statements such as an assignment 
statement, block statement and so forth.  You can see

Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread Mark Waser

I have already proved something stronger


What would you consider your best reference/paper outlining your arguments? 
Thanks in advance.


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 8:55 PM
Subject: Re: AW: AW: [agi] Language learning (was Re: Defining AGI)



--- On Wed, 10/22/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:


You make the implicit assumption that a natural language
understanding system will pass the turing test. Can you prove this?


If you accept that a language model is a probability distribution over 
text, then I have already proved something stronger. A language model 
exactly duplicates the distribution of answers that a human would give. 
The output is indistinguishable by any test. In fact a judge would have 
some uncertainty about other people's language models. A judge could be 
expected to attribute some errors in the model to normal human variation.



Furthermore,  it is just an assumption that the ability to
have and to apply
the rules are really necessary to pass the turing test.

For these two reasons, you still haven't shown 3a and
3b.


I suppose you are right. Instead of encoding mathematical rules as a 
grammar, with enough training data you can just code all possible 
instances that are likely to be encountered. For example, instead of a 
grammar rule to encode the commutative law of addition,


 5 + 3 = a + b = b + a = 3 + 5

a model with a much larger training data set could just encode instances 
with no generalization:


 12 + 7 = 7 + 12
 92 + 0.5 = 0.5 + 92
 etc.
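
(The contrast in miniature, purely as an illustration: a rule generalizes to 
unseen cases, while a table of memorized instances does not:)

# Rule: one line covers every instance of the commutative law.
def commutes_rule(a, b):
    return a + b == b + a

# Instances: a memorized table only "knows" the cases it has already seen.
seen = {(12, 7): True, (92, 0.5): True}
def commutes_table(a, b):
    return seen.get((a, b))   # None means "no opinion" on unseen pairs

print(commutes_rule(5, 3))    # True, even though (5, 3) was never stored
print(commutes_table(5, 3))   # None -- the table cannot generalize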

I believe this is how Google gets away with brute force n-gram statistics 
instead of more sophisticated grammars. Its language model is probably 
10^5 times larger than a human model (10^14 bits vs 10^9 bits). Shannon 
observed in 1949 that random strings generated by n-gram models of English 
(where n is the number of either letters or words) look like natural 
language up to length 2n. For a typical human sized model (1 GB text), n 
is about 3 words. To model strings longer than 6 words we would need more 
sophisticated grammar rules. Google can model 5-grams (see 
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html ) 
, so it is able to generate and recognize (thus appear to understand) 
sentences up to about 10 words.
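
(A toy illustration of the kind of word n-gram model being described -- 
hypothetical code, obviously nothing like Google's scale: the counts of n-word 
windows *are* the language model, and generation just samples a likely next 
word given the previous n-1 words:)

import random
from collections import Counter, defaultdict

def train_ngrams(text, n=3):
    # Count n-word windows; these counts are the entire "language model".
    words = text.split()
    model = defaultdict(Counter)
    for i in range(len(words) - n + 1):
        model[tuple(words[i:i + n - 1])][words[i + n - 1]] += 1
    return model

def generate(model, seed, length=10):
    out = list(seed)
    for _ in range(length):
        options = model.get(tuple(out[-(len(seed)):]))
        if not options:
            break
        # Sample the next word in proportion to its observed count.
        nxt, counts = zip(*options.items())
        out.append(random.choices(nxt, weights=counts)[0])
    return " ".join(out)

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog on the mat .")
model = train_ngrams(corpus, n=3)
print(generate(model, ("the", "cat")))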



By the way:
The turing test must convince 30% of the people.
Today there is a system which can already convince 25%

http://www.sciencedaily.com/releases/2008/10/081013112148.htm


It would be interesting to see a version of the Turing test where the 
human confederate, machine, and judge all have access to a computer with 
an internet connection. I wonder if this intelligence augmentation would 
make the test easier or harder to pass?




-Matthias


 3) you apply rules such as 5 * 7 = 35 - 35 / 7 = 5
but
 you have not shown that
 3a) that a language understanding system
necessarily(!) has
 this rules
 3b) that a language understanding system
necessarily(!) can
 apply such rules

It must have the rules and apply them to pass the Turing
test.

-- Matt Mahoney, [EMAIL PROTECTED]



-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser
But, I still do not agree with the way you are using the incompleteness 
theorem.


Um.  OK.  Could you point to a specific example where you disagree?  I'm a 
little at a loss here . . . .


It is important to distinguish between two different types of 
incompleteness.
1. Normal Incompleteness-- a logical theory fails to completely specify 
something.
2. Godelian Incompleteness-- a logical theory fails to completely specify 
something, even though we want it to.


I'm also not getting this.  If I read the words, it looks like the 
difference between Normal and Godelian incompleteness is based upon our 
desires.  I think I'm having a complete disconnect with your intended 
meaning.



However, it seems like all you need is type 1 completeness for what

you are saying.

So, Godel's theorem is way overkill here in my opinion.


Um.  OK.  So I used a bazooka on a fly?  If it was a really pesky fly and I 
didn't destroy anything else, is that wrong?  :-)


It seems as if you're not arguing with my conclusion but saying that my 
arguments were way better than they needed to be (like I'm being 
over-efficient?) . . . .


= = = = =

Seriously though, I'm having a complete disconnect here.  Maybe I'm just 
having a bad morning but . . .  huh?   :-)
If I read the words, all I'm getting is that you disagree with the way that 
I am using the theory because the theory is overkill for what is necessary.


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 9:05 PM
Subject: Re: [agi] constructivist issues


Mark,

I own and have read the book-- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel Escher Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness 
theorem.


It is important to distinguish between two different types of 
incompleteness.


1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
semantics is used. If a logic's provably-true statements don't match
up to its semantically-true statements, it is incomplete.

However, it seems like all you need is type 1 completeness for what
you are saying. Nobody claims that there is a complete, well-defined
semantics for natural language against which we could measure the
provably-true (whatever THAT would mean).

So, Godel's theorem is way overkill here in my opinion.

--Abram

On Wed, Oct 22, 2008 at 7:48 PM, Mark Waser [EMAIL PROTECTED] wrote:

Most of what I was thinking of and referring to is in Chapter 10.  Gödel's
Quintessential Strange Loop (pages 125-145 in my version) but I would
suggest that you really need to read the shorter Chapter 9. Pattern and
Provability (pages 113-122) first.

I actually had them conflated into a single chapter in my memory.

I think that you'll enjoy them tremendously.

- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 4:19 PM
Subject: Re: [agi] constructivist issues



Mark,

Chapter number please?

--Abram

On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser [EMAIL PROTECTED] wrote:


Douglas Hofstadter's newest book I Am A Strange Loop (currently 
available

from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM)
has
an excellent chapter showing Godel in syntax and semantics.  I highly
recommend it.

The upshot is that while it is easily possible to define a complete
formal
system of syntax, that formal system can always be used to convey
something
(some semantics) that is (are) outside/beyond the system -- OR, to
paraphrase -- meaning is always incomplete because it can always be 
added

to
even inside a formal system of syntax.

This is why I contend that language translation ends up being
AGI-complete
(although bounded subsets clearly don't need to be -- the question is
whether you get a usable/useful subset more easily with or without first
creating a seed AGI).

- Original Message - From: Abram Demski 
[EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED]
wrote:


It looks like all this disambiguation by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-23 Thread Mark Waser
Hi.  I don't understand the following statements.  Could you explain them some 
more?

- Natural language can be learned from examples. Formal language can not.

I think that you're basing this upon the methods that *you* would apply to each 
of the types of language.  It makes sense to me that, because of the 
regularities of a formal language, you would be able to use more effective 
methods -- but that doesn't mean that the methods used on natural language 
wouldn't work (just that they would be as inefficient as they are on natural 
languages).

I would also argue that the same argument applies to the first of the 
following two statements.

- Formal language must be parsed before it can be understood. Natural language 
must be understood before it can be parsed.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 9:23 PM
  Subject: Lojban (was Re: [agi] constructivist issues)


Why would anyone use a simplified or formalized English (with regular 
grammar and no ambiguities) as a path to natural language understanding? Formal 
language processing has nothing to do with natural language processing other 
than sharing a common lexicon that make them appear superficially similar.

- Natural language can be learned from examples. Formal language can 
not.
- Formal language has an exact grammar and semantics. Natural language 
does not.
- Formal language must be parsed before it can be understood. Natural 
language must be understood before it can be parsed.
- Formal language is designed to be processed efficiently on a fast, 
reliable, sequential computer that neither makes nor tolerates errors, between 
systems that have identical, fixed language models. Natural language evolved to 
be processed efficiently by a slow, unreliable, massively parallel computer 
with enormous memory in a noisy environment between systems that have different 
but adaptive language models.

So how does yet another formal language processing system help us 
understand natural language? This route has been a dead end for 50 years, in 
spite of the ability to always make some initial progress before getting stuck.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 10/22/08, Ben Goertzel [EMAIL PROTECTED] wrote:

  From: Ben Goertzel [EMAIL PROTECTED]
  Subject: Re: [agi] constructivist issues
  To: agi@v2.listbox.com
  Cc: [EMAIL PROTECTED]
  Date: Wednesday, October 22, 2008, 12:27 PM



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled 
via reference to WordNet via usages like run_1, run_2, etc. ... or as you say 
by using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be 
handled in a similar way, e.g. by defining an ontology of preposition meanings 
like with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts 
of subscripts, and in this way to recognize a highly controlled English that 
would be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple 
sentences, so the only real hassle to deal with is disambiguation.   We could 
use similar hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with 
an AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] 
wrote:

 IMHO that is an almost hopeless approach, ambiguity is too 
integral to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always 
use big words and never use small words and/or you use a specific phrase as a 
word.  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, 
Basic

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 You may not like "Therefore, we cannot understand the math needed to define
 our own intelligence", but I'm rather convinced that it's correct. 

Do you mean to say that there are parts that we can't understand, or that the 
totality is too large to fit and that it can't be cleanly and completely 
decomposed into pieces (i.e. it's a complex system ;-)?

Personally, I believe that the foundational pieces necessary to 
construct/boot-strap an intelligence are eminently understandable (if not even 
fairly simple) but that the resulting intelligence -- which a) organically grows 
from its interaction with an environment from which it can only extract partial, 
dirty, and ambiguous data and b) does not have the time, computational 
capability, or data to make itself even remotely consistent past a certain 
level -- IS large and complex enough that you will never truly understand it 
(which is where I have sympathy with Richard Loosemore's arguments -- but don't 
buy that the interaction of the pieces is necessarily so complex that we can't 
make broad predictions that are accurate enough to be able to engineer 
intelligence).

If you say parts we can't understand, how do you reconcile that with your 
statements of yesterday about what general intelligences can learn?




Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser
 However, the point I took issue with was your claim that a stupid person 
 could be taught to effectively do science ... or (your later modification) 
 evaluation of scientific results.
 At the time I originally took exception to your claim, I had not read the 
 earlier portion of the thread, and I still haven't; so I still do not know 
 why you made the claim in the first place.

In brief -- You've agreed that even a stupid person is a general intelligence. 
 By "do science", I (originally and still) meant the amalgamation that is 
probably best expressed as a combination of critical thinking and/or the 
scientific method.  My point was a combination of both a) to be a general 
intelligence, you really must have a domain model and the rudiments of critical 
thinking/scientific methodology in order to be able to competently/effectively 
update it and b) if you're a general intelligence, even if you don't need it, 
you should be able to be taught the rudiments of critical thinking/scientific 
methodology.  

Are those points that you would agree with?  (A serious question -- and, in 
particular, if you don't agree, I'd be very interested in why since I'm trying 
to arrive at a reasonable set of distinctions that define a general 
intelligence).

In typical list fashion, rather than asking what I meant (or, in your case, 
even having the courtesy to read what came before -- so that you might have 
*some* chance of understanding what I was trying to get at -- in case my 
immediate/proximate phrasing was as awkward as I'll freely admit that it was 
;-), it effectively turned into us arguing past each other when your immediate 
concept/interpretation of *science = advanced statistical interpretation* hit 
the blindingly obvious shoals of "it's not easy teaching stupid people 
complicated things" (I mean -- seriously, dude -- do you *really* think that I'm 
going to be that far off base?  And, if not, why disrupt the conversation so 
badly by coming in in such a fashion?).

(And I have to say -- As list owner, it would be helpful if you would set a 
good example of reading threads and trying to understand what people meant 
rather than immediately coming in and flinging insults and accusations of 
ignorance, e.g. "This is obviously spoken by someone who has never . . . .").

So . . . . can you agree with the claim as phrased above?  (i.e. What were we 
disagreeing on again? ;-)

Oh, and the original point was part of a discussion about the necessary and 
sufficient pre-requisites for general intelligence so it made sense to 
(awkwardly :-) say that a domain model and the rudiments of critical 
thinking/scientific methodology are a (major but not complete) part of that.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 8:51 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark W wrote:


What were we disagreeing on again?


  This conversation has drifted into interesting issues in the philosophy of 
science, most of which you and I seem to substantially agree on.

  However, the point I took issue with was your claim that a stupid person 
could be taught to effectively do science ... or (your later modification) 
evaluation of scientific results.

  At the time I originally took exception to your claim, I had not read the 
earlier portion of the thread, and I still haven't; so I still do not know why 
you made the claim in the first place.

  ben









Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 It doesn't, because **I see no evidence that humans can
 understand the semantics of formal system in X in any sense that
 a digital computer program cannot**

I just argued that humans can't understand the totality of any formal system X 
due to Godel's Incompleteness Theorem but the rest of this is worth addressing 
. . . . 

 Whatever this mysterious understanding is that you believe you
 possess, **it cannot be communicated to me in language or
 mathematics**.  Because any series of symbols you give me, could
 equally well be produced by some being without this mysterious
 understanding.

Excellent!  Except for the fact that the probability of the being *continuing* 
to emit those symbols without this "mysterious understanding" rapidly 
approaches zero.  So I'm going to argue that understanding *can* effectively be 
communicated/determined.  Arguing otherwise is effectively arguing for 
vanishingly small probabilities in infinities (which is why I hate most arguments 
involving AIXI as proving *anything* except absolute limits, cf. Matt Mahoney 
and compression = intelligence).
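
For a rough feel of how fast "rapidly approaches zero" is (the numbers below are 
invented purely for illustration, not measurements of anything): suppose a system 
with no real domain model can fake a coherent answer to any single novel probe 
with probability at most p; then surviving n independent probes has probability 
at most p**n.

# toy calculation only -- p is an assumed, made-up per-probe success rate
p = 0.9
for n in (10, 50, 200):
    print(n, p ** n)    # ~0.35 after 10 probes, ~0.005 after 50, ~7e-10 after 200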

I'm going to continue arguing that understanding exactly equates to having a 
competent domain model and being able to communicate about it (i.e. that there 
is no mystery about understanding -- other than not understanding it ;-).

 Can you describe any possible finite set of finite-precision observations
 that could provide evidence in favor of the hypothesis that you possess
 this posited understanding, and against the hypothesis that you are
 something equivalent to a digital computer?

 I think you cannot.

But I would argue that this is because a digital computer can have 
understanding (and must and will in order to be an AGI).

 So, your belief in this posited understanding has nothing to do with 
 science, it's
 basically a kind of religious faith, it seems to me... '-)

If you're assuming that humans have it and computers can't, then I have to 
strenuously agree.  There is no data (that I am aware of) to support this 
conclusion so it's pure faith, not science.






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 I don't want to diss the personal value of logically inconsistent thoughts.  
 But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and then 
not have scientific or engineering value.

I can sort of understand science, if you're interpreting science as looking for the 
final correct/optimal value, but engineering generally goes for either "good 
enough" or the best of the currently known available options -- and anything 
that really/truly has personal value would seem to have engineering value.







Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser

I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with down's syndrome would do poorly in one of these
largely practical positions. Perhaps.

The consensus seems to be that there is no way to make a fool do a
scientist's job. But he can do parts of it. A scientist with a dozen
fools at hand could be a great deal more effective than a rival with
none, whereas a dozen fools on their own might not be expected to do
anything at all. So it is complicated.


Or maybe another way to rephrase it is to combine it with another thread . . . .


Any individual piece of science is understandable/teachable to (or my 
original point -- verifiable or able to be validated by) any general 
intelligence but the totality of science combined with the world is far too 
large to . . . . (which is effectively Ben's point) 







Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

(1) We humans understand the semantics of formal system X.


No.  This is the root of your problem.  For example, replace "formal system 
X" with XML.  Saying that "We humans understand the semantics of XML" 
certainly doesn't work, which is why I would argue that natural language 
understanding is AGI-complete (i.e. by the time all the RDF, OWL, and other 
ontology work is completed -- you'll have an AGI).  Any formal system can 
always be extended *within its defined syntax* to have any meaning.  That 
is the essence of Godel's Incompleteness Theorem.


It's also sort of the basis for my argument with Dr. Matthias Heger. 
Semantics are never finished except when your model of the world is finished 
(including all possibilities and infinitudes) so language understanding 
can't be simple and complete.


Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 
and figure out how to use our world model/knowledge to translate English to 
this disambiguated subset -- and then we can build from there.  (or maybe 
this makes Heger's argument for him . . . .  ;-)







Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 Well, if you are a computable system, and if by think you mean represent 
 accurately and internally then you can only think that odd thought via 
 being logically inconsistent... ;-)

True -- but why are we assuming *internally*?  Drop that assumption as Charles 
clearly did and there is no problem.

It's like infrastructure . . . . I don't have to know all the details of 
something to use it under normal circumstances, though I frequently need to know 
the details if I'm doing something odd with it or looking for extreme 
performance, and I definitely need to know the details if I'm 
diagnosing/fixing/debugging it -- but I can always learn them as I go . . . . 


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 11:26 PM
  Subject: Re: [agi] constructivist issues



  Well, if you are a computable system, and if by think you mean represent 
accurately and internally then you can only think that odd thought via being 
logically inconsistent... ;-)




  On Tue, Oct 21, 2008 at 11:23 PM, charles griffiths [EMAIL PROTECTED] wrote:

  I disagree, and believe that I can think X: This is a thought (T) 
that is way too complex for me to ever have.

  Obviously, I can't think T and then think X, but I might represent T 
as a combination of myself plus a notebook or some other external media. Even 
if I only observe part of T at once, I might appreciate that it is one thought 
and believe (perhaps in error) that I could never think it.

  I might even observe T in action, if T is the result of billions of 
measurements, comparisons and calculations in a computer program.

  Isn't it just like thinking This is an image that is way too 
detailed for me to ever see?

  Charles Griffiths

  --- On Tue, 10/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:

From: Ben Goertzel [EMAIL PROTECTED]
Subject: Re: [agi] constructivist issues
To: agi@v2.listbox.com
Date: Tuesday, October 21, 2008, 7:56 PM



I am a Peircean pragmatist ...

I have no objection to using infinities in mathematics ... they can 
certainly be quite useful.  I'd rather use differential calculus to do 
calculations, than do everything using finite differences.

It's just that, from a science perspective, these mathematical 
infinities have to be considered finite formal constructs ... they don't exist 
except in this way ...

I'm not going to claim the pragmatist perspective is the only 
subjectively meaningful one.  But so far as I can tell it's the only useful one 
for science and engineering...

To take a totally different angle, consider the thought X = "This 
is a thought that is way too complex for me to ever have"

Can I actually think X?

Well, I can understand the *idea* of X.  I can manipulate it 
symbolically and formally.  I can reason about it and empathize with it by 
analogy to "A thought that is way too complex for my three-year-old past-self 
to have ever had", and so forth.

But it seems I can't ever really think X, except by being logically 
inconsistent within that same thought ... this is the Godel limitation applied 
to my own mind...

I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

-- Ben G




On Tue, Oct 21, 2008 at 10:43 PM, Abram Demski [EMAIL PROTECTED] 
wrote:

  Ben,

  How accurate would it be to describe you as a finitist or
  ultrafinitist? I ask because your view about restricting 
quantifiers
  seems to reject even the infinities normally allowed by
  constructivists.

  --Abram







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be 
first overcome   - Dr Samuel Johnson





 








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  Nothing will ever be attempted if all possible objections must be first 
overcome   - Dr 

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 I disagree, and believe that I can think X: This is a thought (T) that is 
 way too complex for me to ever have.
 Obviously, I can't think T and then think X, but I might represent T as a 
 combination of myself plus a notebook or some other external media. Even if 
 I only observe part of T at once, I might appreciate that it is one thought 
 and believe (perhaps in error) that I could never think it.
 I might even observe T in action, if T is the result of billions of 
 measurements, comparisons and calculations in a computer program.
 Isn't it just like thinking This is an image that is way too detailed for 
 me to ever see?

Excellent!  This is precisely how I feel about intelligence . . . .  (and why 
we *can* understand it even if we can't hold the totality of it -- or fully 
predict it -- sort of like the weather :-)






Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 You have not convinced me that you can do anything a computer can't do.
 And, using language or math, you never will -- because any finite set of 
 symbols
 you can utter, could also be uttered by some computational system.
 -- Ben G

Can we pin this somewhere?

(Maybe on Penrose?  ;-)




Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 IMHO that is an almost hopeless approach, ambiguity is too integral to 
 English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big words 
and never use small words and/or you use a specific phrase as a word.  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic English) 
actually *favored* the small tremendously over-used/ambiguous words (because 
you got so much more bang for the buck with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

 If you want to take this sort of approach, you'd better start with Lojban 
 instead  Learning Lojban is a pain but far less pain than you'll have 
 trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can come 
up with an unambiguous English word or very short phrase for each Lojban word.  
If you can do it, my approach will work and will have the advantage that the 
output can be read by anyone (i.e. it's the equivalent of me having done it in 
Lojban and then added a Lojban-to-English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English-subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)
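
As a concrete version of that test, here is a tiny Python sketch (purely 
illustrative: klama/citka/pendo are real gismu, but the glosses are my own rough 
stand-ins rather than the official definitions).  The mechanical part is checking 
that no English gloss gets reused; whether each gloss is itself unambiguous 
English still takes human judgment.

# toy Lojban-to-English gloss table (glosses are rough stand-ins, not official)
lojban_gloss = {
    "klama": "travel-to-a-destination",
    "citka": "consume-as-food",
    "pendo": "be-a-friend-of",
}

def gloss_collisions(gloss_map):
    """Return any English gloss that was reused for more than one Lojban word."""
    by_gloss = {}
    for word, gloss in gloss_map.items():
        by_gloss.setdefault(gloss, []).append(word)
    return {g: ws for g, ws in by_gloss.items() if len(ws) > 1}

print(gloss_collisions(lojban_gloss))   # {} means every gloss is still unique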



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 


  IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

  If you want to take this sort of approach, you'd better start with Lojban 
instead  Learning Lojban is a pain but far less pain than you'll have 
trying to make a disambiguated subset of English.

  ben g 









Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(joke)

What?  You don't love me any more?  

/thread
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues



  (joke)


  On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel [EMAIL PROTECTED] wrote:




On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

   I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

  It doesn't seem to make sense that something would have personal value and 
then not have scientific or engineering value.

Come by the house, we'll drop some acid together and you'll be convinced ;-)
 





  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects.  -- Robert Heinlein









Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 Come by the house, we'll drop some acid together and you'll be convinced ;-)

Been there, done that.  Just because some logically inconsistent thoughts have 
no value doesn't mean that all logically inconsistent thoughts have no value.

Not to mention the fact that hallucinogens, if not the subsequently warped 
thoughts, do have the serious value of raising your mental Boltzmann 
temperature.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues





  On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser [EMAIL PROTECTED] wrote:

 I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and 
then not have scientific or engineering value.

  Come by the house, we'll drop some acid together and you'll be convinced ;-)
   








Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

What I meant was, it seems like humans are
logically complete in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.
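
(For reference, the general principle at issue is just the usual induction schema, 
$[\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(n+1))] \rightarrow \forall n\,\varphi(n)$: 
each instance -- induction for one particular property $\varphi$ -- can be checked 
and accepted on its own, but endorsing the schema for every property is one further 
abstraction, and that abstraction is what the hypothetical aliens would lack.)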


I like the phrase "logically complete".

The way that I like to think about it is that we have the necessary seed of 
whatever intelligence/competence is -- a seed that can be logically extended to 
cover all circumstances.


We may not have the personal time or resources to do so but given infinite 
time and resources there is no block on the path from what we have to 
getting there.


Note, however, that it is my understanding that a number of people on this 
list do not agree with this statement (feel free to chime in with your 
reasons why, folks).



- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 12:20 PM
Subject: Re: [agi] constructivist issues



Too many responses for me to comment on everything! So, sorry to those
I don't address...

Ben,

When I claim a mathematical entity exists, I'm saying loosely that
meaningful statements can be made using it. So, I think meaning is
more basic. I mentioned already what my current definition of meaning
is: a statement is meaningful if it is associated with a computable
rule of deduction that it can use to operate on other (meaningful)
statements. This is in contrast to positivist-style definitions of
meaning, that would instead require a computable test of truth and/or
falsehood.

So, a statement is meaningful if it has procedural deductive meaning.
We *understand* a statement if we are capable of carrying out the
corresponding deductive procedure. A statement is *true* if carrying
out that deductive procedure only produces more true statements. We
*believe* a statement if we not only understand it, but proceed to
apply its deductive procedure.

There is of course some basic level of meaningful statements, such as
sensory observations, so that this is a working recursive definition
of meaning and truth.
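
A toy rendering of that recursive definition (mine, purely illustrative -- not 
anyone's actual formalism): base observations are meaningful outright, a compound 
statement is meaningful iff it carries a computable deduction rule over other 
statements, and believing it means actually running that rule.

class Statement:
    def __init__(self, text, deduce=None):
        self.text = text
        self.deduce = deduce            # None marks a base-level observation

    def meaningful(self):
        # meaningful = base observation, or carries a computable deduction rule
        return self.deduce is None or callable(self.deduce)

    def believe(self, others):
        """Apply the statement's deductive procedure to other statements."""
        return [] if self.deduce is None else self.deduce(others)

obs = Statement("the sensor reads 42")
rule = Statement("whenever something is observed, note it",
                 deduce=lambda stmts: [Statement("noted: " + s.text) for s in stmts])
print(obs.meaningful(), rule.meaningful(), [s.text for s in rule.believe([obs])])
# True True ['noted: the sensor reads 42']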

By this definition of meaning, any statement in the arithmetical
hierarchy is meaningful (because each statement can be represented by
computable consequences on other statements in the arithmetical
hierarchy). I think some hyperarithmetical truths are captured as
well. I am more doubtful about it capturing anything beyond the first
level of the analytic hierarchy, and general set-theoretic discourse
seems far beyond its reach. Regardless, the definition of meaning
makes a very large number of uncomputable truths nonetheless
meaningful.

Russel,

I think both Ben and I would approximately agree with everything you
said, but that doesn't change our disagreeing with each other :).

Mark,

Good call... I shouldn't be talking like I think it is terrifically
unlikely that some more-intelligent alien species would find humans
mathematically crude. What I meant was, it seems like humans are
logically complete in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.

--Abram










Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 I think this would be a relatively pain-free way to communicate with an AI 
 that lacks the common sense to carry out disambiguation and reference 
 resolution reliably.   Also, the log of communication would provide a nice 
 training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently.  If 
I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with Simplified 
English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled in a 
similar way, e.g. by defining an ontology of preposition meanings like with_1, 
with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing resources 
into a preposition-meaning ontology like this a while back ... the so-called 
PrepositionWordNet ... or as it eventually came to be called the LARDict or 
LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, so 
the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 urinated 
in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an AI 
that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

 IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big 
words and never use small words and/or you use a specific phrase as a word.  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more bang for the buck with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

 If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can 
come up with an unambiguous English word or very short phrase for each Lojban 
word.  If you can do it, my approach will work and will have the advantage that 
the output can be read by anyone (i.e. it's the equivalent of me having done it 
in Lojban and then added a Lojban - English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English-subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, h . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to 
need to start with a formal language that is a disambiguated subset of English 


  IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Douglas Hofstadter's newest book I Am A Strange Loop (currently available 
from Amazon for $7.99 - 
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) has 
an excellent chapter showing Godel in syntax and semantics.  I highly 
recommend it.


The upshot is that while it is easily possible to define a complete formal 
system of syntax, that formal system can always be used to convey something 
(some semantics) that is (are) outside/beyond the system -- OR, to 
paraphrase -- meaning is always incomplete because it can always be added to 
even inside a formal system of syntax.


This is why I contend that language translation ends up being AGI-complete 
(although bounded subsets clearly don't need to be -- the question is 
whether you get a usable/useful subset more easily with or without first 
creating a seed AGI).


- Original Message - 
From: Abram Demski [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser [EMAIL PROTECTED] wrote:

It looks like all this disambiguation by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface levels of syntax and
semantics, to remember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.


That's an excellent criticism but not the intent.

Godel's Incompleteness Theorem means that you will be forever building . . . .

All that disambiguation does is provide a solid, commonly-agreed upon
foundation to build from.

English and all natural languages are *HARD*.  They are not optimal for
simple understanding particularly given the realms we are currently in, and
ambiguity makes things even worse.

Languages have so many ambiguities because of the way that they (and
concepts) develop.  You see something new, you grab the nearest analogy and
word/label and then modify it to fit.  That's why you then later need the
much longer words and very specific scientific terms and names.

Simple language is what you need to build the more specific complex
language.  Having an unambiguous constructed language is simply a template
or mold that you can use as scaffolding while you develop NLU.  Children
start out very unambiguous and concrete and so should we.

(And I don't believe in statistical techniques unless you have the
resources of Google or AIXI)















Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
 Well, I am confident my approach with subscripts to handle disambiguation 
 and reference resolution would work, in conjunction with the existing 
 link-parser/RelEx framework...
 If anyone wants to implement it, it seems like just some hacking with the 
 open-source Java RelEx code...

Like what I called a "semantically-driven English-subset translator"?  

Oh, I'm pretty confident that it will work as well . . . . after the La Brea tar 
pit of implementations . . . . (exactly how little semantic-related coding do 
you think will be necessary? ;-)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 1:06 PM
  Subject: Re: [OpenCog] Re: [agi] constructivist issues



  Well, I am confident my approach with subscripts to handle disambiguation and 
reference resolution would work, in conjunction with the existing 
link-parser/RelEx framework...

  If anyone wants to implement it, it seems like just some hacking with the 
open-source Java RelEx code...

  ben g


  On Wed, Oct 22, 2008 at 12:59 PM, Mark Waser [EMAIL PROTECTED] wrote:

 I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently. 
 If I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with 
Simplified English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled 
in a similar way, e.g. by defining an ontology of preposition meanings like 
with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, 
so the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser [EMAIL PROTECTED] wrote:

 IMHO that is an almost hopeless approach, ambiguity is too integral 
to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use 
big words and never use small words and/or you use a specific phrase as a 
word.  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more bang for the buck with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

 If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you 
can come up with an unambiguous English word or very short phrase for each 
Lojban word.  If you can do it, my approach will work and will have the 
advantage that the output can be read by anyone (i.e. it's the equivalent

Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
A couple of distinctions that I think would be really helpful for this 
discussion . . . . 

There is a profound difference between learning to play chess legally and 
learning to play chess well.

There is an equally profound difference between discovering how to play chess 
well and being taught to play chess well.

Personally, I think that a minimal AGI should be able to be taught to play 
chess reasonably well (i.e. about how well an average human would play after 
being taught the rules and playing a reasonable number of games with 
hints/pointers/tutoring provided) at about the same rate as a human when given 
the same assistance as that human.

Given that grandmasters don't learn solely from chess-only examples without 
help or without analogies and strategies from other domains, I don't see why an 
AGI should be forced to operate under those constraints.  Being taught is much 
faster and easier than discovering on your own.  Translating an analogy or 
transferring a strategy from another domain is much faster than discovering 
something new or developing something from scratch.  Why are we crippling our 
AGI in the name of simplicity?

(And Go is obviously the same)





Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
If MW would be scientific then he would not have asked Ben to prove that 
MWs hypothesis is wrong.


Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking 
for a counter-example unscientific?



The person who has to prove something is the person who creates the 
hypothesis.


Ah.  Like the theory of evolution is conclusively proved?  The scientific 
method is about predictive power not proof.  Try reading the reference that 
I gave Ben.  (And if you've got something to prove, maybe the scientific 
method isn't so good for you.  :-)



And MW has given not a tiny argument for his hypothesis that a natural 
language understanding system can easily be a scientist.


First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is easy.


Second, my hypothesis is more correctly stated that the pre-requisites for a 
natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.


Third, while I haven't given a tiny argument, I have given a reasonably 
short logical chain which I'll attempt to rephrase yet again.


Science is all about modeling the world and predicting future data.
The scientific method simply boils down to making a theory (of how to change 
or enhance your world model) and seeing if it is supported (not proved!) or 
disproved by future data.
Ben's and my disagreement initially came down to whether a scientist was an 
Einstein (his view) or merely capable of competently reviewing data to see 
if it supports, disproves, or isn't relevant to the predictive power of a 
theory (my view).
Later, he argued that most humans aren't even competent to review data and 
can't be made competent.
I agreed with his assessment that many scientists don't competently review 
data (inappropriate over-reliance on the heuristic p < 0.05 without 
understanding what it truly means) but disagreed as to whether the average 
human could be *taught*.
Ben's argument was that the scientific method couldn't be codified well 
enough to be taught.  My argument was that the method was codified 
sufficiently but that the application of the method was clearly context 
dependent and could be unboundedly complex.


But this is actually a distraction from some more important arguments . . . 
.
The $1,000,000 question is: If a human can't be taught something, is that 
human a general intelligence?
The $5,000,000 question is: If a human can't competently follow a recipe in 
a cookbook, do they have natural language understanding?


Fundamentally, this either comes down to a disagreement about what a general 
intelligence is and/or what understanding and meaning are.
Currently, I'm using the definition that a general intelligence is one that 
can achieve competence in any domain in a reasonable length of time.

To achieve competence in a domain, you have to understand that domain
My definition of understanding is that you have a mental model of that 
domain that has predictive power in that domain and which you can update as 
you learn about that domain.

(You could argue with this definition if you like)
Or, in other words, you have to be a competent scientist in that domain --  
or else, you don't truly understand that domain


So, for simplicity, why don't we just say
   scientist = understanding

Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which 
equals scientist ;-).





- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,

and creativity

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Oh, and I *have* to laugh . . . .


Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,

and creativity

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


In the cited wikipedia entry, the phrase "Scientific method is not a recipe: 
it requires intelligence, imagination, and creativity" is immediately 
followed by just such a recipe for the scientific method


A linearized, pragmatic scheme of the four points above is sometimes offered 
as a guideline for proceeding:[25]

 1. Define the question
 2. Gather information and resources (observe)
 3. Form hypothesis
 4. Perform experiment and collect data
 5. Analyze data
 6. Interpret data and draw conclusions that serve as a starting point for new hypothesis
 7. Publish results
 8. Retest (frequently done by other scientists)



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW would be scientific then he would not have asked Ben to prove that MWs
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has given not a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Ursprüngliche Nachricht-
Von: Eric Burton [mailto:[EMAIL PROTECTED]
Gesendet: Montag, 20. Oktober 2008 22:48
An: agi@v2.listbox.com
Betreff: Re: AW: AW: [agi] Re: Defining AGI


You and MW are clearly as philosophically ignorant, as I am in AI.


But MW and I have not agreed on anything.


Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,

and creativity

http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.


And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
 Yes, but each of those steps is very vague, and cannot be boiled down to a 
 series of precise instructions sufficient for a stupid person to 
 consistently carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they only 
general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and emphasize) 
between a discoverer and a learner.  The cognitive skills/intelligence 
necessary to design questions, hypotheses, experiments, etc. are far in excess 
of the cognitive skills/intelligence necessary to evaluate/validate those things.  
My argument was meant to be that a general intelligence needs to be a 
learner-type rather than a discoverer-type although the discoverer type is 
clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence?  
How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

 Also, those steps are heuristic and do not cover all cases.  For instance 
 step 4 requires experimentation, yet there are sciences such as cosmology 
 and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than physical 
experiments but it's still all about predictive power.  What is that next 
star/dinosaur going to look like?  What is it *never* going to look like (or 
else we need to expand or correct our theory)?  Is there anything that we can 
guess that we haven't tested/seen yet that we can verify?  What else is science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively looking for disproofs)
D.  Evaluate Hypotheses
E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test
F.  Return to step A with additional leverage

If you were forced to codify the hard core of the scientific method, how 
would you do it?
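
Since I'm asking the question, here is my own deliberately toy answer -- a 
short Python sketch of the A-F loop above.  It is purely illustrative: the 
hidden "law" (y = 3x), the random candidate hypotheses, and the 0.05 error 
threshold are all made-up stand-ins, not a proposal for how an AGI should do 
science.

    import random

    def observe(n=5):                                    # A. Observe / C. Observe more
        return [(x, 3 * x + random.gauss(0, 0.1))
                for x in (random.uniform(0, 10) for _ in range(n))]

    def form_hypotheses(n=20):                           # B. Form hypotheses
        return [random.uniform(0, 6) for _ in range(n)]  # candidate coefficients

    def error(coef, data):                               # D. Evaluate hypotheses
        return sum((y - coef * x) ** 2 for x, y in data) / len(data)

    knowledge_base = []                                  # E. Tentatively accepted
    data = observe()
    for _ in range(5):                                   # F. Return to step A
        best = min(form_hypotheses(), key=lambda c: error(c, data))
        data += observe()                                # keep testing against fresh data
        if error(best, data) < 0.05:
            knowledge_base.append(best)

    print(knowledge_base)

The point of the toy is only that each of A-F is an explicit, codifiable 
step; all of the hard intelligence hides inside better versions of 
form_hypotheses and observe.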

 As you asked for references I will give you two:

Thank you for setting a good example by including references but the contrast 
between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 
(you didn't really forget that my undergraduate degree was a dual major of 
Biochemistry and Philosophy of Science, did you? :-).

My view is basically that of Lakatos to the extent that I would challenge you 
to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism snort) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).




  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 10:41 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





  On Tue, Oct 21, 2008 at 10:38 AM, Mark Waser [EMAIL PROTECTED] wrote:

Oh, and I *have* to laugh . . . .



  Hence the wiki entry on scientific method:
  Scientific method is not a recipe: it requires intelligence, 
imagination,

and creativity

  http://en.wikipedia.org/wiki/Scientific_method
  This is basic stuff.



In the cited wikipedia entry, the phrase Scientific method is not a 
recipe: it requires intelligence, imagination, and creativity is immediately 
followed by just such a recipe for the scientific method

A linearized, pragmatic scheme of the four points above is sometimes 
offered as a guideline for proceeding:[25]

  Yes, but each of those steps is very vague, and cannot be boiled down to a 
series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

  Also, those steps are heuristic and do not cover all cases.  For instance 
step 4 requires experimentation, yet there are sciences such as cosmology

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Marc Walser wrote


Try to get the name right.  It's just common competence and courtesy.

Before you ask for counter examples you should *first* give some 
arguments which supports your hypothesis. This was my point.


And I believe that I did.  And I note that you didn't even address the fact 
that I did so again in the e-mail you are quoting.  You seem to want to 
address trivia rather than the meat of the argument.  Why don't you address 
the core instead of throwing up a smokescreen?



Regarding your example with Darwin:


What example with Darwin?

First, I'd appreciate it if you'd drop the strawman.  You are the only 
one who keeps insisting that anything is easy.
 Is this a scientific discussion from you? No. You use rhetoric and 
nothing else.


And baseless statements like "You use rhetoric and nothing else" are a 
scientific discussion.  Again with the smokescreen.



I don't say that anything is easy.


Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.


--





Clearly you DO say that language understanding is easy.








This is the first time you speak about pre-requisites.


Direct quote cut and paste from *my* e-mail . . . . .

- Original Message - 
From: Mark Waser [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 4:01 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I don't think that learning of language is the entire point. If I have
only
learned language I still cannot create anything. A human who can
understand
language is by far still no good scientist. Intelligence means the 
ability

to solve problems. Which problems can a system solve if it can nothing
else
than language understanding?


Many or most people on this list believe that learning language is an
AGI-complete task.  What this means is that the skills necessary for
learning a language are necessary and sufficient for learning any other
task.  It is not that language understanding gives general intelligence
capabilities, but that the pre-requisites for language understanding are
general intelligence (or, that language understanding is isomorphic to
general intelligence in the same fashion that all NP-complete problems are
isomorphic).  Thus, the argument actually is that a system that can do
nothing else than language understanding is an oxymoron.


-




Clearly I DO talk about the pre-requisites for language understanding.






Dude.  Seriously.

First you deny your own statements and then claim that I didn't previously 
mention something that it is easily provable that I did (at the top of an 
e-mail).  Check the archives.  It's all there in bits and bytes.


Then you end with a funky pseudo-definition that Understanding does not 
imply the ability to create something new or to apply knowledge.   What 
*does* understanding mean if you can't apply it?  What value does it have?







Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
 But, by the time she overcame every other issue in the way of really 
 understanding science, her natural lifespan would have long been overspent...

You know, this is a *really* interesting point.  Effectively what you're saying 
(I believe) is that the difficulty isn't in learning but in UNLEARNING 
incorrect things that actively prevent you (via conflict) from learning correct 
things.  Is this a fair interpretation?

It's also particularly interesting when you compare it to information theory 
where the sole cost is in erasing a bit, not in setting it.
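
(For the record, I'm assuming the result being alluded to is Landauer's 
bound: a logically irreversible operation such as erasing one bit must 
dissipate at least

    E_{\min} \ge k_B T \ln 2

joules at temperature T, while logically reversible operations carry no 
such minimum.)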

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 2:56 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Hmm...

  I think that non-retarded humans are fully general intelligences in the 
following weak sense: for any fixed t and l, for any human there are some 
numbers M and T so that if the human is given amount M of external memory (e.g. 
notebooks to write on), that human could be taught to emulate AIXItl

  [see 
http://www.amazon.com/Universal-Artificial-Intelligence-Algorithmic-Probability/dp/3540221395/ref=sr_1_1?ie=UTF8s=booksqid=1224614995sr=1-1
 , or the relevant papers on Marcus Hutter's website]

  where each single step of AIXItl might take up to T seconds.

  This is a kind of generality that I think no animals but humans have.  So, in 
that sense, we seem to be the first evolved general intelligences.

  But, that said, there are limits to what any one of us can learn in a fixed 
finite amount of time.   If you fix T realistically then our intelligence 
decreases dramatically.

  And for the time-scales relevant in human life, it may not be possible to 
teach some people to do science adequately.

  I am thinking for instance of a 40 yr old student I taught at the University 
of Nevada way back when (normally I taught advanced math, but in summers I 
sometimes taught remedial stuff for extra $$).  She had taken elementary 
algebra 7 times before ... and had had extensive tutoring outside of class ... 
but I still was unable to convince her of the incorrectness of the following 
reasoning: The variable a always stands for 1.  The variable b always stands 
for 2. ... The variable z always stands for 26.   She was not retarded.  She 
seemed to have a mental block against algebra.  She could discuss politics and 
other topics with seeming intelligence.  Eventually I'm sure she could have 
been taught to overcome this block.  But, by the time she overcame every other 
issue in the way of really understanding science, her natural lifespan would 
have long been overspent...

  -- Ben G



  On Tue, Oct 21, 2008 at 12:33 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Yes, but each of those steps is very vague, and cannot be boiled down to 
a series of precise instructions sufficient for a stupid person to consistently 
carry them out effectively...

So -- are those stupid people still general intelligences?  Or are they 
only general intelligences to the degree to which they *can* carry them out?  
(because I assume that you'd agree that general intelligence is a spectrum like 
any other type).

There also remains the distinction (that I'd like to highlight and 
emphasize) between a discoverer and a learner.  The cognitive 
skills/intelligence necessary to design questions, hypotheses, experiments, 
etc. are far in excess of the cognitive skills/intelligence necessary to 
evaluate/validate those things.  My argument was meant to be that a general 
intelligence needs to be a learner-type rather than a discoverer-type although 
the discoverer type is clearly more effective.

So -- If you can't correctly evaluate data, are you a general intelligence? 
 How do you get an accurate and effective domain model to achieve competence in 
a domain if you don't know who or what to believe?  If you don't believe in 
evolution, does that mean that you aren't a general intelligence in that 
particular realm/domain (biology)?

 Also, those steps are heuristic and do not cover all cases.  For 
instance step 4 requires experimentation, yet there are sciences such as 
cosmology and paleontology that are not focused on experimentation.

I disagree.  They may be based upon thought experiments rather than 
physical experiments but it's still all about predictive power.  What is that 
next star/dinosaur going to look like?  What is it *never* going to look like 
(or else we need to expand or correct our theory)?  Is there anything that we 
can guess that we haven't tested/seen yet that we can verify?  What else is 
science?

My *opinion* is that the following steps are pretty inviolable.  
A.  Observe
B.  Form Hypotheses
C.  Observe More (most efficiently performed by designing competent 
experiments including actively looking for disproofs)
D.  Evaluate Hypotheses
E.  Add Evaluation to Knowledge-Base (Tentatively) but continue to test

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser
Wow!  Way too much good stuff to respond to in one e-mail.  I'll try to respond 
to more in a later e-mail but . . . . (and I also want to get your reaction to 
a few things first :-)

 However, I still don't think that a below-average-IQ human can pragmatically 
 (i.e., within the scope of the normal human lifetime) be taught to 
 effectively carry out statistical evaluation of theories based on data, 
 given the realities of how theories are formulated and how data is obtained 
 and presented, at the present time...

Hmmm.  After some thought, I have to start by saying that it looks like you're 
equating science with statistics and I've got all sorts of negative reactions 
to that.

First -- Sure.  I certainly have to agree for a below-average-IQ human and 
could even be easily convinced for an average IQ human if they had to do it all 
themselves.  And then, statistical packages quickly turn into a two-edged sword 
where people blindly use heuristics without understanding them (p < .05 
anyone?).

A more important point, though, is that humans natively do *NOT* use statistics 
but innately use very biased, non-statistical methods that *arguably* function 
better than statistics in real world data environments.   That alone would 
convince me that I certainly don't want to say that science = statistics.

 I am not entirely happy with Lakatos's approach either.  I find it 
 descriptively accurate yet normatively inadequate.

Hmmm.  (again)  To me that seems to be an interesting way of rephrasing our 
previous disagreement except that you're now agreeing with me.  (Gotta love it 
:-)

You find Lakatos's approach descriptively accurate?  Fine, that's the 
scientific method.  

You find it normatively inadequate?  Well, duh (but meaning no offense :-) . . 
. . you can't codify the application of the scientific method to all cases.  I 
easily agreed to that before.

What were we disagreeing on again?


 My own take is that science normatively **should** be based on a Bayesian 
 approach to evaluating theories based on data

That always leads me personally to the question "Why do humans operate on the 
biases that they do rather than Bayesian statistics?"  My *guess* is that 
evolution COULD have implemented Bayesian methods but that the current methods 
are more efficient/effective under real world conditions (i.e. because of the 
real-world realities of feature extraction under dirty and incomplete or 
contradictory data and the fact that the Bayesian approach really does need to 
operate in an incredibly data-rich world where the features have already been 
extracted and ambiguities, other than occurrence percentages, are basically 
resolved).

**And adding different research programmes and/or priors always seems like such 
a kludge . . . . . 
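
(To pin down what I take "a Bayesian approach to evaluating theories based on 
data" to mean mechanically, here is a toy sketch -- the two theories, their 
likelihoods, and the three observations are invented numbers, nothing more:

    priors = {"theory_A": 0.5, "theory_B": 0.5}
    likelihood = {"theory_A": 0.9, "theory_B": 0.3}   # P(observation | theory)

    def update(belief, likelihood):
        unnormalised = {t: belief[t] * likelihood[t] for t in belief}
        z = sum(unnormalised.values())
        return {t: p / z for t, p in unnormalised.items()}

    belief = priors
    for _ in range(3):            # three independent confirming observations
        belief = update(belief, likelihood)
    print(belief)                 # belief concentrates on theory_A

The step this leaves entirely unspecified -- where the hypotheses and the 
likelihoods come from in the first place -- is exactly the feature extraction 
problem I keep pointing at.)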






  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 4:15 PM
  Subject: Re: AW: AW: [agi] Re: Defining AGI



  Mark,


 As you asked for references I will give you two:

Thank you for setting a good example by including references but the 
contrast between the two is far better drawn in For and Against Method (ISBN 
0-226-46774-0).

  I read that book but didn't like it as much ... but you're right, it may be 
an easier place for folks to start...
   
Also, I would add in Polya, Popper, Russell, and Kuhn for completeness for 
those who wish to educate themselves in the fundamentals of Philosophy of 
Science 

  All good stuff indeed.
   
My view is basically that of Lakatos to the extent that I would challenge 
you to find anything in Lakatos that promotes your view over the one that I've 
espoused here.  Feyerabend's rants alternate between criticisms ultimately 
based upon the fact that what society frequently calls science is far more 
politics (see sociology of scientific knowledge); a Tintnerian/Anarchist rant 
against structure and formalism; and incorrect portrayals/extensions of Lakatos 
(just like this list ;-).  Where he is correct is in the first case where 
society is not doing science correctly (i.e. where he provided examples 
regarded as indisputable instances of progress and showed how the political 
structures of the time fought against or suppressed them).  But his rants 
against structure and formalism (or, purportedly, for freedom and 
humanitarianism snort) are simply garbage in my opinion (though I'd guess 
that they appeal to you ;-).

  Feyerabend appeals to my sense of humor ... I liked the guy.  I had some 
correspondence with him when I was 18.  I wrote him a letter outlining some of 
my ideas on philosophy of mind and asking his advice on where I should go to 
grad school to study philosophy.  He replied telling me that if I wanted to be 
a real philosopher I should **not** study philosophy academically nor become a 
philosophy professor, but should study science or arts and then pursue 
philosophy independently.  We chatted back and forth a little after 

Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

AI!   :-)

This is what I was trying to avoid.   :-)

My objection starts with "How is a Bayes net going to do feature 
extraction?"


A Bayes net may be part of a final solution but as you even indicate, it's 
only going to be part . . . .
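
To make the division of labor concrete, here is a toy sketch (every name and 
number invented) of the GA-plus-scorer loop Eric describes below.  Note that 
the "features" the scorer sees are handed to it up front as bit strings -- 
which is exactly the part I'm asking about:

    import random

    def score(blueprint):                    # stand-in for the evaluating net
        return sum(blueprint)                # toy fitness: count the ones

    def mutate(blueprint, rate=0.05):
        return [b ^ (random.random() < rate) for b in blueprint]

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=score, reverse=True)
        survivors = population[:10]          # keep the best-scoring candidates
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

    print(score(population[0]))              # fitness of the best candidate found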


- Original Message - 
From: Eric Burton [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 21, 2008 4:51 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



I think I see what's on the table here. Does all this mean a Bayes
net, properly motivated, could be capable of performing scientific
inquiry? Maybe in combination with a GA that tunes itself to maximize
adaptive mutations in the input based on scores from the net, which
seeks superior product designs? A Bayes net could be a sophisticated
tool for evaluating technological merit, while really just a signal
filter on a stream of candidate blueprints if what you're saying is
true.




Re: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Mark Waser

Incorrect things are wrapped up with correct things in peoples' minds



Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem.


Um.  No.

I'm thinking that in order to integrate a new idea into your world model, 
you first have to resolve all the conflicts that it has with the existing 
model.  That could be incredibly expensive.


(And intelligence is emphatically not linear)

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.


I'm sure that Ben was saying that for doing discovery . . . . and I agree.

For evaluation, I'm not sure that we've come to closure on what either of us 
think . . . .   :-)




- Original Message - 
From: BillK [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, October 21, 2008 5:50 PM
Subject: Re: AW: AW: [agi] Re: Defining AGI



On Tue, Oct 21, 2008 at 10:31 PM, Ben Goertzel wrote:


Incorrect things are wrapped up with correct things in peoples' minds

However, pure slowness at learning is another part of the problem ...




Mark seems to be thinking of something like the checklist that the ISP
technician walks through when you call with a problem. Even when you
know what the problem is, the tech won't listen. He insists on working
through his checklist, making you do all the irrelevant checks,
eventually by a process of elimination, ending up with what you knew
was wrong all along. Very little GI required.

But Ben is saying that for evaluating science, there ain't no such 
checklist.

The circumstances are too variable, you would need checklists to infinity.

I go along with Ben.

BillK




Re: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Mark Waser
There is a wide area between moderation and complete laissez-faire.

Also, since you are the list owner, people tend to pay attention to what you 
say/request and also what you do.

If you regularly point to references and ask others to do the same, they are 
likely to follow.  If you were to gently chastise people for saying that there 
are no facts when references were provided, people might get the hint.  
Instead, you generally feed the trolls and humorously insult the people who 
are trying to keep it on a scientific basis.  That's a pretty clear message all 
by itself.

You don't need to spend more time but, as a serious role model for many of the 
people on the list, you do need to pay attention to the effects of what you say 
and do.  I can't help but go back to my perceived summary of the most recent 
issue -- Ben Goertzel says that there is no true defined method to the 
scientific method (and Mark Waser is clueless for thinking that there is).


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 20, 2008 6:53 AM
  Subject: Re: AW: AW: [agi] Re: Defining AGI





It would also be nice if this mailing list could be operate on a bit more 
of a scientific basis.  I get really tired of pointing to specific references 
and then being told that I have no facts or that it was solely my opinion.



  This really has to do with the culture of the community on the list, rather 
than the operation of the list per se, I'd say.

  I have also often been frustrated by the lack of inclination of some list 
members to read the relevant literature.  Admittedly, there is a lot of it to 
read.  But on the other hand, it's not reasonable to expect folks who *have* 
read a certain subset of the literature, to summarize that subset in emails for 
individuals who haven't taken the time.  Creating such summaries carefully 
takes a lot of effort.

  I agree that if more careful attention were paid to the known science related 
to AGI ... and to the long history of prior discussions on the issues discussed 
here ... this list would be a lot more useful.

  But, this is not a structured discussion setting -- it's an Internet 
discussion group, and even if I had the inclination to moderate more carefully 
so as to try to encourage a more carefully scientific mode of discussion, I 
wouldn't have the time...

  ben g






Re: AW: [agi] Re: Defining AGI

2008-10-19 Thread Mark Waser
Domain effectiveness (a.k.a. intelligence) is predicated upon having an 
effective internal model of that domain.

Language production is the extraction and packaging of applicable parts of the 
internal model for transmission to others.
Conversely, language understanding is for the reception (and integration) of 
model portions developed by others (i.e. learning from a teacher).

The better your internal models, the more effective/intelligent you are.

BUT!  This also holds true for language!  Concrete unadorned statements convey 
a lot less information than statements loaded with adjectives, adverbs, or even 
more markedly analogies (or innuendos or . . . ).
A child cannot pick up from a sentence that they think they understand (and do 
understand to some degree) the same amount of information that an adult can.
Language is a knowledge domain like any other and high intelligences can use it 
far more effectively than lower intelligences.

** Or, in other words, I am disagreeing with the statement that the process 
itself needs not much intelligence.

Saying that the understanding of language itself is simple is like saying that 
chess is simple because you understand the rules of the game.
Gödel's Incompleteness Theorem can be used to show that there is no upper bound 
on the complexity of language and the intelligence necessary to pack and 
extract meaning/knowledge into/from language.

Language is *NOT* just a top-level communications protocol because it is not 
fully-specified and because it is tremendously context-dependent (not to 
mention entirely Gödelian).  These two reasons are why it *IS* inextricably 
tied into intelligence.

I *might* agree that the concrete language of lower primates and young children 
is separate from intelligence, but there is far more going on in adult language 
than a simple communications protocol.

E-mail programs are simply point-to-point repeaters of language (NOT meaning!)  
Intelligences generally don't exactly repeat language but *try* to repeat 
meaning.  The game of telephone is a tremendous example of why language *IS* 
tied to intelligence (or look at the results of translating simple phrases into 
another language and back -- The drink is strong but the meat is rotten).  
Translating language to and from meaning (i.e. your domain model) is the 
essence of intelligence.

How simple is the understanding of the above?  How much are you having to fight 
to relate it to your internal model (assuming that it's even compatible :-)?

I don't believe that intelligence is dependent upon language EXCEPT that 
language is necessary to convey knowledge/meaning (in order to build 
intelligence in a reasonable timeframe) and that language is influenced by and 
influences intelligence since it is basically the core of the critical 
meta-domains of teaching, learning, discovery, and alteration of your internal 
model (the effectiveness of which *IS* intelligence).  Future AGI and humans 
will undoubtedly not only have a much richer language but also a much richer 
repertoire of second-order (and higher) features expressed via language.

** Or, in other words, I am strongly disagreeing that intelligence is 
separated from language understanding.  I believe that language understanding 
is the necessary tool that intelligence is built with since it is what puts the 
*contents* of intelligence (i.e. the domain model) into intelligence .  Trying 
to build an intelligence without language understanding is like trying to build 
it with just machine language or by using only observable data points rather 
than trying to build those things into more complex entities like third-, 
fourth-, and fifth-generation programming languages instead of machine language 
and/or knowledge instead of just data points.

BTW -- Please note, however, that the above does not imply that I believe that 
NLU is the place to start in developing AGI.  Quite the contrary -- NLU rests 
upon such a large domain model that I believe that it is counter-productive to 
start there.  I believe that we need to star with limited domains and learn 
about language, internal models, and grounding without brittleness in tractable 
domains before attempting to extend that knowledge to larger domains.

  - Original Message - 
  From: David Hart 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 19, 2008 5:30 AM
  Subject: Re: AW: [agi] Re: Defining AGI



  An excellent post, thanks!

  IMO, it raises the bar for discussion of language and AGI, and should be 
carefully considered by the authors of future posts on the topic of language 
and AGI. If the AGI list were a forum, Matthias's post should be pinned!

  -dave


  On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

The process of outwardly expressing meaning may be fundamental to any social
intelligence but the process itself needs not much intelligence.

Every email program can receive meaning, store meaning and 

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.


This is what I disagree entirely with.  If nothing else, humans are 
constantly building and updating their mental model of what other people 
believe and how they communicate it.  Only in routine, pre-negotiated 
conversations can language be entirely devoid of learning.  Unless a 
conversation is entirely concrete and based upon something like shared 
physical experiences, it can't be any other way.  You're only paying 
attention to the absolutely simplest things that language does (i.e. the tip 
of the iceberg).



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 10:31 AM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]


For the discussion of the subject the details of the pattern 
representation

are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a 
word:


http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain all detail
of these patterns by understanding the process of language understanding.
There will be probably many details within these patterns which are only
necessary for internal calculations.
These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for 
concepts,


which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words c-a-t or m-i-n-d or U-S  or f-i-n-a-n-c-i-a-l 
c-r-i-s-i-s

are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of patterns. What patterns, do you think, form the concept of
mind that are engaged in thinking about sentence 2? Do you think that
concepts like mind or the US might involve something much more complex
still? Models? Or is that still way too simple? Spaces?

Equally, of course, we can say that each *sentence* above is not just a
verbal composition but a conceptual composition - and the question 
then

is what form does such a composition take? Do sentences form, say, a
pattern of patterns,  or something like a picture? Or a blending of
spaces ?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million 
dollars -
something that we know can be cashed in, in an infinite variety of ways, 
but


that we may not have to start cashing in,  (when processing), unless
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer makes with the data it receives depends on the
information

of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful to teach the AGI with existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities
can
be solved by having similar knowledge as humans have. But then you have a
recursive problem because first there has to be solved the problem to
obtain
this knowledge.

Nature solves this problem with embodiment. Different people make similar
experiences since the laws of nature do not depend on space and time.
Therefore we all can imagine a dog which is angry. Since we have
experienced
angry dogs but we haven't experienced angry trees we can resolve the
linguistic ambiguity of my former example and answer the question: Who 
was

angry?

The way to obtain knowledge with embodiment is hard and long even in
virtual
worlds.
If the 

Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


Read Pinker's The Stuff of Thought.  Actually, a lot of these details *are* 
visible from a linguistic point of view.


- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 10:31 AM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]


For the discussion of the subject the details of the pattern 
representation

are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a 
word:


http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot even expect to obtain all detail
of these patterns by understanding the process of language understanding.
There will be probably many details within these patterns which are only
necessary for internal calculations.
These details will be not visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for 
concepts,


which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words c-a-t or m-i-n-d or U-S  or f-i-n-a-n-c-i-a-l 
c-r-i-s-i-s

are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of patterns. What patterns, do you think, form the concept of
mind that are engaged in thinking about sentence 2? Do you think that
concepts like mind or the US might involve something much more complex
still? Models? Or is that still way too simple? Spaces?

Equally, of course, we can say that each *sentence* above is not just a
verbal composition but a conceptual composition - and the question 
then

is what form does such a composition take? Do sentences form, say, a
pattern of patterns,  or something like a picture? Or a blending of
spaces ?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million 
dollars -
something that we know can be cashed in, in an infinite variety of ways, 
but


that we may not have to start cashing in,  (when processing), unless
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer makes with the data it receives depends on the
information

of the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful to teach the AGI with existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities
can
be solved by having similar knowledge as humans have. But then you have a
recursive problem because first there has to be solved the problem to
obtain
this knowledge.

Nature solves this problem with embodiment. Different people make similar
experiences since the laws of nature do not depend on space and time.
Therefore we all can imagine a dog which is angry. Since we have
experienced
angry dogs but we haven't experienced angry trees we can resolve the
linguistic ambiguity of my former example and answer the question: Who 
was

angry?

The way to obtain knowledge with embodiment is hard and long even in
virtual
worlds.
If the AGI shall understand natural language it would be necessary that 
it
makes similar experiences as humans make in the real world. But this 
would

need a very very sophisticated and rich virtual world. At least, there
have
to be angry dogs in the virtual world ;-)

As I have already said I do not think the relation between utility of 
this

approach and the costs would be positive for first AGI.







Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser

The process of changing the internal model does not belong to language
understanding.
Language understanding ends if the matching process is finished.


What if the matching process is not finished?

This is overly simplistic for several reasons since you're apparently 
assuming that the matching process is crisp, unambiguous, and irreversible 
(and ask Stephen Reed how well that works for TexAI).


It *must* be remembered that the internal model for natural language 
includes such critically entwined and constantly changing information as 
what this particular conversation is about, what the speaker knows, and what 
the speakers motivations are.  The meaning of sentences can change 
tremendously based upon the currently held beliefs about these questions. 
Suddenly realizing that the speaker is being sarcastic generally reverses 
the meaning of statements.  Suddenly realizing that the speaker is using an 
analogy can open up tremendous vistas for interpretation and analysis.  Look 
at all the problems that people have parsing sentences.



Language
understanding can be strictly separated conceptually from creation and
manipulation of patterns as you can separate the process of communication
with the process of manipulating the database in a computer.


The reason why you can separate the process of communication from the 
process of manipulating data in a computer is that *data* is crisp and 
unambiguous.  It is concrete and completely specified as I suggested in my 
initial e-mail.  The model is entirely known and the communication process 
is entirely specified.  None of these things are true of unstructured 
knowledge.


Language understanding emphatically does not meet these requirements so your 
analogy doesn't hold.



You can see it differently but then everything is only a discussion about
definitions.


No, and claiming that everything is just a discussion about definitions is a 
strawman.  Your analogies are not accurate and your model is incomplete. 
You are focusing only on the tip of the iceberg (concrete language as spoken 
by a two-year-old) and missing the essence of NLP.



- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 1:42 PM
Subject: AW: [agi] Words vs Concepts [ex Defining AGI]



The process of changing the internal model does not belong to language
understanding.
Language understanding ends if the matching process is finished. Language
understanding can be strictly separated conceptually from creation and
manipulation of patterns as you can separate the process of communication
with the process of manipulating the database in a computer.
You can see it differently but then everything is only a discussion about
definitions.

- Matthias




Mark Waser [mailto:[EMAIL PROTECTED] wrote

Gesendet: Sonntag, 19. Oktober 2008 19:00
An: agi@v2.listbox.com
Betreff: Re: [agi] Words vs Concepts [ex Defining AGI]

There is no creation of new patterns and there is no intelligent 
algorithm
which manipulates patterns. It is just translating, sending, receiving 
and

retranslating.


This is what I disagree entirely with.  If nothing else, humans are
constantly building and updating their mental model of what other people
believe and how they communicate it.  Only in routine, pre-negotiated
conversations can language be entirely devoid of learning.  Unless a
conversation is entirely concrete and based upon something like shared
physical experiences, it can't be any other way.  You're only paying
attention to the absolutely simplest things that language does (i.e. the 
tip


of the iceberg).








Re: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Mark Waser
If there are some details of the internal structure of patterns visible 
then

this is no proof at all that there are not also details of the structure
which are completely hidden from the linguistic point of view.


True, but visible patterns offer clues for interpretation and analysis.  The 
more that is visible and clear, the less that is ambiguous and needs to be 
guessed at.  This is where your analogy splitting computer communications 
and data updates is accurate because the internal structures have been 
communicated and are shared to the nth degree.



Since in many communicating technical systems there are so many details
which are not transferred I would bet that this is also the case in 
humans.


Details that don't need to be transferred are those which are either known 
by or unnecessary to the recipient.  The former is a guess (unless the 
details were transmitted previously) and the latter is an assumption based 
upon partial knowledge of the recipient.  In a perfect, infinite world, 
details could and should always be transferred.  In the real world, time and 
computational constraints mean that trade-offs need to occur.  This is 
where the essence of intelligence comes into play -- determining which of 
the trade-offs to take to get optimal performance (a.k.a. domain competence).



As long as we have no proof this remains an open question.


What remains an open question?  Obviously there are details which can be 
teased out by behavior and details that can't be easily teased out because 
we have insufficient data to do so.  This is like any other scientific 
examination of any other complex phenomenon.



An AGI which may
have internal features for its patterns would have less restrictions and 
is

thus far easier to build.


Sorry, but I can't interpret this.  An AGI without internal features and 
regularities is an oxymoron and completely nonsensical.  What are you trying 
to convey here?







