[agi] Paper: Voodoo Correlations in Social Neuroscience

2009-01-15 Thread Mark Waser
http://machineslikeus.com/news/paper-voodoo-correlations-social-neuroscience

http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf




Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Mark Waser

But how can it dequark the tachyon antimatter containment field?


Richard,

   You missed Mike Tintner's explanation . . . .

You're not thinking your argument through. Look carefully at my spontaneous 
chain of free association:

"COW - DOG - TAIL - CURRENT CRISIS - LOCAL VS GLOBAL
THINKING - WHAT A NICE DAY - MUST GET ON - CAN'T SPEND MUCH MORE TIME ON
THIS" etc., etc.



It can do this partly because
a) single ideas have multiple, often massively multiple, idea/domain 
connections in the human brain, which allow one to go off along any of 
multiple tangents/directions, and
b) humans have many things - and therefore multiple domains - on their 
mind at the same time, and can switch as above from the immediate subject 
to some other pressing subject domain (e.g. from economics/politics (local 
vs global) to the weather (what a nice day)).


So maybe it's worth taking 20 seconds - producing your own 
chain of free association starting, say, with "MAHONEY" and going on for 
another 10 or so items - and trying to figure out how
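The a)/b) mechanism above amounts to a random walk over a densely connected 
association graph, with occasional jumps to whatever other concern is 
currently pressing. A minimal Python sketch of that idea (purely 
illustrative: the graph, the background concerns, and the switch probability 
are invented for the example, not anything Tintner specified):

import random

# Toy association graph: each idea links to several others, often in other
# domains (point a above).  All entries are invented examples.
ASSOCIATIONS = {
    "COW": ["DOG", "FARM", "MILK"],
    "DOG": ["TAIL", "CAT", "WALK"],
    "TAIL": ["CURRENT CRISIS", "COIN", "KITE"],
    "CURRENT CRISIS": ["LOCAL VS GLOBAL THINKING", "BANKS", "JOBS"],
    "LOCAL VS GLOBAL THINKING": ["WHAT A NICE DAY", "POLITICS"],
    "WHAT A NICE DAY": ["MUST GET ON", "WEATHER"],
    "MUST GET ON": ["WORK", "COW"],
}

# Other concerns concurrently "on the mind" (point b above).
BACKGROUND_CONCERNS = ["WHAT A NICE DAY", "MUST GET ON", "CURRENT CRISIS"]

def free_association(start, steps=10, switch_prob=0.2):
    """Follow associative links, occasionally jumping to a background concern."""
    chain = [start]
    current = start
    for _ in range(steps):
        if random.random() < switch_prob:
            # Switch to another pressing domain rather than following a link.
            current = random.choice(BACKGROUND_CONCERNS)
        else:
            current = random.choice(ASSOCIATIONS.get(current, BACKGROUND_CONCERNS))
        chain.append(current)
    return chain

print(" - ".join(free_association("COW")))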






- Original Message - 
From: "Richard Loosemore" 

To: 
Sent: Thursday, January 08, 2009 8:05 PM
Subject: Re: [agi] The Smushaby of Flatway.



Ronald C. Blue wrote:

[snip] ... chaos stimulation because ... correlational wavelet opponent 
processing machine ... globally entangled ... Paul rf trap ... parallel 
modulating string pulses ... a relative zero energy value or 
opponent process ... phase locked ... parallel opponent process 
... reciprocal Eigenfunction ... opponent process ... summation 
interference ... gaussian reference rf trap ... 
oscillon output picture ... locked into the fourth harmonic ... 
... entangled with its Eigenfunction ...
[snip]
That is what entangled memory means.



Okay, I got that.

But how can it dequark the tachyon antimatter containment field?



Richard Loosemore










Re: [agi] Religious attitudes to NBIC technologies

2008-12-09 Thread Mark Waser
>> The problem here is that WE don't have anything to point to as OUR religion
Why not go with Unitarian Universalism?  It's non-creedal (i.e., you don't have 
to believe in God, or you can believe in any God), and it has a long history and 
an already-established community.

From the Unitarian Universalist Association website at http://www.uua.org:
Our Principles

There are seven principles which Unitarian Universalist congregations affirm 
and promote:

  - The inherent worth and dignity of every person;
  - Justice, equity and compassion in human relations;
  - Acceptance of one another and encouragement to spiritual growth in our 
    congregations;
  - A free and responsible search for truth and meaning;
  - The right of conscience and the use of the democratic process within our 
    congregations and in society at large;
  - The goal of world community with peace, liberty, and justice for all;
  - Respect for the interdependent web of all existence of which we are a 
    part.

Unitarian Universalism (UU) draws from many sources:

  - Direct experience of that transcending mystery and wonder, affirmed in 
    all cultures, which moves us to a renewal of the spirit and an openness 
    to the forces which create and uphold life;
  - Words and deeds of prophetic women and men which challenge us to confront 
    powers and structures of evil with justice, compassion, and the 
    transforming power of love;
  - Wisdom from the world's religions which inspires us in our ethical and 
    spiritual life;
  - Jewish and Christian teachings which call us to respond to God's love by 
    loving our neighbors as ourselves;
  - Humanist teachings which counsel us to heed the guidance of reason and 
    the results of science, and warn us against idolatries of the mind and 
    spirit;
  - Spiritual teachings of earth-centered traditions which celebrate the 
    sacred circle of life and instruct us to live in harmony with the rhythms 
    of nature.
These principles and sources of faith are the backbone of our religious 
community.



  - Original Message - 
  From: Steve Richfield 
  To: agi@v2.listbox.com 
  Sent: Monday, December 08, 2008 8:14 PM
  Subject: Re: [agi] Religious attitudes to NBIC technologies


  Everyone,

  The problem here is that WE don't have anything to point to as OUR religion, 
so that everyone else has the power of stupidity in general and the 1st 
amendment in particular, yet we don't have any such power.

  I believe that it is possible to fill in this gap, but I don't wish to 
discuss incomplete solutions on public forums. However, if you have any ideas 
just how OUR religion should be structured, then please feel free to send them 
to me, preferably off-line. It would be a real shame to do a bad job of this, 
so I'm keeping my detailed thoughts to myself pending a "live birth".
   
  Note Buddhism's belief structure that does NOT include a Deity.

  Note Islam's various provisions for unbelievers to get a free pass, and 
sometimes even break a rule here and there, so long as they pretend to believe.

  Any thoughts?

  Steve Richfield
  
  On 12/8/08, Philip Hunt <[EMAIL PROTECTED]> wrote: 
2008/12/8 Bob Mottram <[EMAIL PROTECTED]>:
> People who are highly religious tend to be very "past positive"
> according the Zimbardo classification of people according to their
> temporal orientation. [...]
> I agree that in time we will see more polarization around a variety of
> technology related issues.

You're probably right. Part of the problem is that these people
[correctly] believe that science and technology are destroying their
worldview. And as the gaps in scientific knowledge decrease, there's
less room for the "God of the gaps" to occupy.

Having said that, I'm not aware that nanotechnology or AI is
specifically prohibited by any of the major religions. And if one
society forgoes science, it will just get outcompeted by its
neighbours.

--
Philip Hunt, <[EMAIL PROTECTED]>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html











Re: [agi] The Future of AGI

2008-11-26 Thread Mark Waser
- Original Message - 
From: "Mike Tintner" <[EMAIL PROTECTED]>

I should explain "rationality"


No Mike, you *really* shouldn't.  Repurposing words like you do merely leads 
to confusion not clarity . . . .


Actual general intelligence in humans and animals is indisputably 
continuously "screen-based."


You keep contending this with absolutely no evidence or proof.

You can have conscious intelligence without language, logic or maths. You 
can't have it without a "screen" - the continuous movie of consciousness. 
And that screen is not just vision but sound.


And how do you know this?

If you're smart,  I suggest, you'll acknowledge the truth here, which is 
that you know next to nothing about imaginative intelligence


I see, so if Ben is smart he'll acknowledge that you, with far less 
knowledge and experience, have the correct answer (despite being unable to 
explain it coherently enough to convince *anyone*).








Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser

???  Did you read the article?


Absolutely.  I don't comment on things without reading them (unlike some 
people on this list).  Not only that, I also read the paper that someone was 
nice enough to send the link for.


Now his 'new' theory may be old hat to you personally,  but apparently 
not to the majority of AI researchers, (according to the article).


The phrase "according to the article" is what is telling.  It is an improper 
(and incorrect) portrayal of "the majority of AI researchers".


He must be saying something a bit unusual to have been fighting for ten 
years to get it published and accepted enough for him to now have been 
invited to do a workshop on his theory.


Something a bit unusual like Mike Tintner fighting us on this list for ten 
years and then finding someone to accept his theories and run a workshop? 
Note who is running the workshop . . . . not the normal BICA community for 
sure . . . .




- Original Message - 
From: "BillK" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, November 20, 2008 10:37 AM
Subject: **SPAM** Re: [agi] Professor Asim Roy Finally Publishes 
Controversial Brain Theory



On Thu, Nov 20, 2008 at 3:06 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Yeah.  Great headline -- "Man beats dead horse beyond death!"

I'm sure that there will be more details at 11.

Though I am curious . . . .  BillK, why did you think that this was worth
posting?




???  Did you read the article?

---
Quote:
In the late '90s, Asim Roy, a professor of information systems at
Arizona State University, began to write a paper on a new brain
theory. Now, 10 years later and after several rejections and
resubmissions, the paper "Connectionism, Controllers, and a Brain
Theory" has finally been published in the November issue of IEEE
Transactions on Systems, Man, and Cybernetics – Part A: Systems and
Humans.

Roy's theory undermines the roots of connectionism, and that's why his
ideas have experienced a tremendous amount of resistance from the
cognitive science community. For the past 15 years, Roy has engaged
researchers in public debates, in which it's usually him arguing
against a dozen or so connectionist researchers. Roy says he wasn't
surprised at the resistance, though.

"I was attempting to take down their whole body of science," he
explained. "So I would probably have behaved the same way if I were in
their shoes."

No matter exactly where or what the brain controllers are, Roy hopes
that his theory will enable research on new kinds of learning
algorithms. Currently, restrictions such as local and memoryless
learning have limited AI designers, but these concepts are derived
directly from the idea that control is local, not high-level.
Possibly, a controller-based theory could lead to the development of
truly autonomous learning systems, and a next generation of
intelligent robots.

The sentiment that the "science is stuck" is becoming common to AI
researchers. In July 2007, the National Science Foundation (NSF)
hosted a workshop on the "Future Challenges for the Science and
Engineering of Learning." The NSF's summary of the "Open Questions in
Both Biological and Machine Learning" [see below] from the workshop
emphasizes the limitations in current approaches to machine learning,
especially when compared with biological learners' ability to learn
autonomously under their own self-supervision:

"Virtually all current approaches to machine learning typically
require a human supervisor to design the learning architecture, select
the training examples, design the form of the representation of the
training examples, choose the learning algorithm, set the learning
parameters, decide when to stop learning, and choose the way in which
the performance of the learning algorithm is evaluated. This strong
dependence on human supervision is greatly retarding the development
and ubiquitous deployment of autonomous artificial learning systems.
Although we are beginning to understand some of the learning systems
used by brains, many aspects of autonomous learning have not yet been
identified."

Roy sees the NSF's call for a new science as an open door for a new
theory, and he plans to work hard to ensure that his colleagues
realize the potential of the controller model. Next April, he will
present a four-hour workshop on autonomous machine learning, having
been invited by the Program Committee of the International Joint
Conference on Neural Networks (IJCNN).
-


Now his 'new' theory may be old hat to you personally,  but apparently
not to the majority of AI researchers, (according to the article).  He
must be saying something a bit unusual to have been fighting for ten
years to get it published and accepted enough for him to now have been
invited to do a workshop on his theory.

RE: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Mark Waser
Yeah.  Great headline -- "Man beats dead horse beyond death!"

I'm sure that there will be more details at 11.

Though I am curious . . . .  BillK, why did you think that this was worth 
posting?
  - Original Message - 
  From: Derek Zahn 
  To: agi@v2.listbox.com 
  Sent: Thursday, November 20, 2008 9:43 AM
  Subject: **SPAM** RE: [agi] Professor Asim Roy Finally Publishes 
Controversial Brain Theory



  From the paper:
   
  > This paper has proposed a new paradigm for the 
  > internal mechanisms of the brain, one that postulates 
  > that there are parts of the brain that control other parts. 
   
  Sometimes I despair.
   







Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
>> Seed AI is a myth.

Ah.  Now I get it.  You are on this list solely to try to slow down progress as 
much as possible . . . . (sorry that I've been so slow to realize this)

add-rule kill-file "Matt Mahoney"
  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 8:23 PM
  Subject: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other 
dangerous technologies...


Steve, what is the purpose of your political litmus test? If you are 
trying to assemble a team of seed-AI programmers with the "correct" ethics, 
forget it. Seed AI is a myth.
http://www.mattmahoney.net/agi2.html (section 2).

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Steve Richfield <[EMAIL PROTECTED]> wrote:

  From: Steve Richfield <[EMAIL PROTECTED]>
  Subject: Re: [agi] My prospective plan to neutralize AGI and other 
dangerous technologies...
  To: agi@v2.listbox.com
  Date: Tuesday, November 18, 2008, 6:39 PM


  Richard and Bill,


  On 11/18/08, BillK <[EMAIL PROTECTED]> wrote: 
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:
> I see how this would work:  crazy people never tell lies, so 
you'd be able
> to nail 'em when they gave the wrong answers.

Yup. That's how they pass lie detector tests as well.

They sincerely believe the garbage they spread around.

  In 1994 I was literally sold into servitude in Saudi Arabia as a sort 
of slave programmer (In COBOL on HP-3000 computers) to the Royal Saudi Air 
Force. I managed to escape that situation with the help of the same Wahhabist 
Sunni Muslims that are now causing so many problems. With that background, I 
think I understand them better than most people.

  As in all other societies, they are not given the whole truth, e.g. 
most have never heard of the slaughter at Medina, and believe that Mohamed 
never hurt anyone at all.

  My hope and expectation is that, by allowing people to research 
various issues as they work on their test, a LOT of people who might 
otherwise fail the test will instead reevaluate their beliefs, at least enough 
to come up with the right answers, whether or not they truly believe them. At 
least that level of understanding assures that they can carry on a reasoned 
conversation. This is a MAJOR problem now. Even here on this forum, many people 
still don't get reverse reductio ad absurdum.

  BTW, I place most of the blame for the middle east impasse on the 
West rather than on the East. The Koran says that most of the evil in the world 
is done by people who think they are doing good, which brings with it a good 
social mandate to publicly reconsider and defend any actions that others claim 
to be evil. The next step is to proclaim evil doers as "unwitting agents of 
Satan". If there is still no good defense, then they drop the "unwitting". Of 
course, we stupid uncivilized Westerners have fallen into this, and so 19 brave 
men sacrificed their lives just to get our attention, but even that failed to 
work as planned. Just what DOES it take to get our attention - a nuke in NYC? 
What the West has failed to realize is that they are playing a losing hand, but 
nonetheless, they just keep increasing the bet on the expectation that the 
other side will fold. They won't. I was as much intending my test for the sort 
of stupidity that nearly all Americans harbor as that carried by Al Qaeda. 
Neither side seems to be playing with a full deck.

  Steve Richfield


   







Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
>> I am just trying to point out the contradictions in Mark's sweeping 
>> generalizations about the treatment of intelligent machines

Huh?  That's what you're trying to do?  Normally people do that by pointing to 
two different statements and arguing that they contradict each other.  Not by 
creating new, really silly definitions and then trying to posit a universe 
where blue equals red so everybody is confused.

>> But to be fair, such criticism is unwarranted. 

So exactly why are you persisting?

>> Ethical beliefs are emotional, not rational,

Ethical beliefs are subconscious and deliberately obscured from the conscious 
mind so that defections can be explained away without triggering other 
primates' lie-detecting senses.  However, contrary to your antiquated beliefs, 
they are *purely* a survival trait with a very solid grounding.

>> Ethical beliefs are also algorithmically complex

Absolutely not.  Ethical beliefs are actually pretty darn simple as far as the 
subconscious is concerned.  It's only when the conscious "rational" mind gets 
involved that ethics are twisted beyond recognition (just like all your 
arguments).

>> so the result of this argument could only result in increasingly complex 
>> rules to fit his model

Again, absolutely not.  You have no clue as to what my argument is yet you 
fantasize that you can predict its results.  BAH!

>> For the record, I do have ethical beliefs like most other people

Yet you persist in arguing otherwise.  *Most* people would call that dishonest, 
deceitful, and time-wasting. 

>> The question is not how should we interact with machines, but how will we? 

No, it isn't.  Study the results on ethical behavior when people are convinced 
that they don't have free will.

= = = = = 

BAH!  I should have quit answering you long ago.  No more.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 7:58 PM
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)


Just to clarify, I'm not really interested in whether machines feel 
pain. I am just trying to point out the contradictions in Mark's sweeping 
generalizations about the treatment of intelligent machines. But to be fair, 
such criticism is unwarranted. Mark is arguing about ethics. Everyone has 
ethical beliefs. Ethical beliefs are emotional, not rational, although we often 
forget this. Ethical beliefs are also algorithmically complex, so the result of 
this argument could only result in increasingly complex rules to fit his model. 
It would be unfair to bore the rest of this list with such a discussion.

For the record, I do have ethical beliefs like most other people, but 
they are irrelevant to the design of AGI. The question is not how should we 
interact with machines, but how will we? For example, when we develop the 
technology to simulate human minds in general, or to simulate specific humans 
who have died, common ethical models among humans will probably result in the 
granting of legal and property rights to these simulations. Since these 
simulations could reproduce, evolve, and acquire computing resources much 
faster than humans, the likely result will be human extinction, or viewed 
another way, our evolution into a non-DNA based life form. I won't offer an 
opinion on whether this is desirable or not, because my opinion would be based 
on my ethical beliefs.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

  From: Ben Goertzel <[EMAIL PROTECTED]>
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that 
actually does solve the problem of consciousness--correction)
  To: agi@v2.listbox.com
  Date: Tuesday, November 18, 2008, 6:29 PM





  On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> 
wrote:

--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> Autobliss has no grounding, no internal feedback, and no
> volition.  By what definitions does it feel pain?

Now you are making up new rules to decide that autobliss doesn't 
feel pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.

You stated that machines can feel pain, and you stated that we 
don't get to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) 

  Clearly, this can be done, and has largely been done already ... 
though cutting and pasting or summarizing the relevant literature in emails 
would not be a productive use of time
   
and prove that these criteria are valid?

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


I made up no rules.  I merely asked a question.  You are the one who makes a 
definition like this and then says that it is up to people to decide whether 
other humans feel pain or not.  That is hypocritical to an extreme.


I also believe that your definition is a total crock that was developed for 
no purpose other than to support your BS.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


I stated that *SOME* future machines will be able to feel pain.  I can 
define grounding, internal feedback and volition but feel no need to do so 
"as properties of a Turing machine" and decline to attempt to prove anything 
to you since you're so full of it that your mother couldn't prove to you 
that you were born.



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, November 18, 2008 6:26 PM
Subject: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)




--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:


Autobliss has no grounding, no internal feedback, and no
volition.  By what definitions does it feel pain?


Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]
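
The definition under dispute above (pain as negative reinforcement in a 
system that learns) is simple enough to write down. A minimal Python sketch 
of such a system, purely for illustration (this is not Matt Mahoney's actual 
autobliss program; the action set, reward values, and learning rule here are 
invented for the example):

import random

class TinyLearner:
    """A minimal system that learns from reinforcement: it adjusts the
    probability of each action according to the reward it receives.  Under
    the definition quoted above, the negative reinforcement it receives
    would count as "pain"; whether that label means anything is exactly
    what this thread disputes."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}

    def act(self):
        # Sample an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action  # fallback for floating-point rounding

    def reinforce(self, action, reward):
        # Negative reward ("pain") makes the action less likely; positive
        # reward makes it more likely.
        self.weights[action] = max(0.01, self.weights[action] + reward)

# Train it to avoid the action that is always punished.
learner = TinyLearner(["left", "right"])
for _ in range(200):
    a = learner.act()
    learner.reinforce(a, -0.1 if a == "left" else 0.1)
print(learner.weights)  # "left" ends up far less likely than "right"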











Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser
Aren't you the one who decided that autobliss feels pain? Or did you 
decide that it doesn't?


Autobliss has no grounding, no internal feedback, and no volition.  By what 
definitions does it feel pain?


On the other hand, by what definitions do people not feel pain (other than 
by some fictitious what-if scenario solely designed to split logical and 
ethical hairs with ABSOLUTELY NO PHYSICAL BASIS for even starting to believe 
it).


Go back to happily playing with yourself in your nice little solipsistic 
world.  You clearly aren't reliably attached to this one.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, November 18, 2008 5:05 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction




--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:


> I mean that people are free to decide if others feel pain.

Wow!  You are one sick puppy, dude.  Personally, you have
just hit my "Do not bother debating with" list.

You can "decide" anything you like -- but that
doesn't make it true.


Aren't you the one who decided that autobliss feels pain? Or did you 
decide that it doesn't?



-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Mark Waser

My problem is if qualia are atomic, with no differentiable details, why
do some "feel" different than others -- shouldn't they all be separate
but equal? "Red" is relatively neutral, while "searing hot" is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it "feels"
qualitatively different. If it was just something like increased
activity (franticness) in response to "searing hot," then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


Maybe I missed it, but why do you assume that because qualia are atomic they 
have no differentiable details?  Evolution is, quite correctly, going 
to give pain qualia higher priority and less ability to be shut down than 
red qualia.  In a good representation system, that means that searing hot is 
going to be *very*  and very tough to ignore.
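
A toy illustration of what "higher priority and less ability to be shut 
down" might look like in a representation system (the class and field names 
below are invented for the example, not anything from Richard's paper):

from dataclasses import dataclass

@dataclass
class Quale:
    """Toy representation of a quale with the two attributes discussed above.
    Field names are invented for illustration only."""
    label: str
    priority: float         # how strongly it competes for attention
    suppressibility: float  # how easily other processes can shut it down

def attend(qualia):
    """Pick the quale that wins the competition for attention."""
    return max(qualia, key=lambda q: q.priority * (1.0 - q.suppressibility))

current = [
    Quale("red patch", priority=0.2, suppressibility=0.9),
    Quale("searing hot", priority=0.95, suppressibility=0.05),
]
print(attend(current).label)  # -> "searing hot"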




- Original Message - 
From: "Harry Chesley" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, November 18, 2008 1:57 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem of 
consciousness




Richard Loosemore wrote:

Harry Chesley wrote:

Richard Loosemore wrote:

I completed the first draft of a technical paper on consciousness
the other day.   It is intended for the AGI-09 conference, and it
can be found at:

http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf



One other point: Although this is a possible explanation for our
subjective experience of qualia like "red" or "soft," I don't see
it explaining "pain" or "happy" quite so easily. You can
hypothesize a sort of mechanism-level explanation of those by
relegating them to the older or "lower" parts of the brain (i.e.,
they're atomic at the conscious level, but have more effects at the
physiological level (like releasing chemicals into the system)),
but that doesn't satisfactorily cover the subjective side for me.


I do have a quick answer to that one.

Remember that the core of the model is the *scope* of the analysis
mechanism.  If there is a sharp boundary (as well there might be),
then this defines the point where the qualia kick in.  Pain receptors
are fairly easy:  they are primitive signal lines.  Emotions are, I
believe, caused by clusters of lower brain structures, so the
interface between "lower brain" and "foreground" is the place where
the foreground sees a limit to the analysis mechanisms.

More generally, the significance of the "foreground" is that it sets
a boundary on how far the analysis mechanisms can reach.

I am not sure why that would seem less satisfactory as an explanation
of the subjectivity.  It is a "raw feel", and that is the key idea,
no?


My problem is if qualia are atomic, with no differentiable details, why
do some "feel" different than others -- shouldn't they all be separate
but equal? "Red" is relatively neutral, while "searing hot" is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it "feels"
qualitatively different. If it was just something like increased
activity (franticness) in response to "searing hot," then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.











Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser

I mean that people are free to decide if others feel pain.


Wow!  You are one sick puppy, dude.  Personally, you have just hit my "Do 
not bother debating with" list.


You can "decide" anything you like -- but that doesn't make it true.

- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 4:44 PM
Subject: RE: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction




--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:

First, it is not clear "people
are free to decide what makes pain "real"," at least
subjectively real.


I mean that people are free to decide if others feel pain. For example, a 
scientist may decide that a mouse does not feel pain when it is stuck in 
the eye with a needle (the standard way to draw blood) even though it 
squirms just like a human would. It is surprisingly easy to modify one's 
ethics to feel this way, as proven by the Milgram experiments and Nazi war 
crime trials.


If we have anything close to the advances in brain scanning and brain science 
that Kurzweil predicts, we should come to understand the correlates of 
consciousness quite well


No. I used examples like autobliss ( 
http://www.mattmahoney.net/autobliss.txt ) and the roundworm c. elegans as 
examples of simple systems whose functions are completely understood, yet 
the question of whether such systems experience pain remains a 
philosophical question that cannot be answered by experiment.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mark Waser
Autobliss responds to pain by changing its behavior to make it less 
likely. Please explain how this is different from human suffering. And 
don't tell me it's because one is human and the other is a simple program, 
because...


Why don't you resend the link to this new autobliss that "responds to pain 
by changing its behavior to make it less likely" and clearly explain why 
what you refer to as "pain" for autobliss isn't just some ungrounded label 
that has absolutely nothing to do with pain in any real sense of the word? 
As far as I have seen, your autobliss argument is akin to claiming that a 
rock feels pain and runs away to avoid pain when I kick it.


So either pain is real to both, or to neither, or there is some other 
criteria which you haven't specified, in which case I would like to know 
what that is.


Absolutely.  Pain is real for both.

- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 2:17 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction




--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote:


>> No it won't, because people are free to decide what makes pain "real".

What?  You've got to be kidding . . . .  What makes
pain real is how the sufferer reacts to it -- not some
abstract wishful thinking that we use to justify our
decisions of how we wish to behave.


Autobliss responds to pain by changing its behavior to make it less 
likely. Please explain how this is different from human suffering. And 
don't tell me it's because one is human and the other is a simple program, 
because...



>> Do you think that the addition of intelligent
robots will make the boundary between human and non-human
any sharper?

No, I think that it will make it much fuzzier . . . . but
since the boundary is just a strawman for lazy thinkers,
removing it will actually make our ethics much sharper.


So either pain is real to both, or to neither, or there is some other 
criteria which you haven't specified, in which case I would like to know 
what that is.


-- Matt Mahoney, [EMAIL PROTECTED]













Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser

An excellent question from Harry . . . .

So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back the 
memories and analyze them, or (b) that I was actually not experiencing any 
qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just had, 
I would be able to appreciate the last few seconds of subjective 
experience.


So . . . . what if the *you* that you/we speak of is simply the attentional 
mechanism?  What if qualia are simply the way that other brain processes 
appear to you/the attentional mechanism?


Why would "you" be experiencing qualia when you were on autopilot?  It's 
quite clear from experiments that humans don't "see" things in their visual 
field when they are concentrating on other things in their visual field (for 
example, when you are told to concentrate on counting something that someone 
is doing in the foreground while a man in an ape suit walks by in the 
background).  Do you really have qualia from stuff that you don't sense 
(even though your sensory apparatus picked it up, it was clearly discarded 
at some level below the conscious/attentional level)?




- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 1:46 PM
Subject: **SPAM** Re: [agi] A paper that actually does solve the problem of 
consciousness




Harry Chesley wrote:

On 11/14/2008 9:27 AM, Richard Loosemore wrote:


 I completed the first draft of a technical paper on consciousness the
 other day.   It is intended for the AGI-09 conference, and it can be
 found at:


http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


Good paper.

A related question: How do you explain the fact that we sometimes are 
aware of qualia and sometimes not? You can perform the same actions 
paying attention or "on auto pilot." In one case, qualia "manifest," 
while in the other they do not. Why is that?


I actually *really* like this question:  I was trying to compose an answer 
to it while lying in bed this morning.


This is what I started referring to (in a longer version of the paper) as 
a "Consciousness Holiday".


In fact, if we start unpacking the idea of what we mean by conscious 
experience, we start to realize that it only really exists when we look at 
it.  It is not even logically possible to think about consciousness - any 
form of it, including *memories* of the consciousness that I had a few 
minutes ago, when I was driving along the road and talking to my companion 
without bothering to look at several large towns that we drove through - 
without applying the analysis mechanism to the consciousness episode.


So when I don't remember anything about those towns, from a few minutes 
ago on my road trip, is it because (a) the attentional mechanism did not 
bother to lay down any episodic memory traces, so I cannot bring back the 
memories and analyze them, or (b) that I was actually not experiencing any 
qualia during that time when I was on autopilot?


I believe that the answer is (a), and that IF I had stopped at any point 
during the observation period and thought about the experience I just had, 
I would be able to appreciate the last few seconds of subjective 
experience.


The real reply to your question goes much much deeper, and it is 
fascinating because we need to get a handle on creatures that probably do 
not do any reflective, language-based philosophical thinking (like guinea 
pigs and crocodiles).  I want to say more, but will have to set it down in 
a longer form.


Does this seem to make sense so far, though?




Richard Loosemore










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
I have no doubt that if you did the experiments you describe, that the 
brains would be rearranged consistently with your predictions. But what 
does that say about consciousness?


What are you asking about consciousness?


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 1:11 PM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Mon, 11/17/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Okay, let me phrase it like this:  I specifically say (or
rather I should have done... this is another thing I need to
make more explicit!) that the predictions are about making
alterations at EXACTLY the boundary of the "analysis
mechanisms".

So, when we test the predictions, we must first understand
the mechanics of human (or AGI) cognition well enough to be
able to locate the exact scope of the analysis mechanisms.

Then, we make the tests by changing things around just
outside the reach of those mechanisms.

Then we ask subjects (human or AGI) what happened to their
subjective experiences.  If the subjects are ourselves -
which I strongly suggest must be the case - then we can ask
ourselves what happened to our subjective experiences.

My prediction is that if the swaps are made at that
boundary, then things will be as I state.  But if changes
are made within the scope of the analysis mechanisms, then
we will not see those changes in the qualia.

So the theory could be falsified if changes in the qualia
are NOT consistent with the theory, when changes are made at
different points in the system.  The theory is all about the
analysis mechanisms being the culprit, so in that sense it
is extremely falsifiable.

Now, correct me if I am wrong, but is there anywhere else
in the literature where you have you seen anyone make a
prediction that the qualia will be changed by the alteration
of a specific mechanism, but not by other, fairly similar
alterations?


Your predictions are not testable. How do you know if another person has 
experienced a change in qualia, or is simply saying that they do? If you 
do the experiment on yourself, how do you know if you really experience a 
change in qualia, or only believe that you do?


There is a difference, you know. Belief is only a rearrangement of your 
neurons. I have no doubt that if you did the experiments you describe, 
that the brains would be rearranged consistently with your predictions. 
But what does that say about consciousness?


-- Matt Mahoney, [EMAIL PROTECTED]











Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-17 Thread Mark Waser

No it won't, because people are free to decide what makes pain "real".


What?  You've got to be kidding . . . .  What makes pain real is how the 
sufferer reacts to it -- not some abstract wishful thinking that we use to 
justify our decisions of how we wish to behave.


I'm sorry that it's taken me this long to realize exactly how morally 
bankrupt you are.



What about autobliss?


Autobliss is a toy that proves absolutely nothing.


It learns to avoid negative reinforcement and it says "ouch".


Not the version that you posted . . . . more hot air and hyperbole.

Do you really think that if we build AGI in the likeness of a human mind, 
and stick it with a pin and it says "ouch", that we will finally have an 
answer to the question of whether machines have a consciousness?


I think that we have the answer now but that people like you won't be 
convinced even if given overwhelming proof.


100 years ago there was little controversy over animal rights, 
euthanasia, abortion, or capital punishment.


Because our standard of living wasn't high enough to support it . . . .

Do you think that the addition of intelligent robots will make the 
boundary between human and non-human any sharper?


No, I think that it will make it much fuzzier . . . . but since the boundary 
is just a strawman for lazy thinkers, removing it will actually make our 
ethics much sharper.



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 12:44 PM
Subject: Re: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction



--- On Mon, 11/17/08, Ed Porter <[EMAIL PROTECTED]> wrote:

For example, in
fifty years, I think it is quite possible we will be able to say with some
confidence whether certain machine intelligences we design are conscious or not,
not,
and whether their pain is as real as the pain of another type of animal, 
such

as a chimpanzee, dog, bird, reptile, fly, or amoeba.


No it won't, because people are free to decide what makes pain "real". The 
question is not resolved for simple systems which are completely understood, 
for example, the 302 neuron nervous system of C. elegans. If it can be 
trained by reinforcement learning, is that "real" pain? What about 
autobliss? It learns to avoid negative reinforcement and it says "ouch". Do 
you really think that if we build AGI in the likeness of a human mind, and 
stick it with a pin and it says "ouch", that we will finally have an answer 
to the question of whether machines have a consciousness?


And there is no reason to believe the question will be easier in the future. 
100 years ago there was little controversy over animal rights, euthanasia, 
abortion, or capital punishment. Do you think that the addition of 
intelligent robots will make the boundary between human and non-human any 
sharper?


-- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser
An ethical model tells you what is good or bad. It does not make useful 
predictions.


What determines whether something is good or bad?  Merely the declaration of 
the model?  That is precisely what makes *your* ethics ungrounded and 
useless . . . .


Why should you be ethical?  I would argue it is so that you will have the 
best probability of having the best life possible.  This is an eminently 
testable hypothesis.  Do what the model says is good, see what happens.  Do 
what the model says is bad, see what happens.  Yes, you clearly need to 
extrapolate but your model gives nothing except ivory tower pontification.



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 12:20 PM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Mon, 11/17/08, Mark Waser <[EMAIL PROTECTED]> wrote:


> How do you propose testing whether a model is correct or not?

By determining whether it is useful and predictive -- just
like what we always do when we're practicing science (as
opposed to spouting BS).


An ethical model tells you what is good or bad. It does not make useful 
predictions.


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-17 Thread Mark Waser

How do you propose grounding ethics?


Ethics is building and maintaining healthy relationships for the betterment 
of all.  Evolution has equipped us all with a good solid moral sense that 
we frequently don't/can't override even with our short-sighted selfish 
desires (which, more often than not, eventually end up screwing us over 
when we follow them).  It's pretty easy to ground ethics as long as you 
realize that there are some cases that are just too close to call with the 
information that you possess at the time you need to make a decision.  But 
then again, that's precisely what intelligence is -- making effective 
decisions under uncertainty.


I have a complex model that says some things are right and others are 
wrong.


That's nice -- but you've already pointed out that your model has numerous 
shortcomings such that you won't even stand behind it.  Why do you keep 
bringing it up?  It's like saying "I have an economic theory" when you 
clearly don't have the expertise to form a competent one.



So does everyone else. These models don't agree.


And lots of people have theories of creationism.  Do you want to use that to 
argue that evolution is incorrect?



How do you propose testing whether a model is correct or not?


By determining whether it is useful and predictive -- just like what we 
always do when we're practicing science (as opposed to spouting BS).


If everyone agreed that torturing people was wrong, then torture wouldn't 
exist.


Wrong.  People agree that things are wrong and then they go and do them 
anyways because they believe that it is beneficial for them.  Why do you 
spout obviously untrue BS?


How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition 
is that some entities are conscious and others are not, according to 
whichever definition you accept. But so what?


Wow!  You really do practice useless sophistry.  For definitions, correct 
simply means useful and predictive.  I'll go with whichever definition most 
accurately reflects the world.  Are you trying to propose that there is an 
absolute truth out there as far as definitions go?


Because people nevertheless make this arbitrary distinction in order to 
make ethical decisions.


So when lemmings go into the river you believe that they are correct and you 
should follow them?



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Monday, November 17, 2008 9:35 AM
Subject: Re: [agi] A paper that actually does solve the problem of 
consciousness




--- On Sun, 11/16/08, Mark Waser <[EMAIL PROTECTED]> wrote:

I wrote:

>> I think the reason that the hard question is
interesting at all is that it would presumably be OK to
torture a zombie because it doesn't actually experience
pain, even though it would react exactly like a human being
tortured. That's an ethical question. Ethics is a belief
system that exists in our minds about what we should or
should not do. There is no objective experiment you can do
that will tell you whether any act, such as inflicting pain
on a human, animal, or machine, is ethical or not. The only
thing you can measure is belief, for example, by taking a
poll.

What is the point to ethics?  The reason why you can't
do objective experiments is because *YOU* don't have a
grounded concept of ethics.  The second that you ground your
concepts in effects that can be seen in "the real
world", there are numerous possible experiments.


How do you propose grounding ethics? I have a complex model that says some 
things are right and others are wrong. So does everyone else. These models 
don't agree. How do you propose testing whether a model is correct or not? 
If everyone agreed that torturing people was wrong, then torture wouldn't 
exist.



The same is true of consciousness.  The hard problem of
consciousness is hard because the question is ungrounded.
Define all of the arguments in terms of things that appear
and matter in the real world and the question goes away.
It's only because you invent ungrounded unprovable
distinctions that the so-called hard problem appears.


How do you prove that Richard's definition of consciousness is correct and 
Colin's is wrong, or vice versa? All you can say about either definition 
is that some entities are conscious and others are not, according to 
whichever definition you accept. But so what?



Torturing a p-zombie is unethical because whether it feels
pain or not is 100% irrelevant in "the real
world".  If it 100% acts as if it feels pain, then for
all purposes that matter it does feel pain.  Why invent this
mystical situation where it doesn't feel pain yet acts
as if it does?


Because people nevertheless make this arbitrary distinction in order to make 
ethical decisions.

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread Mark Waser
I think the reason that the hard question is interesting at all is that 
it would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that 
exists in our minds about what we should or should not do. There is no 
objective experiment you can do that will tell you whether any act, such 
as inflicting pain on a human, animal, or machine, is ethical or not. The 
only thing you can measure is belief, for example, by taking a poll.


What is the point to ethics?  The reason why you can't do objective 
experiments is because *YOU* don't have a grounded concept of ethics.  The 
second that you ground your concepts in effects that can be seen in "the 
real world", there are numerous possible experiments.


The same is true of consciousness.  The hard problem of consciousness is 
hard because the question is ungrounded.  Define all of the arguments in 
terms of things that appear and matter in the real world and the question 
goes away.  It's only because you invent ungrounded unprovable distinctions 
that the so-called hard problem appears.


Torturing a p-zombie is unethical because whether it feels pain or not is 
100% irrelevant in "the real world".  If it 100% acts as if it feels pain, 
then for all purposes that matter it does feel pain.  Why invent this 
mystical situation where it doesn't feel pain yet acts as if it does?


Richard's paper attempts to solve the hard problem by grounding some of the 
silliness.  It's the best possible effort short of just ignoring the 
silliness and going on to something else that is actually relevant to the 
real world.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, November 15, 2008 10:02 PM
Subject: RE: [agi] A paper that actually does solve the problem of 
consciousness



--- On Sat, 11/15/08, Ed Porter <[EMAIL PROTECTED]> wrote:

With regard to the second notion,
that conscious phenomena are not subject to scientific explanation, there
is extensive evidence to the contrary. The prescient psychological writings of
William James, and Dr. Alexander Luria’s famous studies of the effects of
variously located bullet wounds on the minds of Russian soldiers after
World War II, both illustrate that human consciousness can be scientifically
studied. The effects of various drugs on consciousness have been
scientifically studied.


Richard's paper is only about the "hard" question of consciousness, that 
which distinguishes you from a P-zombie, not the easy question about mental 
states that distinguish between being awake or asleep.


I think the reason that the hard question is interesting at all is that it 
would presumably be OK to torture a zombie because it doesn't actually 
experience pain, even though it would react exactly like a human being 
tortured. That's an ethical question. Ethics is a belief system that exists 
in our minds about what we should or should not do. There is no objective 
experiment you can do that will tell you whether any act, such as inflicting 
pain on a human, animal, or machine, is ethical or not. The only thing you 
can measure is belief, for example, by taking a poll.


-- Matt Mahoney, [EMAIL PROTECTED]










Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser
This does not mean that certain practices are good or bad. If there was 
such a thing, then there would be no debate about war, abortion, 
euthanasia, capital punishment, or animal rights, because these questions 
could be answered experimentally.


Given a goal and a context, there is absolutely such a thing as good or bad. 
The problem with the examples that you cited is that you're attempting to 
generalize to a universal answer across contexts (because I would argue that 
there is a useful universal goal) which is nonsensical.  All of this can be 
answered both logically and experimentally if you just ask the right 
question instead of engaging in vacuous hand-waving about how tough it all 
is after you've mindlessly expanded your problem beyond solution.



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, November 11, 2008 5:58 PM
Subject: **SPAM** Re: [agi] Ethics of computer-based cognitive 
experimentation




--- On Tue, 11/11/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Your 'belief' explanation is a cop-out because it
does not address any of the issues that need to be addressed
for something to count as a definition or an explanation of
the facts that need to be explained.


As I explained, animals that have no concept of death have nevertheless 
evolved to fear most of the things that can kill them. Humans have learned 
to associate these things with death, and invented the concept of 
consciousness as the large set of features which distinguishes living 
humans from dead humans. Thus, humans fear the loss or destruction of 
consciousness, which is equivalent to death.


Consciousness, free will, qualia, and good and bad are universal human 
beliefs. We should not confuse them with truth by asking the wrong 
questions. Thus, Turing sidestepped the question of "can machines think?" 
by asking instead "can machines appear to think"?  Since we can't (by 
definition) distinguish doing something from appearing to do something, it 
makes no sense for us to make this distinction.


Likewise, asking if it is ethical to inflict simulated pain on machines is 
asking the wrong question. Evolution favors the survival of tribes that 
practice altruism toward other tribe members and teach these ethical 
values to their children. This does not mean that certain practices are 
good or bad. If there was such a thing, then there would be no debate 
about war, abortion, euthanasia, capital punishment, or animal rights, 
because these questions could be answered experimentally.


The question is not "how should machines be treated"? The question is "how 
will we treat machines"?



My proposal is being written up now and will be available
at the end of tomorrow.  It does address all of the facts
that need to be explained.


I am looking forward to reading it.

-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread Mark Waser

An understanding of what consciousness actually is, for
starters.

It is a belief.

No it is not.
And that statement ("It is a belief") is a cop-out theory.


An "understanding" of what consciousness is requires a consensus definition 
of what it is.


For most people, it seems to be an undifferentiated mess that includes 
attentional components, intentional components, understanding components, 
and, frequently, experiential components (i.e. qualia).


If you only buy into the first three and do it in a very concrete fashion, 
consciousness (and ethics) isn't all that tough.


Or you can follow Alice and start debating the "real" meaning of the third 
and whether or not the fourth truly exists in anyone except yourself.


Personally, if something has a will (intentionality/goals) that it can focus 
effectively (attention and understanding), I figure that you'd better 
start treating it ethically for your own long-term self-interest.


Of course, that then begs the question of what ethics is . . . . but I think 
that that is pretty easy to solve as well . . . . 







Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Mark Waser

I've noticed lately that the paranoid fear of computers becoming
intelligent and taking over the world has almost entirely disappeared
from the common culture.


Is this sarcasm, irony, or are you that unaware of current popular culture 
(e.g. Terminator Chronicles on TV, a new Terminator movie in the works, "I, 
Robot", etc.)?







Re: [agi] Occam's Razor and its abuse

2008-10-31 Thread Mark Waser

I think Hutter is being modest.


Huh?

So . . . . are you going to continue claiming that Occam's Razor is proved 
or are you going to stop (or are you going to point me to the proof)?


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 31, 2008 5:54 PM
Subject: Re: [agi] Occam's Razor and its abuse



I think Hutter is being modest.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Fri, 10/31/08, Mark Waser <[EMAIL PROTECTED]> wrote:


From: Mark Waser <[EMAIL PROTECTED]>
Subject: Re: [agi] Occam's Razor and its abuse
To: agi@v2.listbox.com
Date: Friday, October 31, 2008, 5:41 PM
Let's try this . . . .

In Universal Algorithmic Intelligence on page 20, Hutter
uses Occam's razor in the definition of ξ.

Then, at the bottom of the page, he merely claims that
"using ξ as an estimate for μ may be a reasonable thing
to do"

That's not a proof of Occam's Razor.

= = = = = =

He also references Occam's Razor on page 33 where he
says:

"We believe the answer to be negative, which on the
positive side would show the necessity of Occam's razor
assumption, and the distinguishedness of AIXI."

That's calling Occam's razor a necessary assumption
and bases that upon a *belief*.

= = = = = =

Where do you believe that he proves Occam's razor?


- Original Message - From: "Matt Mahoney"
<[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 29, 2008 10:46 PM
Subject: Re: [agi] Occam's Razor and its abuse


> --- On Wed, 10/29/08, Mark Waser
<[EMAIL PROTECTED]> wrote:
>
>> Hutter *defined* the measure of correctness using
>> simplicity as a component.
>> Of course, they're correlated when you do such
a thing.
>>  That's not a proof,
>> that's an assumption.
>
> Hutter defined the measure of correctness as the
accumulated reward by the agent in AIXI.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>















Re: [agi] Occam's Razor and its abuse

2008-10-31 Thread Mark Waser

Let's try this . . . .

In Universal Algorithmic Intelligence on page 20, Hutter uses Occam's razor 
in the definition of ξ.


Then, at the bottom of the page, he merely claims that "using ξ as an 
estimate for μ may be a reasonable thing to do"


That's not a proof of Occam's Razor.
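
[For readers without the paper at hand, a minimal sketch of the construction 
being discussed, assuming the standard form from Hutter's papers rather than 
quoting them: ξ is the universal mixture over a class M of computable 
environments ν, each weighted by its complexity K(ν):

    \xi(x) \;=\; \sum_{\nu \in M} 2^{-K(\nu)} \, \nu(x)

The Occam's-razor content sits entirely in the 2^{-K(ν)} weights -- simpler 
(more compressible) environments receive exponentially more prior weight -- 
and that weighting is built into the definition rather than derived.]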

= = = = = =

He also references Occam's Razor on page 33 where he says:

"We believe the answer to be negative, which on the positive side would show 
the necessity of Occam's razor assumption, and the distinguishedness of 
AIXI."


That's calling Occam's razor a necessary assumption and bases that upon a 
*belief*.


= = = = = =

Where do you believe that he proves Occam's razor?


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 29, 2008 10:46 PM
Subject: Re: [agi] Occam's Razor and its abuse



--- On Wed, 10/29/08, Mark Waser <[EMAIL PROTECTED]> wrote:


Hutter *defined* the measure of correctness using
simplicity as a component.
Of course, they're correlated when you do such a thing.
 That's not a proof,
that's an assumption.


Hutter defined the measure of correctness as the accumulated reward by the 
agent in AIXI.
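
[A rough rendering of what "accumulated reward" means here, assuming the 
usual expectimax formulation from the AIXI papers rather than anything 
specific to this thread: at cycle k, with percepts x_i carrying rewards r_i 
and horizon m, the agent chooses

    a_k = \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m}
          (r_k + \cdots + r_m) \, \xi(x_1 \ldots x_m \mid a_1 \ldots a_m)

i.e. it maximizes ξ-expected future reward, with ξ the complexity-weighted 
mixture sketched above.]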


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser
Hutter proved (3), although as a general principle it was already a well 
established practice in machine learning. Also, I agree with (4) but this 
is not the primary reason to prefer simplicity.


Hutter *defined* the measure of correctness using simplicity as a component. 
Of course, they're correlated when you do such a thing.  That's not a proof, 
that's an assumption.


Regarding (4), I was deliberately ambiguous as to whether I meant 
implementation of a "thinking" system or implementation of thought itself.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 29, 2008 11:11 AM
Subject: Re: [agi] Occam's Razor and its abuse



--- On Wed, 10/29/08, Mark Waser <[EMAIL PROTECTED]> wrote:


> (1) Simplicity (in conclusions, hypothesis, theories,
> etc.) is preferred.
> (2) The preference to simplicity does not need a
> reason or justification.
> (3) Simplicity is preferred because it is correlated
> with correctness.
> I agree with (1), but not (2) and (3).

I concur but would add that (4) Simplicity is preferred
because it is
correlated with correctness *of implementation* (or ease of
implementation correctly :-)


Occam said (1) but had no proof. Hutter proved (3), although as a general 
principle it was already a well established practice in machine learning. 
Also, I agree with (4) but this is not the primary reason to prefer 
simplicity.


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
>> sorry, I should have been more precise.   There is some K so that we never 
>> need integers with algorithmic information exceeding K.

Ah . . . . but is K predictable?  Or do we "need" all the integers above it as 
a safety margin?   :-)

(What is the meaning of "need"?  :-)

The inductive proof to show that all integers are necessary as a safety margin 
is pretty obvious . . . .

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 10:38 AM
  Subject: Re: [agi] constructivist issues



  sorry, I should have been more precise.   There is some K so that we never 
need integers with algorithmic information exceeding K.


  On Wed, Oct 29, 2008 at 10:32 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> but we never need arbitrarily large integers in any particular case, we 
only need integers going up to the size of the universe ;-)

But measured in which units?  For any given integer, I can come up with 
(invent :-) a unit of measurement that requires a larger/greater number than 
that integer to describe the size of the universe.



;-)  Nice try, but . . . .  :-p

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 9:48 AM
  Subject: Re: [agi] constructivist issues



  but we never need arbitrarily large integers in any particular case, we 
only need integers going up to the size of the universe ;-)


  On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> However, it does seem clear that "the integers" (for instance) is 
not an entity with *scientific* meaning, if you accept my formalization of 
science in the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I
would argue is well-defined and useful in science.  What is meaning if not 
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  "well-defined" is not well-defined in my view...

  However, it does seem clear that "the integers" (for instance) is not 
an entity with *scientific* meaning, if you accept my formalization of science 
in the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have 
been WITH RESPECT TO THE DEFINITION OF NUMBERS since I was responding to 
"Numbers are not well-defined and can never be".  Further, I should not have 
said "information about numbers" when I meant "definition of numbers".  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question "So my question is, 
do you interpret this as meaning "Numbers are not well-defined and can never 
be" (constructivist), or do you interpret this as "It is impossible to pack all 
true information about numbers into an axiom system" (classical)?"

Does the statement that a formal system is "incomplete with respect 
to statements about numbers" mean that "Numbers are not well-defined and can 
never be".

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about 
constructivism as in Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
"Ick!"  I emphatically do not believe "When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence".



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper 
hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your 
argument as to why uncomputable entities are useless for science.  I'm going to 
need to go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  S

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser

Here's another slant . . . .

I really liked Pei's phrasing (which I consider to be the heart of 
"Constructivism: The Epistemology" :-)

Generally speaking, I'm not
"building some system that learns about the world", in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future.


Classicists (to me) seem to frequently want one and only one truth that must 
be accurate, complete, and not only provable but for proofs of all of its 
implications to exist (which is obviously thwarted by Tarski and Gödel).


So . . . . is it true that light is a particle? Is it true that light is a 
wave?


That's why Ben and I are stuck answering many of your questions with 
requests for clarification -- Which question -- pi or cat?  Which subset of 
what *might* be considered mathematics/arithmetic?  Why are you asking the 
question?


Certain statements appear obviously untrue (read inconsistent with the 
empirical world or our assumed extensions of it) in the vast majority of 
cases/contexts but many others are just/simply context-dependent.




- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 29, 2008 10:08 AM
Subject: Re: [agi] constructivist issues



Ben,

Thanks, that writeup did help me understand your viewpoint. However, I
don't completely understand/agree with the argument (one of the two,
not both!). My comments to that effect are posted on your blog.

About the earlier question...

(Mark) So Ben, how would you answer Abram's question "So my question
is, do you interpret this as meaning "Numbers are not well-defined and
can never be" (constructivist), or do you interpret this as "It is
impossible to pack all true information about numbers into an axiom
system" (classical)?"
(Ben) "well-defined" is not well-defined in my view...

To rephrase. Do you think there is a truth of the matter concerning
formally undecidable statements about numbers?

--Abram

On Tue, Oct 28, 2008 at 5:26 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:


Hi guys,

I took a couple hours on a red-eye flight last night to write up in more
detail my
argument as to why uncomputable entities are useless for science:

http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-for.html

Of course, I had to assume a specific formal model of science which may be
controversial.  But at any rate, I think I did succeed in writing down my
argument in a more clear way than I'd been able to do in scattershot emails.

The only real AGI relevance here is some comments on Penrose's nasty AI
theories, e.g.
in the last paragraph and near the intro...

-- Ben G


On Tue, Oct 28, 2008 at 2:02 PM, Abram Demski <[EMAIL PROTECTED]> 
wrote:


Mark,

That is thanks to Godel's incompleteness theorem. Any formal system
that describes numbers is doomed to be incomplete, meaning there will
be statements that can be constructed purely by reference to numbers
(no red cats!) that the system will fail to prove either true or
false.

So my question is, do you interpret this as meaning "Numbers are not
well-defined and can never be" (constructivist), or do you interpret
this as "It is impossible to pack all true information about numbers
into an axiom system" (classical)?

Hmm By the way, I might not be using the term "constructivist" in
a way that all constructivists would agree with. I think
"intuitionist" (a specific type of constructivist) would be a better
term for the view I'm referring to.

--Abram Demski

On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:

>>> Numbers can be fully defined in the classical sense, but not in the
>
> constructivist sense. So, when you say "fully defined question", do
> you mean a question for which all answers are stipulated by logical
> necessity (classical), or logical deduction (constructivist)?
>
> How (or why) are numbers not fully defined in a constructivist sense?
>
> (I was about to ask you whether or not you had answered your own
> question
> until that caught my eye on the second or third read-through).
>
>






--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, 
butcher a hog,

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
>> but we never need arbitrarily large integers in any particular case, we only 
>> need integers going up to the size of the universe ;-)

But measured in which units?  For any given integer, I can come up with (invent 
:-) a unit of measurement that requires a larger/greater number than that 
integer to describe the size of the universe.



;-)  Nice try, but . . . .  :-p

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 29, 2008 9:48 AM
  Subject: Re: [agi] constructivist issues



  but we never need arbitrarily large integers in any particular case, we only 
need integers going up to the size of the universe ;-)


  On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> However, it does seem clear that "the integers" (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I 
would argue is well-defined and useful in science.  What is meaning if not 
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  "well-defined" is not well-defined in my view...

  However, it does seem clear that "the integers" (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have been 
WITH RESPECT TO THE DEFINITION OF NUMBERS since I was responding to "Numbers 
are not well-defined and can never be".  Further, I should not have said 
"information about numbers" when I meant "definition of numbers".  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question "So my question is, do 
you interpret this as meaning "Numbers are not well-defined and can never be" 
(constructivist), or do you interpret this as "It is impossible to pack all 
true information about numbers into an axiom system" (classical)?"

Does the statement that a formal system is "incomplete with respect to 
statements about numbers" mean that "Numbers are not well-defined and can never 
be".

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as 
in Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
"Ick!"  I emphatically do not believe "When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence".



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  
:-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your 
argument as to why uncomputable entities are useless for science.  I'm going to 
need to go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

  That is thanks to Godel's incompleteness theorem. Any formal 
system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, 
NO!  It is not true that "any formal system" is doomed to be incomplete WITH 
RESPECT TO NUMBERS.

It is entirely possible (nay, almost certain) that there is a 
larger system where the information about numbers is complete but that the 
other things that t

Re: [agi] Occam's Razor and its abuse

2008-10-29 Thread Mark Waser

(1) Simplicity (in conclusions, hypothesis, theories, etc.) is preferred.
(2) The preference to simplicity does not need a reason or justification.
(3) Simplicity is preferred because it is correlated with correctness.
I agree with (1), but not (2) and (3).


I concur but would add that (4) Simplicity is preferred because it is 
correlated with correctness *of implementation* (or ease of implementation 
correctly :-)



- Original Message - 
From: "Pei Wang" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 28, 2008 10:15 PM
Subject: Re: [agi] Occam's Razor and its abuse



Eric,

I highly respect your work, though we clearly have different opinions
on what intelligence is, as well as on how to achieve it. For example,
though learning and generalization play central roles in my theory
about intelligence, I don't think PAC learning (or the other learning
algorithms proposed so far) provides a proper conceptual framework for
the typical situation of this process. Generally speaking, I'm not
"building some system that learns about the world", in the sense that
there is a correct way to describe the world waiting to be discovered,
which can be captured by some algorithm. Instead, learning to me is a
non-algorithmic open-ended process by which the system summarizes its
own experience, and uses it to predict the future. I fully understand
that most people in this field probably consider this opinion wrong,
though I haven't been convinced yet by the arguments I've seen so far.

Instead of addressing all of the relevant issues, in this discussion I
have a very limited goal. To rephrase what I said initially, I see
that under the term "Occam's Razor", currently there are three
different statements:

(1) Simplicity (in conclusions, hypothesis, theories, etc.) is preferred.

(2) The preference to simplicity does not need a reason or justification.

(3) Simplicity is preferred because it is correlated with correctness.

I agree with (1), but not (2) and (3). I know many people have
different opinions, and I don't attempt to argue with them here ---
these problems are too complicated to be settled by email exchanges.

However, I do hope to convince people in this discussion that the
three statements are not logically equivalent, and (2) and (3) are not
implied by (1), so to use "Occam's Razor" to refer to all of them is
not a good idea, because it is going to mix different issues.
Therefore, I suggest that people use "Occam's Razor" in its original and
basic sense, that is (1), and to use other terms to refer to (2) and
(3). Otherwise, when people talk about "Occam's Razor", I just don't
know what to say.

Pei

On Tue, Oct 28, 2008 at 8:09 PM, Eric Baum <[EMAIL PROTECTED]> wrote:


Pei> Triggered by several recent discussions, I'd like to make the
Pei> following position statement, though won't commit myself to long
Pei> debate on it. ;-)

Pei> Occam's Razor, in its original form, goes like "entities must not
Pei> be multiplied beyond necessity", and it is often stated as "All
Pei> other things being equal, the simplest solution is the best" or
Pei> "when multiple competing theories are equal in other respects,
Pei> the principle recommends selecting the theory that introduces the
Pei> fewest assumptions and postulates the fewest entities" --- all
Pei> from http://en.wikipedia.org/wiki/Occam's_razor

Pei> I fully agree with all of the above statements.

Pei> However, to me, there are two common misunderstandings associated
Pei> with it in the context of AGI and philosophy of science.

Pei> (1) To take this statement as self-evident or a stand-alone
Pei> postulate

Pei> To me, it is derived or implied by the insufficiency of
Pei> resources. If a system has sufficient resources, it has no good
Pei> reason to prefer a simpler theory.

With all due respect, this is mistaken.
Occam's Razor, in some form, is the heart of Generalization, which
is the essence (and G) of GI.

For example, if you study concept learning from examples,
say in the PAC learning context (related theorems
hold in some other contexts as well),
there are theorems to the effect that if you find
a hypothesis from a simple enough class of a hypotheses
it will with very high probability accurately classify new
examples chosen from the same distribution,

and conversely theorems that state (roughly speaking) that
any method that chooses a hypothesis from too expressive a class
of hypotheses will have a probability that can be bounded below
by some reasonable number like 1/7,
of having large error in its predictions on new examples--
in other words it is impossible to PAC learn without respecting
Occam's Razor.

For discussion of the above paragraphs, I'd refer you to
Chapter 4 of What is Thought? (MIT Press, 2004).
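
[A minimal concrete statement of the kind of PAC-style "Occam bound" being
alluded to here, assuming the textbook finite-hypothesis-class setting (not
a quote from the book): if a learner sees m i.i.d. labeled examples and
outputs any hypothesis h from a finite class H that is consistent with all
of them, then with probability at least 1 - \delta the true error of h
satisfies

    err(h) \le \frac{\ln|H| + \ln(1/\delta)}{m}

so the smaller (simpler) the class the hypothesis is drawn from, the tighter
the generalization guarantee.]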

In other words, if you are building some system that learns
about the world, it had better respect Occam's razor if you
want whatever it learns to apply to new experience.
(I use the term Occam's razor loosely; using
hypotheses that are hig

Re: [agi] constructivist issues

2008-10-29 Thread Mark Waser
>> However, it does seem clear that "the integers" (for instance) is not an 
>> entity with *scientific* meaning, if you accept my formalization of science 
>> in the blog entry I recently posted...

Huh?  Integers are a class (which I would argue is an entity) that I would 
argue is well-defined and useful in science.  What is meaning if not 
well-defined and useful?  I need to go back to your paper because I didn't get 
that out of it at all.


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 6:41 PM
  Subject: Re: [agi] constructivist issues



  "well-defined" is not well-defined in my view...

  However, it does seem clear that "the integers" (for instance) is not an 
entity with *scientific* meaning, if you accept my formalization of science in 
the blog entry I recently posted...




  On Tue, Oct 28, 2008 at 3:34 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have been WITH 
RESPECT TO THE DEFINITION OF NUMBERS since I was responding to "Numbers are not 
well-defined and can never be".  Further, I should not have said "information 
about numbers" when I meant "definition of numbers".  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question "So my question is, do you 
interpret this as meaning "Numbers are not well-defined and can never be" 
(constructivist), or do you interpret this as "It is impossible to pack all 
true information about numbers into an axiom system" (classical)?"

Does the statement that a formal system is "incomplete with respect to 
statements about numbers" mean that "Numbers are not well-defined and can never 
be".

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in 
Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
"Ick!"  I emphatically do not believe "When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence".



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your argument 
as to why uncomputable entities are useless for science.  I'm going to need to 
go back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus 
equivalent to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be 
incomplete with respect to statements about numbers... that is what Godel 
originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  
It is not true that "any formal system" is doomed to be incomplete WITH RESPECT 
TO NUMBERS.

It is entirely possible (nay, almost certain) that there is a larger 
system where the information about numbers is complete but that the other 
things that the system describes are incomplete. 



  So my question is, do you interpret this as meaning "Numbers are not
  well-defined and can never be" (constructivist), or do you interpret
  this as "It is impossible to pack all true information about numbers
  into an axiom system" (classical)?



Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the "can 
never be" is incorrect).

Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-) 


- Original Message - From: "Abr

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
>> Any formal system that contains some basic arithmetic apparatus equivalent 
>> to http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with 
>> respect to statements about numbers... that is what Godel originally 
>> showed...

Oh.  Ick!  My bad phrasing.  WITH RESPECT TO NUMBERS should have been WITH 
RESPECT TO THE DEFINITION OF NUMBERS since I was responding to "Numbers are not 
well-defined and can never be".  Further, I should not have said "information 
about numbers" when I meant "definition of numbers".  Argh!

= = = = = = = = 

So Ben, how would you answer Abram's question "So my question is, do you 
interpret this as meaning "Numbers are not well-defined and can never be" 
(constructivist), or do you interpret this as "It is impossible to pack all 
true information about numbers into an axiom system" (classical)?"

Does the statement that a formal system is "incomplete with respect to 
statements about numbers" mean that "Numbers are not well-defined and can never 
be".

= = = = = = = 

(Semi-)Retraction - maybe? (mostly for Abram).

Ick again!  I was assuming that we were talking about constructivism as in 
Constructivist epistemology 
(http://en.wikipedia.org/wiki/Constructivist_epistemology).  I have just had 
Constructivism (mathematics) pointed out to me 
(http://en.wikipedia.org/wiki/Constructivism_(mathematics))  All I can say is 
"Ick!"  I emphatically do not believe "When one assumes that an object does not 
exist and derives a contradiction from that assumption, one still has not found 
the object and therefore not proved its existence".



= = = = = = = = 

I'm quitting and going home now to avoid digging myself a deeper hole  :-)

Mark

PS.  Ben, I read and, at first glance, liked and agreed with your argument as 
to why uncomputable entities are useless for science.  I'm going to need to go 
back over it a few more times though.:-)

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 5:55 PM
  Subject: Re: [agi] constructivist issues



  Any formal system that contains some basic arithmetic apparatus equivalent to 
http://en.wikipedia.org/wiki/Peano_axioms is doomed to be incomplete with 
respect to statements about numbers... that is what Godel originally showed...


  On Tue, Oct 28, 2008 at 2:50 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete



Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  It 
is not true that "any formal system" is doomed to be incomplete WITH RESPECT TO 
NUMBERS.

It is entirely possible (nay, almost certain) that there is a larger system 
where the information about numbers is complete but that the other things that 
the system describes are incomplete.



  So my question is, do you interpret this as meaning "Numbers are not
  well-defined and can never be" (constructivist), or do you interpret
  this as "It is impossible to pack all true information about numbers
  into an axiom system" (classical)?



Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the "can 
never be" is incorrect).

Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-)


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 

Sent: Tuesday, October 28, 2008 5:02 PM

Subject: Re: [agi] constructivist issues



  Mark,

  That is thanks to Godel's incompleteness theorem. Any formal system
  that describes numbers is doomed to be incomplete, meaning there will
  be statements that can be constructed purely by reference to numbers
  (no red cats!) that the system will fail to prove either true or
  false.

  So my question is, do you interpret this as meaning "Numbers are not
  well-defined and can never be" (constructivist), or do you interpret
  this as "It is impossible to pack all true information about numbers
  into an axiom system" (classical)?

  Hmm By the way, I might not be using the term "constructivist" in
  a way that all constructivists would agree with. I think
  "intuitionist" (a specific type of constructivist) would be a better
  term for the view I'm referring to.

  --Abram Demski

  On Tue, Oct 28, 2008 at 4:13 

Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

That is thanks to Godel's incompleteness theorem. Any formal system
that describes numbers is doomed to be incomplete


Yes, any formal system is doomed to be incomplete.  Emphatically, NO!  It is 
not true that "any formal system" is doomed to be incomplete WITH RESPECT TO 
NUMBERS.


It is entirely possible (nay, almost certain) that there is a larger system 
where the information about numbers is complete but that the other things 
that the system describes are incomplete.



So my question is, do you interpret this as meaning "Numbers are not
well-defined and can never be" (constructivist), or do you interpret
this as "It is impossible to pack all true information about numbers
into an axiom system" (classical)?


Hmmm.  From a larger reference framework, the former 
claimed-to-be-constructivist view isn't true/correct because it clearly *is* 
possible that numbers may be well-defined within a larger system (i.e. the 
"can never be" is incorrect).


Does that mean that I'm a classicist or that you are mis-interpreting 
constructivism (because you're attributing a provably false statement to 
constructivists)?  I'm leaning towards the latter currently.  ;-)


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 28, 2008 5:02 PM
Subject: Re: [agi] constructivist issues



Mark,

That is thanks to Godel's incompleteness theorem. Any formal system
that describes numbers is doomed to be incomplete, meaning there will
be statements that can be constructed purely by reference to numbers
(no red cats!) that the system will fail to prove either true or
false.
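
[For reference, a compact statement of the theorem being invoked, in its
standard modern form (an editorial gloss, not text from the thread): if T is
a consistent, effectively axiomatized formal theory that interprets basic
arithmetic (e.g. contains the Peano axioms), then there is an arithmetic
sentence G_T such that

    T \nvdash G_T   and   T \nvdash \neg G_T

i.e. T can neither prove nor refute G_T, even though G_T is a statement
purely about numbers.]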

So my question is, do you interpret this as meaning "Numbers are not
well-defined and can never be" (constructivist), or do you interpret
this as "It is impossible to pack all true information about numbers
into an axiom system" (classical)?

Hmm By the way, I might not be using the term "constructivist" in
a way that all constructivists would agree with. I think
"intuitionist" (a specific type of constructivist) would be a better
term for the view I'm referring to.

--Abram Demski

On Tue, Oct 28, 2008 at 4:13 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say "fully defined question", do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

How (or why) are numbers not fully defined in a constructivist sense?

(I was about to ask you whether or not you had answered your own question
until that caught my eye on the second or third read-through).













Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say "fully defined question", do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

How (or why) are numbers not fully defined in a constructivist sense?

(I was about to ask you whether or not you had answered your own question 
until that caught my eye on the second or third read-through).



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 28, 2008 3:47 PM
Subject: Re: [agi] constructivist issues


Mark,

Thank you, that clarifies somewhat.

But, *my* answer to *your* question would seem to depend on what you
mean when you say "fully defined". Under the classical interpretation,
yes: the question is fully defined, so it is a "pi question". Under
the constructivist interpretation, no: the question is not fully
defined, so it is a "cat question".

Numbers can be fully defined in the classical sense, but not in the
constructivist sense. So, when you say "fully defined question", do
you mean a question for which all answers are stipulated by logical
necessity (classical), or logical deduction (constructivist)?

--Abram Demski

On Tue, Oct 28, 2008 at 3:28 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?


It depends.  Are you asking me a fully defined question within the current
axioms of what you call mathematical systems (i.e. a pi question) or a cat
question (which could *eventually* be defined by some massive extensions to
your mathematical systems but which isn't currently defined in what you're
calling mathematical systems)?

Saying that Gödel is about mathematical systems is not saying that it's not
about cat-including systems.

- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, October 28, 2008 12:06 PM
Subject: Re: [agi] constructivist issues











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?


It depends.  Are you asking me a fully defined question within the current 
axioms of what you call mathematical systems (i.e. a pi question) or a cat 
question (which could *eventually* be defined by some massive extensions to 
your mathematical systems but which isn't currently defined in what you're 
calling mathematical systems)?


Saying that Gödel is about mathematical systems is not saying that it's not 
about cat-including systems.


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 28, 2008 12:06 PM
Subject: Re: [agi] constructivist issues



Mark,

Yes, I do keep dropping the context. This is because I am concerned
only with mathematical knowledge at the moment. I should have been
more specific.

So, if I understand you right, you are saying that you take the
classical view when it comes to mathematics. In that case, shouldn't
you agree with the classical perspective on Godelian incompleteness,
since Godel's incompleteness theorem is about mathematical systems?

--Abram

On Tue, Oct 28, 2008 at 10:20 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi,

  We keep going around and around because you keep dropping my distinction
between two different cases . . . .

  The statement that "The cat is red" is undecidable by arithmetic because
it can't even be defined in terms of the axioms of arithmetic (i.e. it has
*meaning* outside of arithmetic).  You need to construct
additions/extensions to arithmetic to even start to deal with it.

  The statement that "Pi is a normal number" is decidable by arithmetic
because each of the terms has meaning in arithmetic (so it certainly can be
disproved by counter-example).  It may not be deducible from the axioms but
the meaning of the statement is contained within the axioms.

  The first example is what you call a constructivist view.  The second
example is what you call a classical view.  Which one I take is eminently
context-dependent and you keep dropping the context.  If the meaning of the
statement is contained within the system, it is decidable even if it is not
deducible.  If the meaning is beyond the system, then it is not decidable
because you can't even express what you're deciding.

  Mark


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, October 28, 2008 9:32 AM
Subject: Re: [agi] constructivist issues











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
>> The question that is puzzling, though, is: how can it be that these 
>> uncomputable, inexpressible entities are so bloody useful ;-)  ... for 
>> instance in differential calculus ...

Differential calculus doesn't use those individual entities . . . . 

>> Also, to say that uncomputable entities don't exist because they can't be 
>> finitely described, is basically just to *define* existence as "finite 
>> describability."

I never said any such thing.  I referenced a class of numbers that I defined as 
never physically manifesting and never being conceptually distinct and then 
asked if they existed.  Clearly some portion of your liver that I can't define 
finitely still exists because it is physically manifest.

>> So this is more a philosophical position on what "exists"  means than an 
>> argument that could convince anyone.

Yes, in that I basically defined my version of exists as physically manifest 
and/or described or invoked and then asked if that matched Abram's definition.  
No, in that you're now coming in with half (or less) of my definition and 
arguing that I'm unconvincing.  :-)


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 28, 2008 11:44 AM
  Subject: Re: [agi] constructivist issues



  Mark,

  The question that is puzzling, though, is: how can it be that these 
uncomputable, inexpressible entities are so bloody useful ;-)  ... for instance 
in differential calculus ...

  Also, to say that uncomputable entities don't exist because they can't be 
finitely described, is basically just to *define* existence as "finite 
describability."  So this is more a philosophical position on what "exists"  
means than an argument that could convince anyone.

  I have some more detailed thoughts on these issues that I'll write down 
sometime soon when I get the time.   My position is fairly close to yours but I 
think that with these sorts of issues, the devil is in the details.

  ben


  On Tue, Oct 28, 2008 at 6:53 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

Abram,

I could agree with the statement that there are uncountably many 
*potential* numbers but I'm going to argue that any number that actually exists 
is eminently describable.

Take the set of all numbers that are defined far enough after the decimal 
point that they never accurately describe anything manifest in the physical 
universe and are never described or invoked by any entity in the physical 
universe (specifically including a method for the generation of that number).

Pi is clearly not in the set since a) it describes all sorts of ratios in 
the physical universe and b) there is a clear formula for generating successive 
approximations of it.
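
[To make (b) concrete -- a minimal sketch, using the Leibniz series
pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... as one such generating formula; the
function name is just illustrative, not something from the thread:

    # Yield successive partial-sum approximations of pi via the Leibniz series.
    def pi_approximations(n_terms):
        total = 0.0
        for k in range(n_terms):
            total += (-1) ** k / (2 * k + 1)  # k-th term of the series
            yield 4 * total                   # scale the pi/4 partial sum up to pi

    # The approximations converge (slowly) toward 3.14159...
    approx = list(pi_approximations(100000))
    print(approx[0], approx[9], approx[99999])

Any real number with such a finite generating rule is "describable" in the
sense being argued here.]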

My question is -- do these numbers really exist?  And, if so, by what 
definition of exist since my definition is meant to rule out any form of 
manifestation whether physical or as a concept.

Clearly these numbers have the potential to exist -- but it should be 
equally clear that they do not actually "exist" (i.e. they are never 
individuated out of the class).

Any number which truly exists has at least one description, either of the 
type a) "the number which is manifest as . . ." or b) "the number which is 
generated by . . . ". 

Classicists seem to want to insist that all of these potential numbers 
actually do exist -- so they can make statements like "There are uncountably 
many real numbers that no one can ever describe in any manner."  

I ask of them (and you) -- Show me just one.:-)








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein









Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

Hi,

   We keep going around and around because you keep dropping my distinction 
between two different cases . . . .


   The statement that "The cat is red" is undecidable by arithmetic because 
it can't even be defined in terms of the axioms of arithmetic (i.e. it has 
*meaning* outside of arithmetic).  You need to construct 
additions/extensions to arithmetic to even start to deal with it.


   The statement that "Pi is a normal number" is decidable by arithmetic 
because each of the terms has meaning in arithmetic (so it certainly can be 
disproved by counter-example).  It may not be deducible from the axioms but 
the meaning of the statement is contained within the axioms.
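
[For concreteness, the standard definition being leaned on here (an
editorial gloss, not part of the original message): a real number x is
normal to base 10 if every digit string s of length k occurs in its decimal
expansion with limiting frequency 10^{-k}, i.e.

    \lim_{n \to \infty}
    \frac{\#\{\text{occurrences of } s \text{ among the first } n \text{ digits of } x\}}{n}
    = 10^{-k}

"Normal" without qualification usually means normal to every base b >= 2;
whether pi is in fact normal in any base remains an open question.]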


   The first example is what you call a constructivist view.  The second 
example is what you call a classical view.  Which one I take is eminently 
context-dependent and you keep dropping the context.  If the meaning of the 
statement is contained within the system, it is decidable even if it is not 
deducible.  If the meaning is beyond the system, then it is not decidable 
because you can't even express what you're deciding.


   Mark


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 28, 2008 9:32 AM
Subject: Re: [agi] constructivist issues



Mark,

You assert that the extensions are judged on how well they reflect the 
world.


The extension currently under discussion is one that allows us to
prove the consistency of Arithmetic. So, it seems, you count that as
something observable in the world-- no mathematician has ever proved a
contradiction from the axioms of arithmetic, so they seem consistent.
If this is indeed what you are saying, then you are in line with the
classical view in this respect (and with my opinion).

But, if this is your view, I don't see how you can maintain the
constructivist assertion that Godelian statements are undecidable
because they are undefined by the axioms. It seems that, instead, you
are agreeing with the classical notion that there is in fact a truth
of the matter concerning Godelian statements, we're just unable to
deduce that truth from the axioms.

--Abram

On Tue, Oct 28, 2008 at 7:21 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

*That* is what I was asking about when I asked which side you fell on.


Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

The extensions are clearly judged on whether or not they accurately 
reflect
the empirical world *as currently known* -- so they aren't arbitrary in 
that

sense.

On the other hand, there may not be just a single set of extensions that
accurately reflect the world so I guess that you could say that choosing
among sets of extensions that both accurately reflect the world is
(necessarily) an arbitrary process since there is no additional 
information

to go on (though there are certainly heuristics like Occam's razor -- but
they are more about getting a usable or "more likely" to hold up under
future observations or more likely to be easily modified to match future
observations theory . . . .).

The world is real.  Our explanations and theories are constructed.  For 
any
complete system, you can take the classical approach but incompleteness 
(of
current information which then causes undecidability) ever forces you 
into

constructivism to create an ever-expanding series of shells of stronger
systems to explain those systems contained by them.











Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser
Abram,

I could agree with the statement that there are uncountably many *potential* 
numbers but I'm going to argue that any number that actually exists is 
eminently describable.

Take the set of all numbers that are defined far enough after the decimal point 
that they never accurately describe anything manifest in the physical universe 
and are never described or invoked by any entity in the physical universe 
(specifically including a method for the generation of that number).

Pi is clearly not in the set since a) it describes all sorts of ratios in the 
physical universe and b) there is a clear formula for generating successive 
approximations of it.
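
To make (b) concrete, here is a minimal sketch in Python using one such formula (the Nilakantha series -- an illustrative choice only; nothing in the argument depends on which formula you pick):

# Successive approximations of pi from the Nilakantha series:
#   pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
def pi_approximations(n_terms):
    approx = 3.0
    sign = 1.0
    for k in range(1, n_terms + 1):
        a = 2 * k
        approx += sign * 4.0 / (a * (a + 1) * (a + 2))
        sign = -sign
        yield approx          # each yielded value is a closer approximation of pi

for i, value in enumerate(pi_approximations(5), start=1):
    print(i, value)           # 3.1666..., 3.1333..., 3.1452..., ...

Each successive value individuates pi a little more tightly, which is all that (b) requires.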

My question is -- do these numbers really exist?  And, if so, by what 
definition of exist since my definition is meant to rule out any form of 
manifestation whether physical or as a concept.

Clearly these numbers have the potential to exist -- but it should be equally 
clear that they do not actually "exist" (i.e. they are never individuated out 
of the class).

Any number which truly exists has at least one description either of the type 
of a) the number which is manifest as or b) the number which is generated by. 

Classicists seem to want to insist that all of these potential numbers actually 
do exist -- so they can make statements like "There are uncountably many real 
numbers that no one can ever describe in any manner."  

I ask of them (and you) -- Show me just one.  :-)






Re: [agi] constructivist issues

2008-10-28 Thread Mark Waser

*That* is what I was asking about when I asked which side you fell on.

Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

The extensions are clearly judged on whether or not they accurately reflect 
the empirical world *as currently known* -- so they aren't arbitrary in that 
sense.


On the other hand, there may not be just a single set of extensions that 
accurately reflect the world so I guess that you could say that choosing 
among sets of extensions that both accurately reflect the world is 
(necessarily) an arbitrary process since there is no additional information 
to go on (though there are certainly heuristics like Occam's razor -- but 
they are more about getting a usable or "more likely" to hold up under 
future observations or more likely to be easily modified to match future 
observations theory . . . .).


The world is real.  Our explanations and theories are constructed.  For any 
complete system, you can take the classical approach but incompleteness (of 
current information which then causes undecidability) ever forces you into 
constructivism to create an ever-expanding series of shells of stronger 
systems to explain those systems contained by them.


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 27, 2008 5:43 PM
Subject: Re: [agi] constructivist issues


Mark,

Sorry, I accidentally called you "Mike" in the previous email!

Anyway, you said:

"Also, you seem to be ascribing arbitrariness to constructivism which
is emphatically not the case."

I didn't mean to ascribe arbitrariness to constructivism-- what I
meant was that constructivists would (as I understand it) ascribe
arbitrariness to extensions of arithmetic. A constructivist sees the
fact of the matter as undefined for undecidable statements, so adding
axioms that make them decidable is necessarily an arbitrary process.
The classical view, on the other hand, sees it as an attempt to
increase the amount of true information contained in the axioms-- so
there is a right and wrong.

*That* is what I was asking about when I asked which side you fell on.
Do you think such extensions are arbitrary, or do you think there is a
fact of the matter?

--Abram

On Mon, Oct 27, 2008 at 3:33 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

The number of possible descriptions is countable


I disagree.


if we were able to randomly pick a real number between 1 and 0, it would
be indescribable with probability 1.


If we were able to randomly pick a real number between 1 and 0, it would 
be

indescribable with probability *approaching* 1.


Which side do you fall on?


I still say that the sides are parts of the same coin.

In other words, we're proving arithmetic consistent only by adding to 
its
definition, which hardly counts. The classical viewpoint, of course, is 
that

the stronger system is actually correct. Its additional axioms are not
arbitrary. So, the proof reflects the truth.


What is the stronger system other than an addition?  And the viewpoint 
that

the stronger system is actually correct -- is that an assumption? a truth?
what?  (And how do you know?)

Also, you seem to be ascribing arbitrariness to constructivism which is
emphatically not the case.


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Monday, October 27, 2008 2:53 PM
Subject: Re: [agi] constructivist issues


Mark,

The number of possible descriptions is countable, while the number of
possible real numbers is uncountable. So, there are infinitely many
more real numbers that are individually indescribable, than
describable; so much so that if we were able to randomly pick a real
number between 1 and 0, it would be indescribable with probability 1.
I am getting this from Chaitin's book "Meta Math!".

"I believe that arithmetic is a formal and complete system.  I'm not a
constructivist where formal and complete systems are concerned (since
there is nothing more to construct)."

Oh, I believe there is some confusion here because of my use of the
word "arithmetic". I don't mean grade-school
addition/subtraction/multiplication/division. What I mean is the
axiomatic theory of numbers, which Godel showed to be incomplete if it
is consistent. Godel also proved that one of the incompletenesses in
arithmetic was that it could not prove its own consistency. Stronger
logical systems can and have proven its consistency, but any
particular logical system cannot prove its own consistency. It seems
to me that the constructivist viewpoint says, "The so-called stronger
system merely defines truth in more cases; but, we could just as
easily take the opposite definitions." In other words, we're proving
arithmetic consistent only by adding to its definition, which hardly
counts. The classical viewpoint, of course, is

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser

The number of possible descriptions is countable


I disagree.

if we were able to randomly pick a real number between 1 and 0, it would 
be indescribable with probability 1.


If we were able to randomly pick a real number between 1 and 0, it would be 
indescribable with probability *approaching* 1.



Which side do you fall on?


I still say that the sides are parts of the same coin.

In other words, we're proving arithmetic consistent only by adding to its 
definition, which hardly counts. The classical viewpoint, of course, is 
that the stronger system is actually correct. Its additional axioms are 
not arbitrary. So, the proof reflects the truth.


What is the stronger system other than an addition?  And the viewpoint that 
the stronger system is actually correct -- is that an assumption? a truth? 
what?  (And how do you know?)


Also, you seem to be ascribing arbitrariness to constructivism which is 
emphatically not the case.



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 27, 2008 2:53 PM
Subject: Re: [agi] constructivist issues


Mark,

The number of possible descriptions is countable, while the number of
possible real numbers is uncountable. So, there are infinitely many
more real numbers that are individually indescribable, than
describable; so much so that if we were able to randomly pick a real
number between 1 and 0, it would be indescribable with probability 1.
I am getting this from Chaitin's book "Meta Math!".
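
Spelled out (this is just the standard counting-plus-measure argument; $\Sigma$, $D$, and $\lambda$ below are only notation for it, not anything defined elsewhere in this thread): descriptions are finite strings over some finite alphabet $\Sigma$, so

$$|\{\text{descriptions}\}| \;\le\; |\Sigma^*| \;=\; \aleph_0, \qquad |[0,1]| \;=\; 2^{\aleph_0} \;>\; \aleph_0 .$$

Hence the describable reals in $[0,1]$ form a countable set $D$, any countable set has Lebesgue measure zero, and for a uniformly random $x \in [0,1]$

$$P(x \in D) \;=\; \lambda(D) \;=\; 0, \qquad P(x \text{ is indescribable}) \;=\; 1 .$$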

"I believe that arithmetic is a formal and complete system.  I'm not a
constructivist where formal and complete systems are concerned (since
there is nothing more to construct)."

Oh, I believe there is some confusion here because of my use of the
word "arithmetic". I don't mean grade-school
addition/subtraction/multiplication/division. What I mean is the
axiomatic theory of numbers, which Godel showed to be incomplete if it
is consistent. Godel also proved that one of the incompletenesses in
arithmetic was that it could not prove its own consistency. Stronger
logical systems can and have proven its consistency, but any
particular logical system cannot prove its own consistency. It seems
to me that the constructivist viewpoint says, "The so-called stronger
system merely defines truth in more cases; but, we could just as
easily take the opposite definitions." In other words, we're proving
arithmetic consistent only by adding to its definition, which hardly
counts. The classical viewpoint, of course, is that the stronger
system is actually correct. Its additional axioms are not arbitrary.
So, the proof reflects the truth.

Which side do you fall on?

--Abram

On Mon, Oct 27, 2008 at 1:03 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

I, being of the classical persuasion, believe that arithmetic is either
consistent or inconsistent. You, to the extent that you are a
constructivist, should say that the matter is undecidable and therefore
undefined.


I believe that arithmetic is a formal and complete system.  I'm not a
constructivist where formal and complete systems are concerned (since 
there

is nothing more to construct).

On the other hand, if you want to try to get into the "meaning" of
arithmetic . . . .

= = = = = = =


since the infinity of real numbers is larger than the infinity of
possible names/descriptions.


Huh?  The constructivist in me points out that via compound constructions
the infinity of possible names/descriptions is exponentially larger than 
the
infinity of real numbers.  You can reference *any* real number to the 
extent

that you can define it.  And yes, that is both a trick statement AND also
the crux of the matter at the same time -- you can't name pi as a sequence
of numbers but you certainly can define it by a description of what it is
and what it does and any description can also be said to be a name (or a
"true name" if you will :-).


If the Gödelian truths are unreachable because they are undefined, then
there is something *wrong* with the classical insistence that they are 
true

or false but we just don't know which.


They are undefined unless they are part of a formal and complete system. 
If
they are part of a formal and complete system, then they are defined but 
may

be indeterminable.  There is nothing *wrong* with the classical insistence
as long as it is applied to a limited domain (i.e. that of closed systems)
which is what you are doing.


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Monday, October 27, 2008 12:29 PM
Subject: Re: [agi] constructivist issues


Mark,

An example of people who would argue with the meaningfulness of
classical mathematics: there are some people who contest the concept
of real numbers. They cite things like the fact that the vast majority of real
numbers cannot even be named or referenced in any 

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
I, being of the classical persuasion, believe that arithmetic is either 
consistent or inconsistent. You, to the extent that you are a 
constructivist, should say that the matter is undecidable and therefore 
undefined.


I believe that arithmetic is a formal and complete system.  I'm not a 
constructivist where formal and complete systems are concerned (since there 
is nothing more to construct).


On the other hand, if you want to try to get into the "meaning" of 
arithmetic . . . .


= = = = = = =

since the infinity of real numbers is larger than the infinity of 
possible names/descriptions.


Huh?  The constructivist in me points out that via compound constructions 
the infinity of possible names/descriptions is exponentially larger than the 
infinity of real numbers.  You can reference *any* real number to the extent 
that you can define it.  And yes, that is both a trick statement AND also 
the crux of the matter at the same time -- you can't name pi as a sequence 
of numbers but you certainly can define it by a description of what it is 
and what it does and any description can also be said to be a name (or a 
"true name" if you will :-).


If the Gödelian truths are unreachable because they are undefined, then 
there is something *wrong* with the classical insistence that they are 
true or false but we just don't know which.


They are undefined unless they are part of a formal and complete system.  If 
they are part of a formal and complete system, then they are defined but may 
be indeterminable.  There is nothing *wrong* with the classical insistence 
as long as it is applied to a limited domain (i.e. that of closed systems) 
which is what you are doing.



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 27, 2008 12:29 PM
Subject: Re: [agi] constructivist issues


Mark,

An example of people who would argue with the meaningfulness of
classical mathematics: there are some people who contest the concept
of real numbers. The cite things like that the vast majority of real
numbers cannot even be named or referenced in any way as individuals,
since the infinity of real numbers is larger than the infinity of
possible names/descriptions.

"OK.  But I'm not sure where this is going . . . . I agree with all
that you're saying but can't see where/how it's supposed to address/go
back into my domain model ;-)"

Well, you already agreed that classical mathematics is meaningful.
But, you also asserted that you are a constructivist where meaning is
concerned, and therefore collapse Godel's and Tarski's theorems. I do
not think you can consistently assert both! If the Godelian truths are
unreachable because they are undefined, then there is something
*wrong* with the classical insistence that they are true or false but
we just don't know which.

To take a concrete example: One of these truths that suffers from
Godelian incompleteness is the consistency of arithmetic. I, being of
the classical persuasion, believe that arithmetic is either consistent
or inconsistent. You, to the extent that you are a constructivist,
should say that the matter is undecidable and therefore undefined.
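
(In symbols, with PA for first-order Peano Arithmetic and Con(PA) for its arithmetized consistency statement -- standard notation, not something defined in this thread -- Godel's second incompleteness theorem says

$$\text{if } PA \text{ is consistent, then } PA \nvdash \mathrm{Con}(PA),$$

while stronger systems do settle the question: for instance $ZFC \vdash \mathrm{Con}(PA)$, and Gentzen's proof needs only primitive recursive arithmetic plus induction up to the ordinal $\varepsilon_0$.)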

--Abram

On Mon, Oct 27, 2008 at 12:04 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Hi,

  It's interesting (and useful) that you didn't use the word meaning until
your last paragraph.


I'm not sure what you mean when you say that meaning is constructed,
yet truth is absolute. Could you clarify?


  Hmmm.  What if I say that meaning is your domain model and that truth is
whether that domain model (or rather, a given proposition phrased in the
semantics of the domain model) accurately represents the empirical world?

= = = = = = = =


I'm a classicalist in the sense that I think classical mathematics needs
to be accounted for in a theory of meaning.


Would *anyone* argue with this?  Is there anyone (with a clue ;-) who 
isn't

a classicist in this sense?


 I am also a classicalist in the sense that I think that the
mathematically true is a proper subset of the mathematically provable, 
so

that Gödelian truths are not undefined, just unprovable.


OK.  But that is talking about a formal (and complete -- though still
infinite) system.


I might be called a constructivist in the sense that I think there needs
to be a tight, well-defined connection between syntax and semantics...


Agreed but you seem to be overlooking the question of "Syntax and 
semantics

of what?"


The semantics of an AGI's internal logic needs to follow from its
manipulation rules.


Absolutely.


But, partly because I accept the


implementability of super-recursive algorithms, I think there is a
chance to allow at least *some* classical mathematics into the
picture. And, since I believe in the computational nature of the mind,
I think that any classical mathematics that

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
Cool.  Thank you for the assist.

I think that math has the distinction that it is a closed formal system and 
that therefore people segregate it from the open mess that science has to deal 
with (though arguably the scientific method applies).

Art seems to be that which deals with an even bigger open mess (since it always 
includes humans in the system ;-) and which is even less codified though it 
seems to frequently want to migrate to be science.

= = = = = = 

Actually, in a way, it almost seems as if you want a spectrum running from  
MATH  through  SCIENCE  continuing through  ART  to  ??DISORDER??
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 27, 2008 12:07 PM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  I think you're converging on better and better wording ... however, I think 
somehow you do need to account for the differences between

  -- science

  on the one hand and

  -- math
  -- art

  etc. on the other hand, which also involve group learning and codification 
and communication of results, etc. ... but are different from science.  I'm not 
sure the best way to formalize the difference in general, in a way that 
encompasses all the cases of science and is descriptive rather than normative 
... but I haven't thought about it much and have other stuff to do...

  ben


  On Mon, Oct 27, 2008 at 8:40 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> You've now changed your statement to "science = optimal formalized group 
learning" ... I'm not sure if this is intended as descriptive or prescriptive

Our previous e-mails about the sociology of science should have made it 
quite clear that it's not descriptive  ;-)  Of course it was intended to be 
prescriptive.  (Though, on second thought, if you removed the "optimal", maybe 
it could be descriptive -- what do you think?)

And yes, I'm constantly changing the phrasing of my statement in an attempt 
to get my intended meaning across.  This is going to loop back to 
my belief that the degree to which you are a general intelligence is the degree 
to which you're a(n optimal) scientist.  So I haven't really changed my basic 
point at all (although admittedly, I've certainly refined it some -- which is 
my whole purpose in having this discussion :-)

>> Also, learning could be learning about mathematics, which we don't 
normally think of as being science ...

True.  But I would argue that that is a shortcoming of our thinking.  This 
is similar to your previous cosmology example.  I'm including both under the 
umbrella of what you'd clearly be happier phrasing as "a system of thought 
intended to guide a group in learning about . . . ."

What would you say if I defined science as "a system of thought intended to 
guide a group in learning about the empirical world" and a scientist simply as 
someone who employs science (i.e. that system of thought).

I would also tend to think of "system of thought" as being interchangeable 
with "process" and/or "method".

>> If you said "A scientific research programme is a system of thought 
intended to guide a group in learning about some aspect of the empirical world 
(as understood by this group) and formalizing their conclusions and methods" I 
wouldn't complain as much...

So you like SCIENCE PROGRAM = SYSTEM FOR GROUP LEARNING + FORMALIZATION OF 
RESULTS but you don't like SCIENCE = LEARNING + TRANSMISSION (which is the 
individual case) OR SCIENCE = GROUP LEARNING (which probably should have + 
CODIFICATION added to assist the learning of future group members).

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 27, 2008 10:55 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is 
no AGI



  No, it's really just that I've been spending too much time on this 
mailing list.  I've got an AGI to build, as well as too many other 
responsibilities ;-p

  You've now changed your statement to "science = optimal formalized group 
learning" ... I'm not sure if this is intended as descriptive or prescriptive

  Obviously, science as practiced is not optimal and has many cultural 
properties besides those implied by being "group learning"

  Also, learning could be learning about mathematics, which we don't 
normally think of as being science ...

  If you said "A scientific research programme is a system of thought 
intended to guide a group in learning about some aspect of the empirical world 
(as understood by this group) and formalizing their conclusions and methods" I 
wouldn't complain as much...

  ben




Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser

Hi,

   It's interesting (and useful) that you didn't use the word meaning until 
your last paragraph.



I'm not sure what you mean when you say that meaning is constructed,
yet truth is absolute. Could you clarify?


   Hmmm.  What if I say that meaning is your domain model and that truth is 
whether that domain model (or rather, a given proposition phrased in the 
semantics of the domain model) accurately represents the empirical world?


= = = = = = = =
I'm a classicalist in the sense that I think classical mathematics needs 
to be accounted for in a theory of meaning.


Would *anyone* argue with this?  Is there anyone (with a clue ;-) who isn't 
a classicist in this sense?


 I am also a classicalist in the sense that I think that the 
mathematically true is a proper subset of the mathematically provable, so 
that Gödelian truths are not undefined, just unprovable.


OK.  But that is talking about a formal (and complete -- though still 
infinite) system.


I might be called a constructivist in the sense that I think there needs 
to be a tight, well-defined connection between syntax and semantics...


Agreed but you seem to be overlooking the question of "Syntax and semantics 
of what?"


The semantics of an AGI's internal logic needs to follow from its 
manipulation rules.


Absolutely.


But, partly because I accept the

implementability of super-recursive algorithms, I think there is a
chance to allow at least *some* classical mathematics into the
picture. And, since I believe in the computational nature of the mind,
I think that any classical mathematics that *can't* fit into the
picture is literally nonsense! So, since I don't feel like much of
math is nonsense, I won't be satisfied until I've fit most of it in.

OK.  But I'm not sure where this is going . . . . I agree with all that 
you're saying but can't see where/how it's supposed to address/go back into 
my domain model ;-)




- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Monday, October 27, 2008 11:05 AM
Subject: Re: [agi] constructivist issues


Mark,

I'm a classicalist in the sense that I think classical mathematics
needs to be accounted for in a theory of meaning. (Ben seems to think
that a constructivist can do this by equating classical mathematics
with axiom-systems-of-classical-mathematics, but I am unconvinced.) I
am also a classicalist in the sense that I think that the
mathematically true is a proper subset of the mathematically provable,
so that Godelian truths are not undefined, just unprovable.

I might be called a constructivist in the sense that I think there
needs to be a tight, well-defined connection between syntax and
semantics... The semantics of an AGI's internal logic needs to follow
from its manipulation rules. But, partly because I accept the
implementability of super-recursive algorithms, I think there is a
chance to allow at least *some* classical mathematics into the
picture. And, since I believe in the computational nature of the mind,
I think that any classical mathematics that *can't* fit into the
picture is literally nonsense! So, since I don't feel like much of
math is nonsense, I won't be satisfied until I've fit most of it in.

I'm not sure what you mean when you say that meaning is constructed,
yet truth is absolute. Could you clarify?

--Abram

On Mon, Oct 27, 2008 at 10:27 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
Hmmm.  I think that some of our miscommunication might have been due to 
the

fact that you seem to be talking about two things while I think that I'm
talking about a third . . . .

I believe that *meaning* is constructed.
I believe that truth is absolute (within a given context) and is a proper
subset of meaning.
I believe that proof is constructed and is a proper subset of truth (and
therefore a proper subset of meaning as well).

So, fundamentally, I *am* a constructivist as far as meaning is concerned
and take Gödel's theorem to say that meaning is not completely defined or
definable.

Since I'm being a constructionist about meaning, it would seem that your
statement that


A constructivist would be justified in asserting the equivalence of
Gödel's incompleteness theorem and Tarski's undefinability theorem,


would mean that I was "correct" (or, at least, not wrong) in using Gödel's
theorem but probably not as clear as I could have been if I'd used Tarski
since an additional condition/assumption (constructivism) was required.


So, interchanging the two theorems is fully justifiable in some
intellectual circles! Just don't do it when non-constructivists are
around :).


I guess the question is . . . . How many people *aren't* constructivists
when it comes to meaning?  Actually, I get the impression that this 
mailing

list is seriously split . . . .

Where do 

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
>> You've now changed your statement to "science = optimal formalized group 
>> learning" ... I'm not sure if this is intended as descriptive or prescriptive

Our previous e-mails about the sociology of science should have made it quite 
clear that it's not descriptive  ;-)  Of course it was intended to be 
prescriptive.  (Though, on second thought, if you removed the "optimal", maybe 
it could be descriptive -- what do you think?)

And yes, I'm constantly changing the phrasing of my statement in an attempt to 
get my intended meaning across.  This is going to loop back to my 
belief that the degree to which you are a general intelligence is the degree to 
which you're a(n optimal) scientist.  So I haven't really changed my basic 
point at all (although admittedly, I've certainly refined it some -- which is 
my whole purpose in having this discussion :-)

>> Also, learning could be learning about mathematics, which we don't normally 
>> think of as being science ...

True.  But I would argue that that is a shortcoming of our thinking.  This is 
similar to your previous cosmology example.  I'm including both under the 
umbrella of what you'd clearly be happier phrasing as "a system of thought 
intended to guide a group in learning about . . . ."

What would you say if I defined science as "a system of thought intended to 
guide a group in learning about the empirical world" and a scientist simply as 
someone who employs science (i.e. that system of thought).

I would also tend to think of "system of thought" as being interchangeable with 
"process" and/or "method".

>> If you said "A scientific research programme is a system of thought intended 
>> to guide a group in learning about some aspect of the empirical world (as 
>> understood by this group) and formalizing their conclusions and methods" I 
>> wouldn't complain as much...

So you like SCIENCE PROGRAM = SYSTEM FOR GROUP LEARNING + FORMALIZATION OF 
RESULTS but you don't like SCIENCE = LEARNING + TRANSMISSION (which is the 
individual case) OR SCIENCE = GROUP LEARNING (which probably should have + 
CODIFICATION added to assist the learning of future group members).

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Monday, October 27, 2008 10:55 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  No, it's really just that I've been spending too much time on this mailing 
list.  I've got an AGI to build, as well as too many other responsibilities ;-p

  You've now changed your statement to "science = optimal formalized group 
learning" ... I'm not sure if this is intended as descriptive or prescriptive

  Obviously, science as practiced is not optimal and has many cultural 
properties besides those implied by being "group learning"

  Also, learning could be learning about mathematics, which we don't normally 
think of as being science ...

  If you said "A scientific research programme is a system of thought intended 
to guide a group in learning about some aspect of the empirical world (as 
understood by this group) and formalizing their conclusions and methods" I 
wouldn't complain as much...

  ben



  On Mon, Oct 27, 2008 at 3:13 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

Or, in other words, you can't even start to draw a clear distinction in a 
small number of words.  That would argue that maybe those equalities aren't so 
silly after all.

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 7:38 PM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is 
no AGI



  Sorry, I'm just going to have to choose to be ignored on this topic ;-) 
... I have too much AGI stuff to do to be spending so much time chatting on 
mailing lists ... and I've already published my thoughts on philosophy of 
science in The Hidden Pattern and online...

  ben g


  On Sun, Oct 26, 2008 at 9:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> These equations seem silly to me ... obviously science is much more 
than that, as Mark should know as he has studied philosophy of science 
extensively

Mark is looking for well-defined distinctions.  Claiming that science 
is "obviously" much more than  is a non-sequitor.  What does science 
include that learning does not?  Please be specific or you *should* be ignored.

The transmission or communication of results (or, as Matt puts it, 
language) is one necessary addition.  Do you wish to provide another or do you 
just want to say that there must be one without being able to come up with one?

Mark c

Re: [agi] constructivist issues

2008-10-27 Thread Mark Waser
Hmmm.  I think that some of our miscommunication might have been due to the 
fact that you seem to be talking about two things while I think that I'm 
talking about a third . . . .


I believe that *meaning* is constructed.
I believe that truth is absolute (within a given context) and is a proper 
subset of meaning.
I believe that proof is constructed and is a proper subset of truth (and 
therefore a proper subset of meaning as well).


So, fundamentally, I *am* a constructivist as far as meaning is concerned 
and take Gödel's theorem to say that meaning is not completely defined or 
definable.


Since I'm being a constructionist about meaning, it would seem that your 
statement that

A constructivist would be justified in asserting the equivalence of
Gödel's incompleteness theorem and Tarski's undefinability theorem,
would mean that I was "correct" (or, at least, not wrong) in using Gödel's 
theorem but probably not as clear as I could have been if I'd used Tarski 
since an additional condition/assumption (constructivism) was required.



So, interchanging the two theorems is fully justifiable in some
intellectual circles! Just don't do it when non-constructivists are
around :).


I guess the question is . . . . How many people *aren't* constructivists 
when it comes to meaning?  Actually, I get the impression that this mailing 
list is seriously split . . . .


Where do you fall on the constructivism of meaning?

- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, October 26, 2008 10:00 PM
Subject: Re: [agi] constructivist issues



Mark,

After some thought...

A constructivist would be justified in asserting the equivalence of
Godel's incompleteness theorem and Tarski's undefinability theorem,
based on the idea that truth is constructable truth. Where classical
logicians take Godels theorem to prove that provability cannot equal
truth, constructivists can take it to show that provability is not
completely defined or definable (and neither is truth, since they are
the same).

So, interchanging the two theorems is fully justifiable in some
intellectual circles! Just don't do it when non-constructivists are
around :).

--Abram

On Sat, Oct 25, 2008 at 6:18 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
OK.  A good explanation and I stand corrected and more educated.  Thank 
you.


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Saturday, October 25, 2008 6:06 PM
Subject: Re: [agi] constructivist issues



Mark,

Yes.

I wouldn't normally be so picky, but Godel's theorem *really* gets
misused.

Using Godel's theorem to say made it sound (to me) as if you have a
very fundamental confusion. You were using a theorem about the
incompleteness of proof to talk about the incompleteness of truth, so
it sounded like you thought "logically true" and "logically provable"
were equivalent, which is of course the *opposite* of what Godel
proved.

Intuitively, Godel's theorem says "If a logic can talk about number
theory, it can't have a complete system of proof." Tarski's says, "If
a logic can talk about number theory, it can't talk about its own
notion of truth." Both theorems rely on the Diagonal Lemma, which
states "If a logic can talk about number theory, it can talk about its
own proof method." So, Tarski's theorem immediately implies Godel's
theorem: if a logic can talk about its own notion of proof, but not
its own notion of truth, then the two can't be equivalent!
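
Schematically (here $\ulcorner\cdot\urcorner$ is a Godel numbering and $\mathrm{Prov}$, $\mathrm{True}$ are just names for the relevant predicates -- notation for this sketch, not something from the thread):

$$\text{Diagonal Lemma: for any formula } \varphi(x) \text{ there is a sentence } S \text{ with } T \vdash S \leftrightarrow \varphi(\ulcorner S \urcorner).$$

Taking $\varphi(x) = \neg\mathrm{Prov}(x)$ yields the Godel sentence $G$ with $T \vdash G \leftrightarrow \neg\mathrm{Prov}(\ulcorner G \urcorner)$; if $T$ is consistent, $G$ is unprovable (and, on the usual classical reading, true).  Taking $\varphi(x) = \neg\mathrm{True}(x)$ yields a liar sentence, so no predicate satisfying $T \vdash \mathrm{True}(\ulcorner \psi \urcorner) \leftrightarrow \psi$ for every $\psi$ can be definable in $T$.  Since $\mathrm{Prov}$ *is* definable, provability and truth cannot coincide.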

So, since Godel's theorem follows so closely from Tarski's (even
though Tarski's came later), it is better to invoke Tarski's by
default if you aren't sure which one applies.

--Abram

On Sat, Oct 25, 2008 at 4:22 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:


So you're saying that if I switch to using Tarski's theory (which I
believe
is fundamentally just a very slightly different aspect of the same
critical
concept -- but unfortunately much less well-known and therefore less
powerful as an explanation) that you'll agree with me?

That seems akin to picayune arguments over phrasing when trying to 
simply

reach general broad agreement . . . . (or am I misinterpreting?)

- Original Message - From: "Abram Demski" 
<[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 5:29 PM
Subject: Re: [agi] constructivist issues












Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Mark Waser
Or, in other words, you can't even start to draw a clear distinction in a small 
number of words.  That would argue that maybe those equalities aren't so silly 
after all.

- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 7:38 PM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  Sorry, I'm just going to have to choose to be ignored on this topic ;-) ... I 
have too much AGI stuff to do to be spending so much time chatting on mailing 
lists ... and I've already published my thoughts on philosophy of science in 
The Hidden Pattern and online...

  ben g


  On Sun, Oct 26, 2008 at 9:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> These equations seem silly to me ... obviously science is much more than 
that, as Mark should know as he has studied philosophy of science extensively

Mark is looking for well-defined distinctions.  Claiming that science is 
"obviously" much more than  is a non-sequitor.  What does science 
include that learning does not?  Please be specific or you *should* be ignored.

The transmission or communication of results (or, as Matt puts it, 
language) is one necessary addition.  Do you wish to provide another or do you 
just want to say that there must be one without being able to come up with one?

Mark can still think of at least one other thing (which may be multiples 
depending upon how you look at it) but isn't comfortable that he has an optimal 
view of it so he's looking for other viewpoints/phrasings.

>> Cognitively, the precursor for science seems to be Piaget's formal stage 
of cognitive development.  If you have a community of minds that have reached 
the formal stage, then potentially they can develop the mental and social 
patterns corresponding to the practice of science.

So how is science different from optimal formalized group learning?  What's 
the distinction?



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 11:14 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is 
no AGI




  These equations seem silly to me ... obviously science is much more than 
that, as Mark should know as he has studied philosophy of science extensively

  Cognitively, the precursor for science seems to be Piaget's formal stage 
of cognitive development.  If you have a community of minds that have reached 
the formal stage, then potentially they can develop the mental and social 
patterns corresponding to the practice of science.

  -- Ben


  On Sun, Oct 26, 2008 at 8:08 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> Would it then be accurate to say SCIENCE = LEARNING +
> TRANSMISSION?
>
> Or, how about, SCIENCE = GROUP LEARNING?


Science = learning + language.

-- Matt Mahoney, [EMAIL PROTECTED]








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, 
butcher a hog, conn a ship, design a building, write a sonnet, balance 
accounts, build a wall, set a bone, comfort the dying, take orders, give 
orders, cooperate, act alone, solve equations, analyze a new problem, pitch 
manure, program a computer, cook a tasty meal, fight efficiently, die 
gallantly. Specialization is for insects."  -- Robert Heinlein








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Mark Waser
>> These equations seem silly to me ... obviously science is much more than 
>> that, as Mark should know as he has studied philosophy of science extensively

Mark is looking for well-defined distinctions.  Claiming that science is 
"obviously" much more than  is a non-sequitor.  What does science 
include that learning does not?  Please be specific or you *should* be ignored.

The transmission or communication of results (or, as Matt puts it, language) is 
one necessary addition.  Do you wish to provide another or do you just want to 
say that there must be one without being able to come up with one?

Mark can still think of at least one other thing (which may be multiples 
depending upon how you look at it) but isn't comfortable that he has an optimal 
view of it so he's looking for other viewpoints/phrasings.

>> Cognitively, the precursor for science seems to be Piaget's formal stage of 
>> cognitive development.  If you have a community of minds that have reached 
>> the formal stage, then potentially they can develop the mental and social 
>> patterns corresponding to the practice of science.

So how is science different from optimal formalized group learning?  What's the 
distinction?



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Sunday, October 26, 2008 11:14 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI



  These equations seem silly to me ... obviously science is much more than 
that, as Mark should know as he has studied philosophy of science extensively

  Cognitively, the precursor for science seems to be Piaget's formal stage of 
cognitive development.  If you have a community of minds that have reached the 
formal stage, then potentially they can develop the mental and social patterns 
corresponding to the practice of science.

  -- Ben


  On Sun, Oct 26, 2008 at 8:08 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> Would it then be accurate to say SCIENCE = LEARNING +
> TRANSMISSION?
>
> Or, how about, SCIENCE = GROUP LEARNING?


Science = learning + language.

-- Matt Mahoney, [EMAIL PROTECTED]








  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein






Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser
>> -- truly general AI, even assuming the universe is computable, is impossible 
>> for any finite system

Excellent.  Unfortunately, I personally missed (or have forgotten) how AIXI 
shows or proves this (as opposed to invoking some other form of incompleteness), 
unless it is merely because the universe itself is assumed to be infinite 
(which I do understand, but which then makes the argument rather pedestrian and 
less interesting).

>> The computability of the universe is something that can't really be proved, 
>> but I argue that it's an implicit assumption underlying the whole scientific 
>> method.

It seems to me (and I certainly can be wrong about this) that computability is 
frequently improperly conflated with consistency (though may be you want to 
argue that such a conflation isn't improper) and that the (actually explicit) 
assumption underlying the whole scientific method is that the same causes 
produce the same results.  Comments?

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Saturday, October 25, 2008 7:48 PM
  Subject: **SPAM** Re: AIXI (was Re: [agi] If your AGI can't learn to play 
chess it is no AGI)



  AIXI shows a couple interesting things...

  -- truly general AI, even assuming the universe is computable, is impossible 
for any finite system

  -- given any finite level L of general intelligence that one desires, there 
are some finite R, M so that you can create a computer with less than R 
processing speed and M memory capacity, so that the computer can achieve level 
L of general intelligence

  This doesn't tell you *anything* about how to make AGI in practice.  It does 
tell you that, in principle, creating AGI is a matter of *computational 
efficiency* ... assuming the universe is computable.
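
  For concreteness, the construction behind the first point is the AIXI decision rule (as I understand it from Hutter's formulation -- the symbols $U$, $q$, $a_i$, $o_i$, $r_i$, $\ell$ are his, not defined elsewhere in this thread):

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \,(r_k + \cdots + r_m) \sum_{q\,:\,U(q,\,a_{1..m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

  where $U$ is a universal Turing machine, $q$ ranges over *all* programs for $U$, and $\ell(q)$ is the length of $q$.  That inner sum over every program is what makes the agent incomputable, so any finite machine can only approximate it -- hence the "impossible for any finite system" part.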

  The computability of the universe is something that can't really be proved, 
but I argue that it's an implicit assumption underlying the whole scientific 
method.  If the universe can't be usefully modeled as computable then the 
whole methodology of gathering finite datasets of finite-precision data is 
fundamentally limited in what it can tell us about the universe ... which would 
really suck...

  -- Ben G

  -- Ben G


  On Sat, Oct 25, 2008 at 7:21 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> Ummm.  It seems like you were/are saying then that because
> AIXI makes an
> assumption limiting its own applicability/proof (that
> it requires that the
> environment be computable) and because AIXI can make some
> valid conclusions,
> that that "suggests" that AIXI's limiting
> assumptions are true of the
> universe.  That simply doesn't work, dude, unless you
> have a very loose
> inductive-type definition of "suggests" that is
> more suited for inference
> control than anything like a logical proof.

I am arguing by induction, not deduction:

If the universe is computable, then Occam's Razor holds.
Occam's Razor holds.
Therefore the universe is computable.

Of course, I have proved no such thing.

-- Matt Mahoney, [EMAIL PROTECTED]







  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein






Re: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no AGI)

2008-10-25 Thread Mark Waser

I am arguing by induction, not deduction:

If the universe is computable, then Occam's Razor holds.
Occam's Razor holds.
Therefore the universe is computable.

Of course, I have proved no such thing.


Yep.  That's a better summation of what I was trying to say . . . .

Except that I'd like to bring back my point that induction really is only 
suited for inference control and determining what should be 
evaluated/examined/proved . . . . NOT actually doing the evaluation/proving 
*with*.
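
Put formally (with H and E just labels for this example: H = "the universe is computable", E = "Occam's Razor holds"), as a deduction the quoted argument has the shape

$$H \rightarrow E, \quad E \;\;\therefore\;\; H,$$

which is affirming the consequent and deductively invalid; read as induction it is just a Bayesian update,

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;>\; P(H) \quad \text{whenever } P(E \mid H) > P(E),$$

i.e. the evidence raises the plausibility of H without proving it -- which is exactly the "guide what to examine, don't prove with it" role I mean.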


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 7:21 PM
Subject: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no 
AGI)




--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:


Ummm.  It seems like you were/are saying then that because
AIXI makes an
assumption limiting its own applicability/proof (that
it requires that the
environment be computable) and because AIXI can make some
valid conclusions,
that that "suggests" that AIXI's limiting
assumptions are true of the
universe.  That simply doesn't work, dude, unless you
have a very loose
inductive-type definition of "suggests" that is
more suited for inference
control than anything like a logical proof.


I am arguing by induction, not deduction:

If the universe is computable, then Occam's Razor holds.
Occam's Razor holds.
Therefore the universe is computable.

Of course, I have proved no such thing.

-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
>> People seem to debate programming languages and OS's endlessly, and this 
>> list is no exception.

Yes.  And like all other debates there are good points and bad points.  :-)

>> To make progress on AGI, you  just gotta make *some* reasonable choice and 
>> start building

Strongly agree.  Otherwise it's just empty theorizing.  But sometimes it's 
worth gathering up your learning from what you have and making a fresh start (a 
la Eric Raymond).  You may not be at that point yet . . . . you may be past 
that point since a lot of the stuff that OpenCog is depending upon seems to 
have been lost in the mists of time (to judge by some of the e-mails among team 
members).  The OpenCog documents are a great start (though it's too bad that 
some important stuff seems to have been lost before they were created and now 
remains to be rediscovered -- though that's pretty typical of *any* large 
long-term project)

>> there's no choice that's going to please everyone, since this stuff is so 
>> contentious...

On the other hand, there are projects where most of the people are pleased with 
them (or, at least, not displeased), and there are horrible projects.  You seem to be 
pretty much on the correct side, with many of your naysayers being of the variety 
that keeps you honest rather than ones really actively disagreeing with you.

Actually I don't debate language and OS endlessly -- indeed, I generally don't 
argue them at all if you truly understand what I'm arguing (i.e. platform which 
is distinct from either though it may be dependent upon or include both -- to 
its detriment)  -- but I do bring them up periodically (or rather respond to 
others bringing them up) just to keep people abreast of changing circumstances 
(admittedly, as I see/evaluate them).  I'd debate your coding on Windows 
comment (since I don't code on Windows even though Windows is the operating 
system that my computer is running) but I think we've reached the point where 
continuing to agree to disagree pending further developments is best.  :-)

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Saturday, October 25, 2008 6:38 PM
  Subject: **SPAM** Re: [agi] On programming languages






Strong agreement with what you say but then effective rejection as a valid 
point because language issues frequently are a total barrier to entry for 
people who might have been able to do the algorithms and structures and 
cognitive architecture.

I'll even go so far as to use myself as an example.  I can easily do C++ 
(since I've done so in the past) but all the baggage around it make me consider 
it not worth my while.  I certainly won't hesitate to use what is learned on 
that architecture but I'll be totally shocked if you aren't massively 
leap-frogged because of the inherent shortcomings of what you're trying to work 
with.


  Somewhat similarly, I've done coding on Windows before, but I dislike the 
operating system quite a lot, so in general I try to avoid any projects where I 
have to use it.   

  However, if I found some AGI project that I thought were more promising than 
OpenCog/Novamente on the level of algorithms, philosophy-of-mind and structures 
... and, egads, this project ran only on Windows ... I would certainly not 
hesitate to join that project, even though my feeling is that any serious 
large-scale software project based exclusively on Windows is going to be 
seriously impaired by its OS choice...

  In short, I just don't think these issues are **all that** important.  
They're important, but having the right AGI design is far, far more so.

  People seem to debate programming languages and OS's endlessly, and this list 
is no exception.  There are smart people on multiple sides of these debates.  
To make progress on AGI, you  just gotta make *some* reasonable choice and 
start building ... there's no choice that's going to please everyone, since 
this stuff is so contentious...

  -- Ben G






Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

Ah.  An excellent distinction . . . . Thank you.  Very helpful.

Would it then be accurate to say SCIENCE = LEARNING + TRANSMISSION?

Or, how about, SCIENCE = GROUP LEARNING?

- Original Message - 
From: "Russell Wallace" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 6:27 PM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no 
AGI




On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Anyone else want to take up the issue of whether there is a distinction
between competent scientific research and competent learning (whether or not
both are being done by a machine) and, if so, what that distinction is?


Science is about public knowledge. I can learn from personal
experience, but it's only science if I publish my results in such a
way that other people can repeat them.




Re: [agi] constructivist issues

2008-10-25 Thread Mark Waser

OK.  A good explanation and I stand corrected and more educated.  Thank you.

- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 6:06 PM
Subject: Re: [agi] constructivist issues



Mark,

Yes.

I wouldn't normally be so picky, but Godel's theorem *really* gets 
misused.


Using Godel's theorem to say that made it sound (to me) as if you had a
very fundamental confusion. You were using a theorem about the
incompleteness of proof to talk about the incompleteness of truth, so
it sounded like you thought "logically true" and "logically provable"
were equivalent, which is of course the *opposite* of what Godel
proved.

Intuitively, Godel's theorem says "If a logic can talk about number
theory, it can't have a complete system of proof." Tarski's says, "If
a logic can talk about number theory, it can't talk about its own
notion of truth." Both theorems rely on the Diagonal Lemma, which
states "If a logic can talk about number theory, it can talk about its
own proof method." So, Tarski's theorem immediately implies Godel's
theorem: if a logic can talk about its own notion of proof, but not
its own notion of truth, then the two can't be equivalent!

So, since Godel's theorem follows so closely from Tarski's (even
though Tarski's came later), it is better to invoke Tarski's by
default if you aren't sure which one applies.
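
In symbols (a compact LaTeX sketch of the statements being contrasted, not the
full formal versions; T is any consistent, effectively axiomatized theory that
can talk about arithmetic):

    \text{Diagonal lemma: for every formula } \psi(x) \text{ there is a sentence } \varphi \text{ with } T \vdash \varphi \leftrightarrow \psi(\ulcorner\varphi\urcorner)
    \text{Godel (take } \psi(x) = \lnot\mathrm{Prov}_T(x)\text{): under the usual consistency assumptions, } T \nvdash \varphi \text{ and } T \nvdash \lnot\varphi
    \text{Tarski: no formula } \mathrm{True}(x) \text{ in } T\text{'s language satisfies } T \vdash \varphi \leftrightarrow \mathrm{True}(\ulcorner\varphi\urcorner) \text{ for every sentence } \varphi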

--Abram

On Sat, Oct 25, 2008 at 4:22 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
So you're saying that if I switch to using Tarski's theory (which I believe
is fundamentally just a very slightly different aspect of the same critical
concept -- but unfortunately much less well-known and therefore less
powerful as an explanation) that you'll agree with me?

That seems akin to picayune arguments over phrasing when trying to simply
reach general broad agreement . . . . (or am I misinterpreting?)

- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Friday, October 24, 2008 5:29 PM
Subject: Re: [agi] constructivist issues







Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

So where is the difference

There is no difference.


Cool.  That's one vote.

Anyone else want to take up the issue of whether there is a distinction 
between competent scientific research and competent learning (whether or not 
both are being done by a machine) and, if so, what that distinction is?


Or how about if I'm bold and follow up with the question of whether there is 
a distinction between a machine (or other entity) that is capable of 
competent scientific research/competent generic learning and a general 
intelligence?


That's an interesting definition of general intelligence that could have an 
awful lot of power if it's acceptable . . . . .



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 5:59 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:


> Scientists choose experiments to maximize information
> gain. There is no
> reason that machine learning algorithms couldn't
> do this, but often they don't.

Heh.  I would say that scientists attempt to do this and
machine learning
algorithms should do it.

So where is the difference other than in the quality of
implementation (i.e.
"other than who performs it, of course").


There is no difference. I originally distinguished machine learning 
because all of the usual algorithms depend on minimizing the complexity of 
the hypothesis space. For example, we use neural networks with the minimum 
number of connections to learn the training data because we want to avoid 
over fitting. Likewise, decision trees and rule learning algorithms like 
RIPPER try to find the minimum number of rules that fit the data. I knew 
about clustering algorithms, but not why they worked. I learned all these 
different strategies for various algorithms in a machine learning course I 
took, but was unaware of the general principle and the reasoning behind it 
until I learned about AIXI.
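
A minimal Python sketch of the shared principle -- penalize hypothesis
complexity when choosing among models that fit the data.  The polynomial-degree
setup and the penalty constant are illustrative placeholders, not any of the
algorithms named above:

    import numpy as np

    def fit_error(degree, xs, ys):
        # Sum of squared residuals for a least-squares polynomial fit of this degree.
        coeffs = np.polyfit(xs, ys, degree)
        return float(((np.polyval(coeffs, xs) - np.asarray(ys)) ** 2).sum())

    def best_degree(xs, ys, max_degree=8, penalty=1.0):
        # MDL-flavored score: data misfit plus a charge per parameter, so a more
        # complex hypothesis only wins if it fits enough better to pay for itself.
        scores = {d: fit_error(d, xs, ys) + penalty * (d + 1)
                  for d in range(max_degree + 1)}
        return min(scores, key=scores.get)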


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Ummm.  It seems like you were/are saying then that because AIXI makes an 
assumption limiting its own applicability/proof (that it requires that the 
environment be computable) and because AIXI can make some valid conclusions, 
that that "suggests" that AIXI's limiting assumptions are true of the 
universe.  That simply doesn't work, dude, unless you have a very loose 
inductive-type definition of "suggests" that is more suited for inference 
control than anything like a logical proof.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 5:51 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> The fact that Occam's Razor works in the real world
> suggests that the
> physics of the universe is computable. Otherwise AIXI
> would not apply.

Hmmm.  I don't get this.  Occam's razor simply says
go with the simplest
explanation until forced to expand it and then only expand
it as necessary.

How does this suggest that the physics of the universe is
computable?

Or conversely, why and how would Occam's razor *not*
work in a universe
where the physics aren't computable.


The proof of AIXI assumes the environment is computable by a Turing 
machine (possibly with a random element). I realize this is not a proof 
that the universe is computable.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] On programming languages

2008-10-25 Thread Mark Waser

Surely a coherent reply to this assertion would involve the phrases
"superstitious", "ignorant" and "FUD"


So why don't you try to generate one to prove your guess?

Are you claiming that I'm superstitious and ignorant?  That I'm fearful and 
uncertain or trying to generate fearfulness and uncertainty?


Or are you just trying to win a perceived argument by innuendo since you 
don't have any competent response that you can defend?


This is an example of the worst of this mailing list.  Hey Ben, can you at 
least speak out against garbage like this?



- Original Message - 
From: "Eric Burton" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 5:41 PM
Subject: **SPAM** Re: [agi] On programming languages



I'll even go so far as to use myself as an example.  I can easily do C++
(since I've done so in the past), but all the baggage around it makes me
consider it not worth my while.  I certainly won't hesitate to use what is
learned on that architecture, but I'll be totally shocked if you aren't
massively leap-frogged because of the inherent shortcomings of what you're
trying to work with.


Surely a coherent reply to this assertion would involve the phrases
"superstitious", "ignorant" and "FUD"

Eric B




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

Which "faulty" reasoning step are you talking about?

You said that there is an alternative to ad hoc in optimal approximation.

My request is that you show that the optimal approximation isn't going to 
just be determined in an ad hoc fashion.


Your absurd strawman example of *using* a bad solution instead of a good one 
doesn't address Matt's point of how you *arrive at* a solution at all.


Are you sure that you know the meaning of the term "ad hoc"?

- Original Message - 
From: "Vladimir Nesov" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 5:32 PM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no 
AGI




On Sun, Oct 26, 2008 at 1:19 AM, Mark Waser <[EMAIL PROTECTED]> wrote:


You are now apparently declining to provide an algorithmic solution without
arguing that not doing so is a disproof of your statement.
Or, in other words, you are declining to prove that Matt is incorrect in
saying that we have no choice -- You're just simply repeating your
insistence that your now-unsupported point is valid.



This is tedious. I didn't try to prove that the conclusion is wrong, I
pointed to a faulty reasoning step by showing that in general that
reasoning step is wrong. If you need to find the best solution to
x*3=7, but you can only use integers, the perfect solution is
impossible, but it doesn't mean that we are justified in using x=3
that looks good enough, as x=2 is the best solution given limitations.
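
Worked out explicitly: |3*2 - 7| = 1 while |3*3 - 7| = 2, so

    \arg\min_{x \in \mathbb{Z}} |3x - 7| = 2

which is the sense in which x=2 is the optimal approximation under the
integer-only limitation.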

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Vladimir said:
> I pointed out only that it doesn't follow from AIXI that ad-hoc is justified.


Matt used a chain of logic that went as follows:


AIXI says that a perfect solution is not computable. However, a very
general principle of both scientific research and machine learning is
to favor simple hypotheses over complex ones. AIXI justifies these
practices in a formal way. It also says we can stop looking for a universal
solution, which I think is important. It justifies our current ad-hoc
approach to problem solving -- we have no choice.


Or, in summary, ad hoc is justified because we have no choice.

You claimed that we had a choice *BECAUSE* optimal approximation is an 
alternative to ad hoc.


I then asked
So what is an optimal approximation under uncertainty?  How do you know when
you've gotten there?

and said:
If you don't believe in ad-hoc then you must have an algorithmic solution . . . .


You are now apparently declining to provide an algorithmic solution without 
arguing that not doing so is a disproof of your statement.
Or, in other words, you are declining to prove that Matt is incorrect in 
saying that we have no choice -- You're just simply repeating your 
insistence that your now-unsupported point is valid.







Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
Scientists choose experiments to maximize information gain. There is no 
reason that machine learning algorithms couldn't do this, but often they 
don't.


Heh.  I would say that scientists attempt to do this and machine learning 
algorithms should do it.


So where is the difference other than in the quality of implementation (i.e. 
"other than who performs it, of course").


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 1:41 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:


> AIXI says that a perfect solution is not computable. However, a very
> general principle of both scientific research and machine learning is 
> to
> favor simple hypotheses over complex ones. AIXI justifies these 
> practices

> in a formal way. It also says we can stop looking for a universal
> solution, which I think is important. It justifies our current ad-hoc
> approach to problem solving -- we have no choice.

Excellent.  Thank you.  Another good point to be pinned
(since a number of  people frequently go around and around on it).

Is there anything else that it tells us that is useful and
not a  distraction?


The fact that Occam's Razor works in the real world suggests that the 
physics of the universe is computable. Otherwise AIXI would not apply.



- - - - - - - - - - - - - -
Also, since you invoked the two in the same sentence as if
they were  different things . . . .

What is the distinction between scientific research and machine learning
(other than who performs it, of course).  Or, re-phrased, what is the
difference between a machine doing scientific research and
a machine that is  simply learning?




Scientists choose experiments to maximize information gain. There is no 
reason that machine learning algorithms couldn't do this, but often they 
don't.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
The fact that Occam's Razor works in the real world suggests that the 
physics of the universe is computable. Otherwise AIXI would not apply.


Hmmm.  I don't get this.  Occam's razor simply says go with the simplest 
explanation until forced to expand it and then only expand it as necessary.


How does this suggest that the physics of the universe is computable?

Or conversely, why and how would Occam's razor *not* work in a universe 
where the physics aren't computable.
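
One hedged way to unpack the claim: in Solomonoff induction (the inductive core
of AIXI), a hypothesis is a program p for a fixed universal prefix machine U,
and the prior weight assigned to data x is

    M(x) = \sum_{p\,:\,U(p)=x*} 2^{-|p|}

so shorter programs carry exponentially more weight -- Occam's razor is what
falls out *if* you assume the data source is computable.  Reading the
implication backwards ("the razor keeps working, therefore the computability
assumption behind it is probably right") is induction rather than deduction,
which is presumably why the word used is "suggests" and not "proves".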


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, October 25, 2008 1:41 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Sat, 10/25/08, Mark Waser <[EMAIL PROTECTED]> wrote:


> AIXI says that a perfect solution is not computable. However, a very
> general principle of both scientific research and machine learning is 
> to
> favor simple hypotheses over complex ones. AIXI justifies these 
> practices

> in a formal way. It also says we can stop looking for a universal
> solution, which I think is important. It justifies our current ad-hoc
> approach to problem solving -- we have no choice.

Excellent.  Thank you.  Another good point to be pinned
(since a number of  people frequently go around and around on it).

Is there anything else that it tells us that is useful and
not a  distraction?


The fact that Occam's Razor works in the real world suggests that the 
physics of the universe is computable. Otherwise AIXI would not apply.



- - - - - - - - - - - - - -
Also, since you invoked the two in the same sentence as if
they were  different things . . . .

What is the distinction between scientific research and machine learning
(other than who performs it, of course).  Or, re-phrased, what is the
difference between a machine doing scientific research and
a machine that is  simply learning?




Scientists choose experiments to maximize information gain. There is no 
reason that machine learning algorithms couldn't do this, but often they 
don't.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] On programming languages

2008-10-25 Thread Mark Waser
>> Anyway language issues are just not the main problem in creating AGI.  
>> Getting the algorithms and structures and cognitive architecture right are 
>> dramatically more important.

Strong agreement with what you say, but then effective rejection of it as a valid 
point, because language issues frequently are a total barrier to entry for 
people who might otherwise have been able to do the algorithms and structures and 
cognitive architecture.

I'll even go so far as to use myself as an example.  I can easily do C++ (since 
I've done so in the past), but all the baggage around it makes me consider it not 
worth my while.  I certainly won't hesitate to use what is learned on that 
architecture, but I'll be totally shocked if you aren't massively leap-frogged 
because of the inherent shortcomings of what you're trying to work with.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 7:40 PM
  Subject: **SPAM** Re: [agi] On programming languages



  Mark,

  In OpenCog we use all sorts of libraries for all sorts of things, of course.  
 Like everyone else we try to avoid reinventing the wheel.  We nearly always 
avoid coding our own data structures, using either Boost or STL stuff, or 
third-party stuff such as the vtree library that is the basis of PLN and MOSES 
libraries (soon to be replaced with Moshe's superior variant treetree, though 
;-).

  The peeve you have seems to be with the Atomspace, which is custom code for 
managing the Atom knowledge base ... but this is one piece of code that was 
written in 2001 and works and has not consumed a significant percentage of the 
time of the project.   This particular object seemed so central to the system 
and so performance and memory-usage critical that it seemed worthwhile to 
create it in a custom way.  But even if this judgment was wrong (and I'm not 
saying it was) it does not represent a particularly large impact on the project.

  The main problem I have seen with using C++ for OpenCog is the large barrier 
to entry.  Not that many programmers are really good at C++.  But LISP has the 
same problem.  For ease of entry I'd probably choose Java, I guess ... or C# if 
Mono were better.

  Of course, C++ being a complex language there are plusses and minuses to 
various choices within it.  We've made really good use of the power afforded by 
templates, but it's also true that debugging complex template constructs can be 
a bitch.

  Anyway language issues are just not the main problem in creating AGI.  
Getting the algorithms and structures and cognitive architecture right are 
dramatically more important.

  Ben G




  On Fri, Oct 24, 2008 at 3:51 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> Relatively a small amount of code is my own creation, and the libraries 
I used, e.g. Sesame, Cglib, are well maintained.

Steve is a man after my own heart.  Grab the available solid 
infrastructure/libraries and build on top of it/them.

To me, it's all a question of the size and coherence of the communities 
building and maintaining the infrastructure.  My personal *best guess* is that 
the Windows community is more cohesive and therefore the rate of interoperable 
infrastructure is growing faster.  It's even clearer that *nix started with a 
big lead.  Currently I'd still say that which is best to use for any given 
project depends upon the project timeline, your comfort factor, whether or not 
you're willing to re-write and/or port, etc., etc. -- but I'm also increasingly 
of the *opinion* that the balance is starting to swing and swing hard . . . . 
(but I'm not really willing to defend that *opinion* against entrenched 
resistance -- merely to suggest and educate to those who don't know all of the 
things that are now available "out-of-the-box").

The only people that I mean to criticize are those who are attempting to do 
everything themselves and are re-inventing the same things that many others are 
doing and continue to do . . . 
  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 1:42 PM
  Subject: **SPAM** Re: [agi] On programming languages


  Hi Mark,

  I readily concede that .Net is superior to Java out-of-the box with 
respect to reflection and metadata support as you say.  I spent my first 
project year creating three successive versions of a Java persistence framework 
for an RDF quad store using third party libraries for these features.  Now I am 
completely satisfied with respect to these capabilities.  Relatively a small 
amount of code is my own creation, and the libraries I used, e.g. Sesame, 
Cglib, are well maintained.

  -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3

Re: [agi] constructivist issues

2008-10-25 Thread Mark Waser
So you're saying that if I switch to using Tarski's theory (which I believe 
is fundamentally just a very slightly different aspect of the same critical 
concept -- but unfortunately much less well-known and therefore less 
powerful as an explanation) that you'll agree with me?


That seems akin to picayune arguments over phrasing when trying to simply 
reach general broad agreement . . . . (or am I misinterpreting?)


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 5:29 PM
Subject: Re: [agi] constructivist issues



Mike,

"Personally, I always have trouble separating out Godel and Tarski as
they are obviously both facets of the same underlying principles."

This is essentially what I'm complaining about. If you had used
Tarski's theorem to begin with, I wouldn't be bugging you :).

--Abram

On Fri, Oct 24, 2008 at 12:58 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

I'm making the point "natural language is incompletely defined" for
you, but *not* the point "natural language suffers from Godelian
incompleteness", unless you specify what concept of "proof" applies to
natural language.


I'm back to being lost I think.  You agree that natural language is
incompletely defined.  Cool.  My saying that natural language suffers from
Godelian incompleteness merely adds that it *can't* be defined.  Do you mean
to say that natural languages *can* be completely defined?  Or are you
arguing that I can't *prove* that they can't be defined?  If it is the last,
then that's like saying that Godel's theorem can't prove itself -- which is
exactly the point to what Godel's theorem says . . . .


Have you heard of Tarski's undefinability theorem? It is relevant to
this discussion.
http://en.wikipedia.org/wiki/Indefinability_theory_of_truth


Yes.  In fact, the restatement of Tarski's theory as "No sufficiently
powerful language is strongly-semantically-self-representational" also
fundamentally says that I can't prove in natural language what you're asking
me to prove about natural language.

Personally, I always have trouble separating out Godel and Tarski as they
are obviously both facets of the same underlying principles.

I'm still not sure of what you're getting at.  If it's a "proof", then Godel
says I can't give it to you.  If it's something else, then I'm not getting it.


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Friday, October 24, 2008 11:31 AM
Subject: Re: [agi] constructivist issues



Mark,

"It makes sense but I'm arguing that you're making my point for me . . .
."

I'm making the point "natural language is incompletely defined" for
you, but *not* the point "natural language suffers from Godelian
incompleteness", unless you specify what concept of "proof" applies to
natural language.

"It emphatically does *not* tell us anything about "any approach that
can be implemented on normal computers" and this is where all the
people who insist that "because computers operate algorithmically,
they will never achieve true general intelligence" are going wrong."

It tells us that any approach that is implementable on a normal
computer will not always be able to come up with correct answers to
all halting-problem questions (along with other problems that suffer
from incompleteness).

"You are correct in saying that Godel's theory has been improperly
overused and abused over the years but my point was merely that AGI is
Godellian Incomplete, natural language is Godellian Incomplete, "

Specify "truth" and "proof" in these domains before applying the
theorem, please. For "agi" I am OK, since "X is provable" would mean
"the AGI will come to believe X", and "X is true" would mean something
close to what it intuitively means. But for natural language? "Natural
language will come to believe X" makes no sense, so it can't be our
definition of proof...

Really, it is a small objection, and I'm only making it because I
don't want the theorem abused. You could fix your statement just by
saying "any proof system we might want to provide" will be incomplete
for "any well-defined subset of natural language semantics that is
large enough to talk fully about numbers". Doing this just seems
pointless, because the real point you are trying to make is that the
semantics is ill-defined in general, *not* that some hypothetical
proof system is incomplete.

"and effectively AGI-Complete most probably pretty much exactly means
Godellian-Incomplete. (Yes, that is a radically new phrasing and not
necessarily quite

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser

No, it doesn't justify ad-hoc, even when perfect solution is
impossible, you could still have an optimal approximation under given
limitations.


So what is an optimal approximation under uncertainty?  How do you know when 
you've gotten there?


If you don't believe in ad-hoc then you must have an algorithmic solution . 
. . .


Numerous scientific studies show that humans frequently under- and 
over-think problems and data collection.  Tell us your solution for fixing 
this.



- Original Message - 
From: "Vladimir Nesov" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 5:03 PM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no 
AGI



On Sat, Oct 25, 2008 at 12:54 AM, Matt Mahoney <[EMAIL PROTECTED]> 
wrote:


AIXI says that a perfect solution is not computable. [...]
It justifies our current ad-hoc approach to problem solving -- we have no 
choice.




No, it doesn't justify ad-hoc, even when perfect solution is
impossible, you could still have an optimal approximation under given
limitations.

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Mark Waser
AIXI says that a perfect solution is not computable. However, a very 
general principle of both scientific research and machine learning is to 
favor simple hypotheses over complex ones. AIXI justifies these practices 
in a formal way. It also says we can stop looking for a universal 
solution, which I think is important. It justifies our current ad-hoc 
approach to problem solving -- we have no choice.


Excellent.  Thank you.  Another good point to be pinned (since a number of 
people frequently go around and around on it).


Is there anything else that it tells us that is useful and not a 
distraction?


- - - - - - - - - - - - - -
Also, since you invoked the two in the same sentence as if they were 
different things . . . .


What is the distinction between scientific research and machine learning 
(other than who performs it, of course).  Or, re-phrased, what is the 
difference between a machine doing scientific research and a machine that is 
simply learning?






- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:54 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI



--- On Fri, 10/24/08, Mark Waser <[EMAIL PROTECTED]> wrote:


Cool.  And you're saying that intelligence is not
computable.  So why else
are we constantly invoking AIXI?  Does it tell us anything
else about
general intelligence?


AIXI says that a perfect solution is not computable. However, a very 
general principle of both scientific research and machine learning is to 
favor simple hypotheses over complex ones. AIXI justifies these practices 
in a formal way. It also says we can stop looking for a universal 
solution, which I think is important. It justifies our current ad-hoc 
approach to problem solving -- we have no choice.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
>> Relatively a small amount of code is my own creation, and the libraries I 
>> used, e.g. Sesame, Cglib, are well maintained.

Steve is a man after my own heart.  Grab the available solid 
infrastructure/libraries and build on top of it/them.

To me, it's all a question of the size and coherence of the communities 
building and maintaining the infrastructure.  My personal *best guess* is that 
the Windows community is more cohesive and therefore the rate of interoperable 
infrastructure is growing faster.  It's even clearer that *nix started with a 
big lead.  Currently I'd still say that which is best to use for any given 
project depends upon the project timeline, your comfort factor, whether or not 
you're willing to re-write and/or port, etc., etc. -- but I'm also increasingly 
of the *opinion* that the balance is starting to swing and swing hard . . . . 
(but I'm not really willing to defend that *opinion* against entrenched 
resistance -- merely to suggest and educate to those who don't know all of the 
things that are now available "out-of-the-box").

The only people that I mean to criticize are those who are attempting to do 
everything themselves and are re-inventing the same things that many others are 
doing and continue to do . . . 
  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 1:42 PM
  Subject: **SPAM** Re: [agi] On programming languages


  Hi Mark,

  I readily concede that .Net is superior to Java out-of-the box with respect 
to reflection and metadata support as you say.  I spent my first project year 
creating three successive versions of a Java persistence framework for an RDF 
quad store using third party libraries for these features.  Now I am completely 
satisfied with respect to these capabilities.  Relatively a small amount of 
code is my own creation, and the libraries I used, e.g. Sesame, Cglib, are well 
maintained.

  -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860




  - Original Message 
  From: Mark Waser <[EMAIL PROTECTED]>
  To: agi@v2.listbox.com
  Sent: Friday, October 24, 2008 12:28:36 PM
  Subject: Re: [agi] On programming languages


  AGI *really* needs an environment that comes with reflection and metadata 
support (including persistence, accessibility, etc.) baked right in.

  http://msdn.microsoft.com/en-us/magazine/cc301780.aspx

  (And note that the referenced article is six years old and several major 
releases back)

  This isn't your father's programming *language* . . . .

- Original Message - 
From: Stephen Reed 
To: agi@v2.listbox.com 
Sent: Friday, October 24, 2008 12:55 PM
Subject: **SPAM** Re: [agi] On programming languages


Russell asked:

But if it can't read the syntax tree, how will it know what the main body 
actually does?


My line of thinking arose while considering how to reason over syntax 
trees.  I came to realize that source code composition is somewhat analogous to 
program compilation in this way:  When a source code program is compiled into 
executable machine instructions, much of the conceptual intent of the 
programmer is lost, but the computer can none the less execute the program.  
Humans cannot read compiled binary code; they cannot reason about it.  We need 
source code for reasoning about programs.  Accordingly, I thought about the 
program composition process.  Exactly what is lost, i.e. not explicitly 
recorded, when a human programmer writes a correct source code program from 
high-level specifications.  This "lost" information is what I model as the 
nested composition framework.  When a programmer tries to understand a source 
code program written by someone else, the programmer must reverse-engineer the 
deductive chain that leads from the observed source code back to the perhaps 
only partially known original specifications.

I will not have a worked out example until next year, but a sketch would be 
as follows.  In Java, a main body could be a method or a block within a method. 
 For a method, I do not persist simply the syntax tree for the method, but 
rather the nested composition operations that when subsequently processed 
generate the method source code.   For a composed method I would persist:

  a.. composed preconditions with respect to the method parameters and 
possibly other scoped variables such as class variables

  b.. composed invariant conditions 
  c.. composed postconditions 
  d.. composed method comment 
  e.. composed method type 
  f.. composed method access modifiers (i.e. public, private, abstract 
etc.) 
  g.. composed method parameter type, comment, modifier (e.g. final) 
  h.. composed statements
Composed statements generate 

Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

The obvious fly in the ointment is that a lot of technical work is
done on Unix, so an AI project really wants to keep that option open
if at all possible. Is Mono ready for prime time yet?


No.  Unfortunately not.  But I would argue that most work done on *nix is 
not easily accessible to or usable by other work.



Why do you say that? Python code is concise and very readable, both of
which are positive attributes for extensibility and maintenance.


Yes, but not markedly more so than most other choices.

The problem is that Python does not enforce, promote, or even facilitate any 
number of practices and procedures that are necessary to make large complex 
project extensible and maintainable.  A person who knew all of these 
practices and procedures could laboriously follow and/or re-implement them 
but in many ways it's like trying to program in assembly language.  Quick 
and dirty (and simple) is always a trade-off for complex and lasting and 
extensible.


I always hate discussion about languages because to me the most advanced 
versions of Basic, Pascal, Object-Oriented C (whether ObjectiveC, C++, or 
C#), Java, etc. are basically the same language with slightly different 
syntax from case to case.  The *real* difference between the languages is 
the infrastructure/functionality that is immediately available.  I sneered 
at C++ in an earlier message not because of its syntax but because you are 
*always* burdened with memory management, garbage collection, etc.  This 
makes C++ code much more expensive (and slow) to develop, maintain, and 
extend.  Python does not have much of the rich infrastructure that other 
languages frequently have and all the really creative Python work seems to 
be migrating on to Ruby . . . .


As you say, I really don't want to choose either language or platform.  What 
I want is the most flexibility so that I can get access to the widest 
variety of already created and tested infrastructure.  Language is far more 
splintered than platform and each language has a rational place  (i.e. a set 
of trade-offs) where it is best.  .Net is a great platform because it 
provides the foundations for all languages to co-exist, work together, and 
even intermingle.  It's even better because it's building up more and more 
and more useful infrastructure while *nix development continues to fragment 
into the flavor of the month.  Take a look, for example, at the lambda 
closure and LINQ stuff that is now part of the platform and available to 
*all* of the supported languages.  Now look at all the people who are 
re-implementing all that stuff in their project.


The bad point of .Net is that it is now an absolute, stone-cold b*tch to 
learn because there is so much infrastructure available and it's not always 
clear what is most effective when . . . . but once you start using it, 
you'll find that due to the available infrastructure you'll need an order of 
magnitude less code (literally) to produce the same functionality.


But I'm going to quit here.  Language is politics and I find myself tiring 
easily of that these days  :-)





- Original Message - 
From: "Russell Wallace" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 12:56 PM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 5:37 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Instead of arguing language, why don't you argue platform?


Platform is certainly an interesting question. I take the view that
Common Lisp has the advantage of allowing me to defer the choice of
platform. You take the view that .Net has the advantage of allowing
you to defer the choice of language, which is not unreasonable. As far
as I know, there isn't a version of Common Lisp for .Net, but there is
a Scheme, which would be suitable for writing things that the AI needs
to understand, and still allow interoperability with other chunks of
code written in C# or whatever.

The obvious fly in the ointment is that a lot of technical work is
done on Unix, so an AI project really wants to keep that option open
if at all possible. Is Mono ready for prime time yet?

And as for Python?  Great for getting reasonably small projects up quickly
and easily.  The cost is trade-offs on extensibility and maintenance --
which means that, for a large, complex system, some day you're either going
to rewrite and replace it (not necessarily a bad thing) or you're going to
rue the day that you used it.


Why do you say that? Python code is concise and very readable, both of
which are positive attributes for extensibility and maintenance.



Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
Cool.  And you're saying that intelligence is not computable.  So why else 
are we constantly invoking AIXI?  Does it tell us anything else about 
general intelligence?


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 12:59 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI


--- On Fri, 10/24/08, Mark Waser <[EMAIL PROTECTED]> wrote:


The value of AIXI is not that it tells us how to solve AGI.
The value is that it tells us intelligence is not computable



Define "not computable" Too many people are
incorrectly interpreting it to mean "not implementable on a
computer".


Not implementable by a Turing machine. AIXI says the optimal solution is to 
find the shortest program consistent with observation so far. This implies 
the ability to compute Kolmogorov complexity.

http://en.wikipedia.org/wiki/Kolmogorov_complexity
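
For reference, the quantity in question, stated loosely:

    K(x) = \min \{\, |p| \;:\; U(p) = x \,\}

the length of the shortest program for a fixed universal machine U that outputs
x.  K is not a computable function of x -- a program that could compute it
could be used to print a string of greater complexity than that program's own
length, a contradiction -- and that is the precise sense in which "find the
shortest program consistent with the observations" is an uncomputable ideal.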

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] On programming languages

2008-10-24 Thread Mark Waser
AGI *really* needs an environment that comes with reflection and metadata 
support (including persistence, accessibility, etc.) baked right in.

http://msdn.microsoft.com/en-us/magazine/cc301780.aspx

(And note that the referenced article is six years old and several major 
releases back)

This isn't your father's programming *language* . . . .
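
To illustrate the flavor of capability meant by "baked right in", here is a
rough Python sketch of runtime reflection plus ad-hoc metadata (the class and
the provenance tag are made-up examples; the .NET attribute and metadata
machinery referenced above is the richer, declarative version of this):

    import inspect

    class Concept:
        """Toy knowledge-base object."""
        def __init__(self, name):
            self.name = name
        def relate(self, other):
            return f"{self.name} -> {other}"

    c = Concept("dog")
    # Reflection: enumerate methods and attributes at runtime, without knowing them in advance.
    methods = [name for name, _ in inspect.getmembers(c, predicate=inspect.ismethod)]
    attributes = dict(vars(c))
    # Metadata: tag a method at runtime; later code can discover the tag the same way.
    Concept.relate.provenance = "hand-written"
    print(methods, attributes, Concept.relate.provenance)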

  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 12:55 PM
  Subject: **SPAM** Re: [agi] On programming languages


  Russell asked:

  But if it can't read the syntax tree, how will it know what the main body 
actually does?


  My line of thinking arose while considering how to reason over syntax trees.  
I came to realize that source code composition is somewhat analogous to program 
compilation in this way:  When a source code program is compiled into 
executable machine instructions, much of the conceptual intent of the 
programmer is lost, but the computer can none the less execute the program.  
Humans cannot read compiled binary code; they cannot reason about it.  We need 
source code for reasoning about programs.  Accordingly, I thought about the 
program composition process.  Exactly what is lost, i.e. not explicitly 
recorded, when a human programmer writes a correct source code program from 
high-level specifications.  This "lost" information is what I model as the 
nested composition framework.  When a programmer tries to understand a source 
code program written by someone else, the programmer must reverse-engineer the 
deductive chain that leads from the observed source code back to the perhaps 
only partially known original specifications.

  I will not have a worked out example until next year, but a sketch would be 
as follows.  In Java, a main body could be a method or a block within a method. 
 For a method, I do not persist simply the syntax tree for the method, but 
rather the nested composition operations that when subsequently processed 
generate the method source code.   For a composed method I would persist:

a.. composed preconditions with respect to the method parameters and 
possibly other scoped variables such as class variables

b.. composed invariant conditions
c.. composed postconditions
d.. composed method comment
e.. composed method type
f.. composed method access modifiers (i.e. public, private, abstract etc.)
g.. composed method parameter type, comment, modifier (e.g. final)
h.. composed statements
  Composed statements generate Java statements such as an assignment statement, 
block statement and so forth.  You can see that there is a tree structure that 
can be navigated when performing a deductive composition operation like "is 
ArrayList imported into the containing class? - if not then compose that import 
in the right place". 

  Persisted composition instances are KB terms that can be linked to the 
justifying algorithmic and domain knowledge.  I hypothesize this is cleaner and 
more flexible than directly tying lower-level persisted syntax trees to their 
justifications. 
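
To make the idea concrete, a minimal Python sketch of what one persisted
composition record might look like and how it could regenerate Java source; the
field names and rendering are an outside illustration, not Texai's actual
schema:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ComposedMethod:
        comment: str
        modifiers: List[str]        # e.g. ["public"]
        return_type: str            # e.g. "int"
        name: str
        parameters: List[str]       # e.g. ["final int a"]
        preconditions: List[str]    # kept as reasoning metadata, not emitted
        postconditions: List[str]
        statements: List[str]       # composed statements, already rendered as Java

        def to_java(self) -> str:
            # Regenerate Java source from the composition record.
            header = f"/** {self.comment} */\n"
            signature = f"{' '.join(self.modifiers)} {self.return_type} {self.name}({', '.join(self.parameters)})"
            body = "\n".join("    " + s for s in self.statements)
            return header + signature + " {\n" + body + "\n}"

    m = ComposedMethod(comment="Adds two numbers.", modifiers=["public"],
                       return_type="int", name="add",
                       parameters=["final int a", "final int b"],
                       preconditions=[], postconditions=["result == a + b"],
                       statements=["return a + b;"])
    print(m.to_java())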


   -Steve


  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860




  - Original Message 
  From: Russell Wallace <[EMAIL PROTECTED]>
  To: agi@v2.listbox.com
  Sent: Friday, October 24, 2008 10:28:39 AM
  Subject: Re: [agi] On programming languages

  On Fri, Oct 24, 2008 at 4:10 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:
  > Hi Russell,
  > Although I've already chosen an implementation language for my Texai project
  > - Java, I believe that my experience may interest you.

  Very much so, thank you.

  > I moved up one level of procedural abstraction to view program composition
  > as the key intelligent activity.  Supporting this abstraction level is the
  > capability to perform source code editing for the desired target language -
  > in my case Java.  In my paradigm, its not the program syntax tree that gets
  > persisted in the knowledge base but rather the nested composition framework
  > that bottoms out in primitives that generate Java program elements.  The
  > nested composition framework is my attempt to model the conceptual aspects
  > of program composition.  For example a procedure may have an initialization
  > section, a main body, and a finalization section.  I desire Texai to be able
  > to figure out for itself where to insert a new required variable in the
  > source code so that it has the appropriate scope, and so forth.

  But if it can't read the syntax tree, how will it know what the main
  body actually does?







Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

But I thought I'd mention that for OpenCog we are planning on a
cross-language approach.  The core system is C++, for scalability and
efficiency reasons, but the MindAgent objects that do the actual AI
algorithms should be creatable in various languages, including Scheme or
LISP.


*nods* As you know, I'm of the opinion that C++ is literally the worst
possible choice in this context. However...


ROTFL.  OpenCog is dead-set on reinventing the wheel while developing the 
car.  They may eventually create a better product for doing so -- but many 
of us software engineers contend that the car could be more quickly and 
easily developed without going that far back (while the OpenCog folk contend 
that the current wheel is insufficient).


(To be clear, the specific "wheels" in this case are things like memory 
management, garbage collection, etc. -- all those things that need to be 
written in C++ and are baked into more modern languages and platforms).


Note:  You can even create AGI in machine code -- I just wouldn't want to 
try (unless, of course, it's done simply by creating it in a really competent set of 
languages and compiling it down)


But then again, this is an argument that Ben and I have been having for 
years (and he, admittedly has the dollars and the programmers ;-).



- Original Message - 
From: "Russell Wallace" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 12:45 PM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 5:30 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Interesting!  I have a good friend who is also an AGI enthusiast who
followed the same path as you ... a lot of time burned making his own
superior, stripped-down, AGI-customized variant of LISP, followed by a
decision to just go with LISP ;-)


I'm not surprised :-)


But I thought I'd mention that for OpenCog we are planning on a
cross-language approach.  The core system is C++, for scalability and
efficiency reasons, but the MindAgent objects that do the actual AI
algorithms should be creatable in various languages, including Scheme or
LISP.


*nods* As you know, I'm of the opinion that C++ is literally the worst
possible choice in this context. However...


We can do self-modification of components of the system by coding these
components in LISP or other highly manipulable languages.


This is good, and for what it's worth I think the best approach for
OpenCog at this stage to aim to stabilize the C++ core as soon as
possible, and try to write AI code at the higher level in Lisp, Combo
or whatever.




Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser

I'm making the point "natural language is incompletely defined" for
you, but *not* the point "natural language suffers from Godelian
incompleteness", unless you specify what concept of "proof" applies to
natural language.


I'm back to being lost I think.  You agree that natural language is 
incompletely defined.  Cool.  My saying that natural language suffers from 
Godelian incompleteness merely adds that it *can't* be defined.  Do you mean 
to say that natural languages *can* be completely defined?  Or are you 
arguing that I can't *prove* that they can't be defined?  If it is the last, 
then that's like saying that Godel's theorem can't prove itself -- which is 
exactly the point to what Godel's theorem says . . . .



Have you heard of Tarski's undefinability theorem? It is relevant to
this discussion.
http://en.wikipedia.org/wiki/Indefinability_theory_of_truth


Yes.  In fact, the restatement of Tarski's theory as "No sufficiently 
powerful language is strongly-semantically-self-representational" also 
fundamentally says that I can't prove in natural language what you're asking 
me to prove about natural language.


Personally, I always have trouble separating out Godel and Tarski as they 
are obviously both facets of the same underlying principles.


I'm still not sure of what you're getting at.  If it's a "proof", then Godel 
says I can't give it to you.  If it's something else, then I'm not getting 
it.



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 11:31 AM
Subject: Re: [agi] constructivist issues



Mark,

"It makes sense but I'm arguing that you're making my point for me . . . 
."


I'm making the point "natural language is incompletely defined" for
you, but *not* the point "natural language suffers from Godelian
incompleteness", unless you specify what concept of "proof" applies to
natural language.

"It emphatically does *not* tell us anything about "any approach that
can be implemented on normal computers" and this is where all the
people who insist that "because computers operate algorithmically,
they will never achieve true general intelligence" are going wrong."

It tells us that any approach that is implementable on a normal
computer will not always be able to come up with correct answers to
all halting-problem questions (along with other problems that suffer
from incompleteness).
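
The canonical one-paragraph reason, sketched in Python; halts() here is a
hypothetical oracle, and the point of the sketch is precisely that no correct
implementation of it can exist:

    def halts(src, arg):
        # Hypothetical oracle: True iff the program `src` halts on input `arg`.
        # No correct implementation can exist; this stub only lets the file parse.
        raise NotImplementedError

    def paradox(src):
        if halts(src, src):
            while True:      # oracle says "halts" -> loop forever
                pass
        # oracle says "loops forever" -> return (halt) immediately

    # Applied to its own source, paradox halts exactly when halts() says it
    # doesn't, so no computable halts() is right about every program -- which is
    # the halting-problem limit referred to above.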

"You are correct in saying that Godel's theory has been improperly
overused and abused over the years but my point was merely that AGI is
Godellian Incomplete, natural language is Godellian Incomplete, "

Specify "truth" and "proof" in these domains before applying the
theorem, please. For "agi" I am OK, since "X is provable" would mean
"the AGI will come to believe X", and "X is true" would mean something
close to what it intuitively means. But for natural language? "Natural
language will come to believe X" makes no sense, so it can't be our
definition of proof...

Really, it is a small objection, and I'm only making it because I
don't want the theorem abused. You could fix your statement just by
saying "any proof system we might want to provide" will be incomplete
for "any well-defined subset of natural language semantics that is
large enough to talk fully about numbers". Doing this just seems
pointless, because the real point you are trying to make is that the
semantics is ill-defined in general, *not* that some hypothetical
proof system is incomplete.

"and effectively AGI-Complete most probably pretty much exactly means
Godellian-Incomplete. (Yes, that is a radically new phrasing and not
necessarily quite what I mean/meant but . . . . )."

I used to agree that Godelian incompleteness was enough to show that
the semantics of a knowledge representation was strong enough for AGI.
But, that alone doesn't seem to guarantee that a knowledge
representation can faithfully reflect concepts like "continuous
differentiable function" (which gets back to the whole discussion with
Ben).

Have you heard of Tarski's undefinability theorem? It is relevant to
this discussion.
http://en.wikipedia.org/wiki/Indefinability_theory_of_truth

--Abram

On Fri, Oct 24, 2008 at 9:19 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of "provably true" and
"semantically true" for natural language). Does that make sense, 

Re: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
>> The value of AIXI is not that it tells us how to solve AGI. The value is 
>> that it tells us intelligence is not computable

Define "not computable"  Too many people are incorrectly interpreting it to 
mean "not implementable on a computer".

  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 10:49 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI


The value of AIXI is not that it tells us how to solve AGI. The value 
is that it tells us intelligence is not computable.

-- Matt Mahoney, [EMAIL PROTECTED]

    --- On Fri, 10/24/08, Mark Waser <[EMAIL PROTECTED]> wrote:

  From: Mark Waser <[EMAIL PROTECTED]>
  Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI
  To: agi@v2.listbox.com
  Date: Friday, October 24, 2008, 9:51 AM


  >> E.g. according to this, AIXI (with infinite computational power) 
but not AIXItl
  >> would have general intelligence, because the latter can only find 
regularities
  >> expressible using programs of length bounded by l and runtime 
bounded
  >> by t

  

  I hate AIXI because not only does it have infinite computational 
power but people also unconsciously assume that it has infinite data (or, at 
least, sufficient data to determine *everything*).

  AIXI is *not* a general intelligence by any definition that I would 
use.  It is omniscient and need only be a GLUT (giant look-up table) and I 
argue that that is emphatically *NOT* intelligence.  

  AIXI may have the problem-solving capabilities of general 
intelligence but does not operate under the constraints that *DEFINE* a general 
intelligence.  If it had to operate under those constraints, it would fail, 
fail, fail.

  AIXI is useful for determining limits but horrible for drawing other 
types of conclusions about GI.

  


- Original Message - 
From: Ben Goertzel 
To: agi@v2.listbox.com 
Sent: Friday, October 24, 2008 5:02 AM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess 
it is no AGI





On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger <[EMAIL 
PROTECTED]> wrote:


  No Mike. AGI must be able to discover regularities of all kind in 
all
  domains.
  If you can find a single domain where your AGI fails, it is no 
AGI.


According to this definition **no finite computational system can be an AGI**,
so this definition is obviously overly strong for any practical purposes

E.g. according to this, AIXI (with infinite computational power) 
but not AIXItl
would have general intelligence, because the latter can only find 
regularities
expressible using programs of length bounded by l and runtime 
bounded
by t

Unfortunately, the pragmatic notion of AGI we need to use as 
researchers is
not as simple as the above ... but fortunately, it's more 
achievable ;-)

One could view the pragmatic task of AGI as being able to discover 
all regularities
expressible as programs with length bounded by l and runtime 
bounded by t ...
[and one can add a restriction about the resources used to make this
discovery], but the thing is, this depends highly on the underlying 
computational model,
which then can be used to encode some significant "domain bias."

-- Ben G
 





  agi | Archives  | Modify Your Subscription   


--
agi | Archives  | Modify Your Subscription  
   


--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

Instead of arguing language, why don't you argue platform?

Name a language and there's probably a .Net version.  They are all 
interoperable, so you can use whatever is most appropriate.  Personally, now 
that you can easily embed functional-language statements in procedural 
languages (for example, making F#-style calls from your C# code and vice 
versa), I think it is silly to use languages and platforms that lack the wide 
variety of features and the interoperability of a single, common low-level 
architecture supporting all the variety that people need and want.


(but, then again, I'm just a voice in the wilderness on this list ;-)


And as for Python?  Great for getting reasonably small projects up quickly 
and easily.  The cost is trade-offs on extensibility and maintenance --  
which means that, for a large, complex system, some day you're either going 
to rewrite and replace it (not necessarily a bad thing) or you're going to 
rue the day that you used it.



- Original Message - 
From: "Russell Wallace" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 10:41 AM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 3:37 PM, Eric Burton <[EMAIL PROTECTED]> wrote:

Due to a characteristic paucity of datatypes, all powerful, and a
terse, readable syntax, I usually recommend Python for any project
that is just out the gate. It's my favourite way by far at present to
mangle huge tables. By far!


Python is definitely a very good language. Unless this has changed
since I last looked at it, though, it doesn't expose the parse tree,
so isn't suitable for representing AI content?


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
But I do not agree that most humans can be scientists. If this is 
necessary

for general intelligence then most humans are not general intelligences.


Soften "be scientists" to "generally use the scientific method".  Does this 
change your opinion?


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 10:27 AM
Subject: AW: [agi] constructivist issues



Mark Waser wrote:



Can we get a listing of what you believe these limitations are and whether
or not you believe that they apply to humans?

I believe that humans are constrained by *all* the limits of finite 
automata


yet are general intelligences so I'm not sure of your point.
<<<<<<<<

It is also my opinion that humans are constrained by *all* the limits of
finite automata.
But I do not agree that most humans can be scientists. If this is 
necessary

for general intelligence then most humans are not general intelligences.

It depends on your definition of general intelligence.

Surely there are rules (=algorithms) to be a scientist. If not, AGI would
not be possible and there would not be any scientist at all.

But you cannot separate the rules (algorithm) from the evaluation of whether a
human or a machine is intelligent. Intelligence comes essentially from these
rules and from a lot of data.

The mere ability to use arbitrary rules does not imply general 
intelligence.

Your computer has this ability but without the rules it is not intelligent
at all.

- Matthias





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] On programming languages

2008-10-24 Thread Mark Waser

Abram,

   Would you agree that this thread is analogous to our debate?

- Original Message - 
From: "Vladimir Nesov" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 6:49 AM
Subject: **SPAM** Re: [agi] On programming languages



On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov <[EMAIL PROTECTED]> 
wrote:



Needing many different
features just doesn't look like a natural thing for AI-generated
programs.


No, it doesn't, does it? And then you run into this requirement that
wasn't obvious on day one, and you cater for that, and then you run
into another requirement, that has to be dealt with in a different
way, and then you run into another... and you end up realizing you've
wasted a great deal of irreplaceable time for no good reason
whatsoever.

So I figure I might as well document the mistake, in case it saves
someone having to repeat it.



Well, my point was that maybe the mistake is use of additional
language constructions and not their absence? You yourself should be
able to emulate anything in lambda-calculus (you can add interpreter
for any extension as a part of a program), and so should your AI, if
it's to ever learn open-ended models.
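
(As a toy illustration of that "emulate anything" point, a minimal sketch in 
Python: Church encodings build booleans and numerals out of nothing but 
one-argument functions, and richer constructs can be layered on in the same way.)

    # Church encodings: data emulated with nothing but lambdas.
    TRUE  = lambda a: lambda b: a
    FALSE = lambda a: lambda b: b
    IF    = lambda c: lambda t: lambda e: c(t)(e)

    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    to_int = lambda n: n(lambda k: k + 1)(0)        # decode for display only

    TWO, THREE = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
    print(to_int(ADD(TWO)(THREE)))                  # 5
    print(IF(TRUE)("then-branch")("else-branch"))   # then-branch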

--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
>> E.g. according to this, AIXI (with infinite computational power) but not 
>> AIXItl
>> would have general intelligence, because the latter can only find 
>> regularities
>> expressible using programs of length bounded by l and runtime bounded
>> by t



I hate AIXI because not only does it have infinite computational power but 
people also unconsciously assume that it has infinite data (or, at least, 
sufficient data to determine *everything*).

AIXI is *not* a general intelligence by any definition that I would use.  It is 
omniscient and need only be a GLUT (giant look-up table) and I argue that that 
is emphatically *NOT* intelligence.  

AIXI may have the problem-solving capabilities of general intelligence but does 
not operate under the constraints that *DEFINE* a general intelligence.  If it 
had to operate under those constraints, it would fail, fail, fail.

AIXI is useful for determining limits but horrible for drawing other types of 
conclusions about GI.
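
(For reference, the construction being criticized is, roughly, Hutter's 
expectimax over all programs consistent with the interaction history -- a 
sketch of the standard formula, where U is the universal monotone Turing 
machine and l(q) the length of program q:

    a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

)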




  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Friday, October 24, 2008 5:02 AM
  Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI





  On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:


No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.


  According to this definition **no finite computational system can be an AGI**,
  so this definition is obviously overly strong for any practical purposes

  E.g. according to this, AIXI (with infinite computational power) but not 
AIXItl
  would have general intelligence, because the latter can only find regularities
  expressible using programs of length bounded by l and runtime bounded
  by t

  Unfortunately, the pragmatic notion of AGI we need to use as researchers is
  not as simple as the above ... but fortunately, it's more achievable ;-)

  One could view the pragmatic task of AGI as being able to discover all 
regularities
  expressible as programs with length bounded by l and runtime bounded by t ...
  [and one can add a restriction about the resources used to make this
  discovery], but the thing is, this depends highly on the underlying 
computational model,
  which then can be used to encode some significant "domain bias."

  -- Ben G
   




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser
This does not imply that people usually do not use visual patterns to 
solve

chess.
It only implies that visual patterns are not necessary.


So . . . wouldn't dolphins and bats use sonar patterns to play chess?

So . . . is it *vision* or is it the most developed (for the individual), 
highest bandwidth sensory modality that allows the creation and update of a 
competent domain model?


Humans usually do use vision . . . . Sonar may prove to be more easily 
implemented for AGI.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:30 AM
Subject: **SPAM** AW: [agi] If your AGI can't learn to play chess it is no 
AGI



This does not imply that people usually do not use visual patterns to 
solve

chess.
It only implies that visual patterns are not necessary.

Since I do not know any good blind chess player I would suspect that visual
patterns are better for chess than those patterns which are used by blind people.

http://www.psych.utoronto.ca/users/reingold/publications/Reingold_Charness_Pomplun_&_Stampe_press/

http://www.psychology.gatech.edu/create/pubs/reingold&charness_perception-in-chess_2005_underwood.pdf


From: Trent Waddington [mailto:[EMAIL PROTECTED] wrote

http://www.eyeway.org/inform/sp-chess.htm

Trent




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser
The limitations of Godelian completeness/incompleteness are a subset of 
the much stronger limitations of finite automata.


Can we get a listing of what you believe these limitations are and whether 
or not you believe that they apply to humans?


I believe that humans are constrained by *all* the limits of finite automata 
yet are general intelligences so I'm not sure of your point.


- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:09 AM
Subject: AW: [agi] constructivist issues


The limitations of Godelian completeness/incompleteness are a subset of the
much stronger limitations of finite automata.

If you want to build a spaceship to go to Mars, it is of no practical
relevance to think about whether it is theoretically possible to move through
wormholes in the universe.

I think this comparison is adequate to evaluate the role of Gödel's theorem
for AGI.

- Matthias




Abram Demski [mailto:[EMAIL PROTECTED] wrote


I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Mark Waser

No Mike. AGI must be able to discover regularities of all kind in all
domains.


Must it be able to *discover* regularities or must it be able to be taught 
and subsequently effectively use regularities?  I would argue the latter. 
(Can we get a show of hands of those who believe the former?  I think that 
it's a small minority but . . . )



If you can find a single domain where your AGI fails, it is no AGI.


Failure is an interesting evaluation.  Ben's made it quite clear that 
advanced science is a domain that stupid (if not non-exceptional) humans 
fail at.  Does that mean that most humans aren't general intelligences?



Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans can solve problems of this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.


Chess is a good milestone because of its very difficulty.  The reason why 
humans learn chess so easily (and that is a relative term) is because they 
already have an excellent spatial domain model in place, a ton of strategy 
knowledge available from other learned domains, and the immense array of 
mental tools that we're going to need to bootstrap an AI.  Chess as a GI 
task (or, via a GI approach) is emphatically NOT easily programmable.



- Original Message - 
From: "Dr. Matthias Heger" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 24, 2008 4:09 AM
Subject: **SPAM** AW: [agi] If your AGI can't learn to play chess it is no 
AGI





No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.

Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans can solve problems of this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.

Of course it is not sufficient for AGI. But before you think about
sufficient features, necessary abilities are good milestones to verify
whether your roadmap towards AGI will not go into a dead-end after a long
way of vague hope, that future embodied experience will solve your 
problems

which you cannot solve today.

- Matthias



Mike wrote
P.S. Matthias seems to be cheerfully cutting his own throat here. The idea
of a single domain AGI  or pre-AGI is a contradiction in terms every which
way - not just in terms of domains/subjects or fields, but also sensory
domains.




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-24 Thread Mark Waser

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of "provably true" and
"semantically true" for natural language). Does that make sense, or am
I still confusing?


It makes sense but I'm arguing that you're making my point for me . . . .


agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...


Godel's incompleteness theorem tells us important limitations of all formal 
*and complete* approaches and systems (like logic).  It clearly means that 
any approach to AI is going to have to be open-ended (Godellian-incomplete? 
;-)


It emphatically does *not* tell us anything about "any approach that can be 
implemented on normal computers" and this is where all the people who insist 
that "because computers operate algorithmically, they will never achieve 
true general intelligence" are going wrong.


The latter argument is similar to saying that because an inductive 
mathematical proof always operates only on the next number, it will 
never successfully prove anything about infinity.  I'm a firm believer in 
inductive proofs and in the fact that general intelligences can be implemented 
on the computers that we have today.


You are correct in saying that Godel's theory has been improperly overused 
and abused over the years but my point was merely that AGI is Godellian 
Incomplete, natural language is Godellian Incomplete, and effectively 
AGI-Complete most probably pretty much exactly means Godellian-Incomplete. 
(Yes, that is a radically new phrasing and not necessarily quite what I 
mean/meant but . . . . ).



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 23, 2008 11:42 PM
Subject: Re: [agi] constructivist issues



Mark,

I'm saying Godelian completeness/incompleteness can't be easily
defined in the context of natural language, so it shouldn't be applied
there without providing justification for that application
(specifically, unambiguous definitions of "provably true" and
"semantically true" for natural language). Does that make sense, or am
I still confusing?

Matthias,

I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram

On Thu, Oct 23, 2008 at 4:07 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

So to sum up, while you think linguistic vagueness comes from Godelian
incompleteness, I think Godelian incompleteness can't even be defined
in this context, due to linguistic vagueness.


OK.  Personally, I think that you did a good job of defining Godelian
Incompleteness this time but arguably you did it by reference and by
building a new semantic structure as you went along.

On the other hand, you now seem to be arguing that my thinking that
linguistic vagueness comes from Godelian incompleteness is wrong because
Godelian incompleteness can't be defined . . . .

I'm sort of at a loss as to how to proceed from here.  If Godelian
Incompleteness can't be defined, then by definition I can't prove 
anything

but you can't disprove anything.

This is nicely Escheresque and very Hofstadterian but . . . .


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Thursday, October 23, 2008 11:54 AM
Subject: Re: [agi] constructivist issues





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser

So to sum up, while you think linguistic vagueness comes from Godelian
incompleteness, I think Godelian incompleteness can't even be defined
in this context, due to linguistic vagueness.


OK.  Personally, I think that you did a good job of defining Godelian 
Incompleteness this time but arguably you did it by reference and by 
building a new semantic structure as you went along.


On the other hand, you now seem to be arguing that my thinking that 
linguistic vagueness comes from Godelian incompleteness is wrong because 
Godelian incompleteness can't be defined . . . .


I'm sort of at a loss as to how to proceed from here.  If Godelian 
Incompleteness can't be defined, then by definition I can't prove anything 
but you can't disprove anything.


This is nicely Escheresque and very Hofstadterian but . . . .


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 23, 2008 11:54 AM
Subject: Re: [agi] constructivist issues



Mark,

My type 1 & 2 are probably the source of your confusion, since I
phrased them so that (as you said) they depend on "intention".
Logicians  codify the intension using semantics, so it is actually
well defined, even though it sounds messy. But, since that explanation
did not work well, let me try to put it a completely different way
rather than trying to better explain the difference between 1 and 2.

Godel's incompleteness theorem says that any logic with a sufficiently
strong semantics will be syntactically incomplete; there will be
sentences that are true according to the semantics but that, under the
allowed proofs, are neither provable nor refutable. So Godel's
theorem is about an essential lack of match-up between proof and
truth, or as is often said, syntax and semantics.
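
For reference, a standard precise form of the theorem, stated for arithmetic 
(the usual measuring stick), is:

    \text{If } T \supseteq Q \text{ is consistent and recursively axiomatizable, then there is a sentence } G_T
    \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T \text{ (Rosser's form), although } \mathbb{N} \models G_T .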

To apply the theorem to natural language, we've got to identify the
syntax and semantics: the notions of "proof" and "truth"
that apply. But in attempting to define these, we will run into some
serious problems: proof and truth in natural language is only
partially defined. Furthermore, those "serious problems" are (it seems
to me) precisely what you are referring to.

So to sum up, while you think linguistic vagueness comes from Godelian
incompleteness, I think Godelian incompleteness can't even be defined
in this context, due to linguistic vagueness.

--Abram

On Thu, Oct 23, 2008 at 9:54 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

But, I still do not agree with the way you are using the incompleteness
theorem.


Um.  OK.  Could you point to a specific example where you disagree?  I'm 
a

little at a loss here . . . .


It is important to distinguish between two different types of
incompleteness.
1. Normal Incompleteness-- a logical theory fails to completely specify
something.
2. Godelian Incompleteness-- a logical theory fails to completely 
specify

something, even though we want it to.


I'm also not getting this.  If I read the words, it looks like the
difference between Normal and Godelian incompleteness is based upon our
desires.  I think I'm having a complete disconnect with your intended
meaning.


However, it seems like all you need is type 1 completeness for what


you are saying.


So, Godel's theorem is way overkill here in my opinion.


Um.  OK.  So I used a bazooka on a fly?  If it was a really pesky fly and 
I

didn't destroy anything else, is that wrong?  :-)

It seems as if you're not arguing with my conclusion but saying that my
arguments were way better than they needed to be (like I'm being
over-efficient?) . . . .

= = = = =

Seriously though, I'm having a complete disconnect here.  Maybe I'm just
having a bad morning but . . .  huh?   :-)
If I read the words, all I'm getting is that you disagree with the way 
that
I am using the theory because the theory is overkill for what is 
necessary.


- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 22, 2008 9:05 PM
Subject: Re: [agi] constructivist issues


Mark,

I own and have read the book-- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel Escher Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness
theorem.

It is important to distinguish between two different types of
incompleteness.

1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
"semantics" is used. If a logic's provably-true statements don't match up
to its semantically-true statements, it is incomplete.

Re: [agi] Understanding and Problem Solving

2008-10-23 Thread Mark Waser
I like that.  NLU isn't AGI-complete but achieving it is (if you've got a 
vaguely mammalian-brain-like architecture   :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Thursday, October 23, 2008 10:18 AM
  Subject: **SPAM** Re: [agi] Understanding and Problem Solving



  On whether NLU is AGI-complete, it really depends on the particulars of the 
definition of NLU ... but according to my own working definition of NLU I agree 
that it isn't ... 

  However, as I stated before, within any vaguely mammalian-brain-like AI 
architecture, I do suspect that achieving NLU is AGI-complete...

  -- Ben G



  On Thu, Oct 23, 2008 at 10:12 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> 
wrote:

I do not agree. Understanding a domain does not imply the ability to solve 
problems in that domain.

And the ability to solve problems in a domain does not even imply having a 
generally deeper understanding of that domain.



Once again my example of the problem to find a path within a graph from 
node A to node B:

Program p1 (= problem solver) can find a path.

Program p2  (= expert in understanding) can verify and analyze paths.



For instance, p2 could be able to compare the length of the path for the first 
half of the nodes with the length of the path for the second half of the nodes. 
It is not necessary that  P1 can do this as well.



P2 can not necessarily find a path. But p1 can not necessarily analyze its 
solution.
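
As a toy illustration of the p1/p2 split -- a minimal sketch in Python; the 
names find_path and verify_path are illustrative only, not part of the example:

    from collections import deque

    def find_path(graph, start, goal):          # p1: the problem solver
        """Breadth-first search over a dict-of-lists graph; returns a path or None."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    def verify_path(graph, path, start, goal):  # p2: checks/analyzes a given path
        """Verifies a proposed path without knowing how to construct one."""
        return (path[0] == start and path[-1] == goal and
                all(b in graph.get(a, []) for a, b in zip(path, path[1:])))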



Understanding  and problem solving are different things which might have a 
common subset but it is wrong that the one implies the other one or vice versa.



And that's the main reason why natural language understanding is not 
necessarily AGI-complete.



-Matthias





Terren Suydam [mailto:[EMAIL PROTECTED]  wrote:






  Once again, there is a depth to understanding - it's not simply a 
binary proposition.

  Don't you agree that a grandmaster understands chess better than you 
do, even if his moves are understandable to you in hindsight?

  If I'm not good at math, I might not be able to solve y=3x+4 for x, 
but I might understand that y equals 3 times x plus four. My understanding is 
superficial compared to someone who can solve for x. 

  Finally, don't you agree that understanding natural language requires 
solving problems? If not, how would you account for an AI's ability to 
understand novel metaphor? 

  Terren

  --- On Thu, 10/23/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

  From: Dr. Matthias Heger <[EMAIL PROTECTED]>
  Subject: [agi] Understanding and Problem Solving
  To: agi@v2.listbox.com
  Date: Thursday, October 23, 2008, 1:47 AM

  Terren Suydam wrote:

  >>>  

  Understanding goes far beyond mere knowledge - understanding *is* the 
ability to solve problems. One's understanding of a situation or problem is 
only as deep as one's (theoretical) ability to act in such a way as to achieve 
a desired outcome. 

  <<<  



  I disagree. A grandmaster of chess can explain his decisions and I 
will understand them. Einstein could explain his theory to other physicists (at 
least a subset) and they could understand it.



  I can read a proof in mathematics and I will understand it – because 
I only have to understand (= check) every step of the proof.



  Problem solving is much much more than only understanding.

  Problem solving is the ability to *create* a sequence of actions to 
change a system's state from A to a desired state B.



  For example: Problem: Find a path from A to B within a graph.

  An algorithm which can check a solution and can answer details about 
the solution is not necessarily able to find a solution.



  -Matthias






--

agi | Archives | Modify Your Subscription
   
   
 






  agi | Archives | Modify Your Subscription
 
 





  agi | Archives  | Modify Your Subscription  




  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein




--

Re: Lojban (was Re: [agi] constructivist issues)

2008-10-23 Thread Mark Waser
Hi.  I don't understand the following statements.  Could you explain them some 
more?

- Natural language can be learned from examples. Formal language can not.

I think that you're basing this upon the methods that *you* would apply to each 
of the types of language.  It makes sense to me that, because of the 
regularities of a formal language, you would be able to use more effective 
methods -- but it doesn't mean that the methods used on natural language 
wouldn't work (just that they would be as inefficient as they are on natural 
languages).

I would also argue that the same argument applies to the first of the 
following two statements.

- Formal language must be parsed before it can be understood. Natural language 
must be understood before it can be parsed.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 9:23 PM
  Subject: Lojban (was Re: [agi] constructivist issues)


Why would anyone use a simplified or formalized English (with regular 
grammar and no ambiguities) as a path to natural language understanding? Formal 
language processing has nothing to do with natural language processing other 
than sharing a common lexicon that make them appear superficially similar.

- Natural language can be learned from examples. Formal language can 
not.
- Formal language has an exact grammar and semantics. Natural language 
does not.
- Formal language must be parsed before it can be understood. Natural 
language must be understood before it can be parsed.
- Formal language is designed to be processed efficiently on a fast, 
reliable, sequential computer that neither makes nor tolerates errors, between 
systems that have identical, fixed language models. Natural language evolved to 
be processed efficiently by a slow, unreliable, massively parallel computer 
with enormous memory in a noisy environment between systems that have different 
but adaptive language models.

So how does yet another formal language processing system help us 
understand natural language? This route has been a dead end for 50 years, in 
spite of the ability to always make some initial progress before getting stuck.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Wed, 10/22/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

  From: Ben Goertzel <[EMAIL PROTECTED]>
  Subject: Re: [agi] constructivist issues
  To: agi@v2.listbox.com
  Cc: [EMAIL PROTECTED]
  Date: Wednesday, October 22, 2008, 12:27 PM



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled 
via reference to WordNet via usages like run_1, run_2, etc. ... or as you say 
by using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be 
handled in a similar way, e.g. by defining an ontology of preposition meanings 
like with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts 
of subscripts, and in this way to recognize a highly controlled English that 
would be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple 
sentences, so the only real hassle to deal with is disambiguation.   We could 
use similar hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with 
an AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.
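
(A minimal sketch, in Python, of how such subscripted tokens might be split 
into a lemma plus sense index -- the helper read_token below is hypothetical, 
not part of RelEx:)

    import re

    # "lived_6" -> ("lived", 6); an unsubscripted word defaults to sense 1.
    TOKEN = re.compile(r"^(?P<lemma>[A-Za-z]+)(?:_(?P<sense>\d+))?$")

    def read_token(tok):
        m = TOKEN.match(tok)
        return (m.group("lemma"), int(m.group("sense") or 1))

    print([read_token(t) for t in "I have lived_6 for_3 41 years".split()
           if TOKEN.match(t)])
    # [('I', 1), ('have', 1), ('lived', 6), ('for', 3), ('years', 1)]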

  -- Ben G



      On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:

>> IMHO that is an almost hopeless approach, ambiguity is too 
integral to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always 
use big words and never use small words and/or you use a specific phrase as a 
"word".  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

 

Re: [agi] constructivist issues

2008-10-23 Thread Mark Waser
But, I still do not agree with the way you are using the incompleteness 
theorem.


Um.  OK.  Could you point to a specific example where you disagree?  I'm a 
little at a loss here . . . .


It is important to distinguish between two different types of 
incompleteness.
1. Normal Incompleteness-- a logical theory fails to completely specify 
something.
2. Godelian Incompleteness-- a logical theory fails to completely specify 
something, even though we want it to.


I'm also not getting this.  If I read the words, it looks like the 
difference between Normal and Godelian incompleteness is based upon our 
desires.  I think I'm having a complete disconnect with your intended 
meaning.



However, it seems like all you need is type 1 completeness for what

you are saying.

So, Godel's theorem is way overkill here in my opinion.


Um.  OK.  So I used a bazooka on a fly?  If it was a really pesky fly and I 
didn't destroy anything else, is that wrong?  :-)


It seems as if you're not arguing with my conclusion but saying that my 
arguments were way better than they needed to be (like I'm being 
over-efficient?) . . . .


= = = = =

Seriously though, I'm having a complete disconnect here.  Maybe I'm just 
having a bad morning but . . .  huh?   :-)
If I read the words, all I'm getting is that you disagree with the way that 
I am using the theory because the theory is overkill for what is necessary.


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 9:05 PM
Subject: Re: [agi] constructivist issues


Mark,

I own and have read the book-- but my first introduction to Godel's
Theorem was Douglas Hofstadter's earlier work, Godel Escher Bach.
Since I had already been guided through the details of the proof (and
grappled with the consequences), to be honest chapter 10 you refer to
was a little boring :).

But, I still do not agree with the way you are using the incompleteness 
theorem.


It is important to distinguish between two different types of 
incompleteness.


1. Normal Incompleteness-- a logical theory fails to completely
specify something.
2. Godelian Incompleteness-- a logical theory fails to completely
specify something, even though we want it to.

Logicians always mean type 2 incompleteness when they use the term. To
formalize the difference between the two, the measuring stick of
"semantics" is used. If a logic's provably-true statements don't match
up to its semantically-true statements, it is incomplete.

However, it seems like all you need is type 1 completeness for what
you are saying. Nobody claims that there is a complete, well-defined
semantics for natural language against which we could measure the
"provably-true" (whatever THAT would mean).

So, Godel's theorem is way overkill here in my opinion.

--Abram

On Wed, Oct 22, 2008 at 7:48 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Most of what I was thinking of and referring to is in Chapter 10.  Gödel's
Quintessential Strange Loop (pages 125-145 in my version) but I would
suggest that you really need to read the shorter Chapter 9. Pattern and
Provability (pages 113-122) first.

I actually had them conflated into a single chapter in my memory.

I think that you'll enjoy them tremendously.

- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 22, 2008 4:19 PM
Subject: Re: [agi] constructivist issues



Mark,

Chapter number please?

--Abram

On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser <[EMAIL PROTECTED]> wrote:


Douglas Hofstadter's newest book I Am A Strange Loop (currently 
available

from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM)
has
an excellent chapter showing Godel in syntax and semantics.  I highly
recommend it.

The upshot is that while it is easily possible to define a complete
formal
system of syntax, that formal system can always be used to convey
something
(some semantics) that is (are) outside/beyond the system -- OR, to
paraphrase -- meaning is always incomplete because it can always be 
added

to
even inside a formal system of syntax.

This is why I contend that language translation ends up being
AGI-complete
(although bounded subsets clearly don't need to be -- the question is
whether you get a usable/useful subset more easily with or without first
creating a seed AGI).

- Original Message - From: "Abram Demski" 
<[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser <[EMAIL PROTECTED]>
wrote:


It looks like a

Re: AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-23 Thread Mark Waser

I have already proved something stronger


What would you consider your best reference/paper outlining your arguments? 
Thanks in advance.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 8:55 PM
Subject: Re: AW: AW: [agi] Language learning (was Re: Defining AGI)



--- On Wed, 10/22/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:


You make the implicit assumption that a natural language
understanding system will pass the turing test. Can you prove this?


If you accept that a language model is a probability distribution over 
text, then I have already proved something stronger. A language model 
exactly duplicates the distribution of answers that a human would give. 
The output is indistinguishable by any test. In fact a judge would have 
some uncertainty about other people's language models. A judge could be 
expected to attribute some errors in the model to normal human variation.



Furthermore,  it is just an assumption that the ability to
have and to apply
the rules are really necessary to pass the turing test.

For these two reasons, you still haven't shown 3a and
3b.


I suppose you are right. Instead of encoding mathematical rules as a 
grammar, with enough training data you can just code all possible 
instances that are likely to be encountered. For example, instead of a 
grammar rule to encode the commutative law of addition,


 5 + 3 = a + b = b + a = 3 + 5

a model with a much larger training data set could just encode instances 
with no generalization:


 12 + 7 = 7 + 12
 92 + 0.5 = 0.5 + 92
 etc.

I believe this is how Google gets away with brute force n-gram statistics 
instead of more sophisticated grammars. Its language model is probably 
10^5 times larger than a human model (10^14 bits vs 10^9 bits). Shannon 
observed in 1949 that random strings generated by n-gram models of English 
(where n is the number of either letters or words) look like natural 
language up to length 2n. For a typical human sized model (1 GB text), n 
is about 3 words. To model strings longer than 6 words we would need more 
sophisticated grammar rules. Google can model 5-grams (see 
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html ) 
, so it is able to generate and recognize (thus appear to understand) 
sentences up to about 10 words.
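
(A minimal sketch of the kind of word n-gram model described above -- a toy 
trigram table, obviously nothing like Google's actual system:)

    import random
    from collections import defaultdict

    def train_trigrams(text):
        """Map each pair of consecutive words to the words observed after it."""
        words = text.split()
        model = defaultdict(list)
        for a, b, c in zip(words, words[1:], words[2:]):
            model[(a, b)].append(c)
        return model

    def generate(model, seed, length=12):
        """Random walk over the table; locally plausible, globally aimless."""
        out = list(seed)                       # seed is a pair of words
        for _ in range(length):
            nexts = model.get(tuple(out[-2:]))
            if not nexts:
                break
            out.append(random.choice(nexts))
        return " ".join(out)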



By the way:
The turing test must convince 30% of the people.
Today there is a system which can already convince 25%

http://www.sciencedaily.com/releases/2008/10/081013112148.htm


It would be interesting to see a version of the Turing test where the 
human confederate, machine, and judge all have access to a computer with 
an internet connection. I wonder if this intelligence augmentation would 
make the test easier or harder to pass?




-Matthias


> 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5
but
> you have not shown that
> 3a) that a language understanding system
necessarily(!) has
> this rules
> 3b) that a language understanding system
necessarily(!) can
> apply such rules

It must have the rules and apply them to pass the Turing
test.

-- Matt Mahoney, [EMAIL PROTECTED]



-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
My point was meant to be that  control is part of 
effective concept creation.  You had phrased it as if concept creation was an 
additional necessity on top of inference control.

But I think we're reaching the point of silliness here . . . .   
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 6:35 PM
  Subject: **SPAM** Re: AW: [agi] If your AGI can't learn to play chess it is 
no AGI



  all these words ...  "inference", "control", "concept", "creation" ... are 
inadequately specified in natural language so misunderstandings will be easy to 
come by.  However, I don't have time to point out the references to my 
particular intended definitions..

  I did not mean to imply that the control involved would be entirely in the 
domain of inference, even when inference is broadly construed... just that 
control of inference, broadly construed, is a key aspect...

  ben g


  On Wed, Oct 22, 2008 at 5:41 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> No system can make those kinds of inventions without sophisticated 
inference control.  Concept creation of course is required also, though.

I'd argue that this is bad phrasing.  

Sure, effective control is necessary to create the concepts that you need 
to fulfill your goals (as opposed to far too many random unuseful concepts) . . 
. . 

But it isn't "Concept creation of course is required also", it really is 
"Effective control is necessary for effective concept creation which is 
necessary for effective goal fulfillment."

And assuming that control must be sophisticated and necessarily entirely in 
the realm of inference are just assumptions . . . .   :-)

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 3:54 PM
  Subject: **SPAM** Re: AW: [agi] If your AGI can't learn to play chess it 
is no AGI







>> Mathematics, though, is interesting in other ways.  I don't believe 
that much of mathematics involves the logical transformations performed in 
proof steps.  A system that invents new fields of mathematics, new terms, new 
mathematical "ideas" -- that is truly interesting.  Inference control is 
boring, but inventing mathematical induction, complex numbers, or ring theory 
-- THAT is AGI-worthy.
 
Is this different from generic concept formulation and explanation 
(just in a slightly different domain)?


  No system can make those kinds of inventions without sophisticated 
inference control.  Concept creation of course is required also, though.

  -- Ben
   



--
agi | Archives  | Modify Your Subscription   



  agi | Archives  | Modify Your Subscription  




  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Most of what I was thinking of and referring to is in Chapter 10.  Gödel's 
Quintessential Strange Loop (pages 125-145 in my version) but I would 
suggest that you really need to read the shorter Chapter 9. Pattern and 
Provability (pages 113-122) first.


I actually had them conflated into a single chapter in my memory.

I think that you'll enjoy them tremendously.

- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 4:19 PM
Subject: Re: [agi] constructivist issues



Mark,

Chapter number please?

--Abram

On Wed, Oct 22, 2008 at 1:16 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

Douglas Hofstadter's newest book I Am A Strange Loop (currently available
from Amazon for $7.99 -
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) 
has

an excellent chapter showing Godel in syntax and semantics.  I highly
recommend it.

The upshot is that while it is easily possible to define a complete 
formal
system of syntax, that formal system can always be used to convey 
something

(some semantics) that is (are) outside/beyond the system -- OR, to
paraphrase -- meaning is always incomplete because it can always be added 
to

even inside a formal system of syntax.

This is why I contend that language translation ends up being 
AGI-complete

(although bounded subsets clearly don't need to be -- the question is
whether you get a usable/useful subset more easily with or without first
creating a seed AGI).

- Original Message - From: "Abram Demski" <[EMAIL PROTECTED]>
To: 
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser <[EMAIL PROTECTED]> 
wrote:


It looks like all this "disambiguation" by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface levels of syntax and
semantics, to remember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.


That's an excellent criticism but not the intent.

Godel's Incompleteness Theorem means that you will be forever building . . . .
.

All that disambiguation does is provides a solid, commonly-agreed upon
foundation to build from.

English and all natural languages are *HARD*.  They are not optimal for
simple understanding particularly given the realms we are currently in
and
ambiguity makes things even worse.

Languages have so many ambiguities because of the way that they (and
concepts) develop.  You see something new, you grab the nearest analogy
and
word/label and then modify it to fit.  That's why you then later need 
the

much longer words and very specific scientific terms and names.

Simple language is what you need to build the more specific complex
language.  Having an unambiguous constructed language is simply a
template
or mold that you can use as scaffolding while you develop NLU. 
Children

start out very unambiguous and concrete and so should we.

(And I don't believe in statistical techniques unless you have the
resources
of Google or AIXI)



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?&;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
>> No system can make those kinds of inventions without sophisticated inference 
>> control.  Concept creation of course is required also, though.

I'd argue that this is bad phrasing.  

Sure, effective control is necessary to create the concepts that you need to 
fulfill your goals (as opposed to far too many random unuseful concepts) . . . 
. 

But it isn't "Concept creation of course is required also", it really is 
"Effective control is necessary for effective concept creation which is 
necessary for effective goal fulfillment."

And assuming that control must be sophisticated and necessarily entirely in the 
realm of inference are just assumptions . . . .   :-)

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 3:54 PM
  Subject: **SPAM** Re: AW: [agi] If your AGI can't learn to play chess it is 
no AGI







>> Mathematics, though, is interesting in other ways.  I don't believe that 
much of mathematics involves the logical transformations performed in proof 
steps.  A system that invents new fields of mathematics, new terms, new 
mathematical "ideas" -- that is truly interesting.  Inference control is 
boring, but inventing mathematical induction, complex numbers, or ring theory 
-- THAT is AGI-worthy.
 
Is this different from generic concept formulation and explanation (just in 
a slightly different domain)?


  No system can make those kinds of inventions without sophisticated inference 
control.  Concept creation of course is required also, though.

  -- Ben
   



--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


RE: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
>> Still, using chess as a test case may not be useless; a system that produces 
>> a convincing story about concept formation in the chess domain (that is, 
>> that invents concepts for pinning, pawn chains, speculative sacrifices in 
>> exchange for piece mobility, zugzwang, and so on without an identifiable 
>> bias toward these things) would at least be interesting to those interested 
>> in AGI.

I believe that generic concept formation and explanation is an AGI-complete 
problem.  Would you agree or disagree?


>> Mathematics, though, is interesting in other ways.  I don't believe that 
>> much of mathematics involves the logical transformations performed in proof 
>> steps.  A system that invents new fields of mathematics, new terms, new 
>> mathematical "ideas" -- that is truly interesting.  Inference control is 
>> boring, but inventing mathematical induction, complex numbers, or ring 
>> theory -- THAT is AGI-worthy.
 
Is this different from generic concept formulation and explanation (just in a 
slightly different domain)?


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Mark Waser
A couple of distinctions that I think would be really helpful for this 
discussion . . . . 

There is a profound difference between learning to play chess legally and 
learning to play chess well.

There is an equally profound difference between discovering how to play chess 
well and being taught to play chess well.

Personally, I think that a minimal AGI should be able to be taught to play 
chess reasonably well (i.e. about how well an average human would play after 
being taught the rules and playing a reasonable number of games with 
hints/pointers/tutoring provided) at about the same rate as a human when given 
the same assistance as that human.

Given that grandmasters don't learn solely from chess-only examples without 
help or without analogies and strategies from other domains, I don't see why an 
AGI should be forced to operate under those constraints.  Being taught is much 
faster and easier than discovering on your own.  Translating an analogy or 
transferring a strategy from another domain is much faster than discovering 
something new or developing something from scratch.  Why are we crippling our 
AGI in the name of simplicity?

(And Go is obviously the same)



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> Well, I am confident my approach with subscripts to handle disambiguation 
>> and reference resolution would work, in conjunction with the existing 
>> link-parser/RelEx framework...
>> If anyone wants to implement it, it seems like "just" some hacking with the 
>> open-source Java RelEx code...

Like what I called a "semantically-driven English->subset translator"?

Oh, I'm pretty confident that it will work as well . . . . after the La Brea tar pit of implementations . . . . (exactly how little semantic-related coding do you think will be necessary? ;-)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 1:06 PM
  Subject: Re: [OpenCog] Re: [agi] constructivist issues



  Well, I am confident my approach with subscripts to handle disambiguation and 
reference resolution would work, in conjunction with the existing 
link-parser/RelEx framework...

  If anyone wants to implement it, it seems like "just" some hacking with the 
open-source Java RelEx code...

  ben g


  On Wed, Oct 22, 2008 at 12:59 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently. 
 If I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with 
Simplified English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled 
in a similar way, e.g. by defining an ontology of preposition meanings like 
with_1, with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing 
resources into a preposition-meaning ontology like this a while back ... the 
so-called PrepositionWordNet ... or as it eventually came to be called the 
LARDict or LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, 
so the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 
urinated in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an 
AI that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G
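
A minimal sketch of how the subscript convention above might be read off mechanically, before any semantic processing (Python; the token format and the suppress-all-_1's default come straight from Ben's examples, while the function name, the punctuation handling, and the decision to skip the reference-resolution subscripts like that_v2 are illustrative assumptions, not actual RelEx code):

import re

# Sketch only: a token looks like "with_2" or "lived_6"; a bare word defaults to
# sense 1, matching the "suppress all _1's" convention described above.
# Reference-resolution subscripts (that_v2, jerks_v1) are not handled here.
TOKEN = re.compile(r"^([A-Za-z]+)(?:_(\d+))?$")

def read_controlled_english(sentence):
    """Split a subscripted controlled-English sentence into (lemma, sense) pairs."""
    pairs = []
    for raw in sentence.split():
        word = raw.strip(".,;!?")
        match = TOKEN.match(word)
        if match is None:
            continue                      # anything else is outside this sketch
        lemma, sense = match.group(1), int(match.group(2) or 1)
        pairs.append((lemma, sense))
    return pairs

print(read_controlled_english("I ate dinner with_2 my fork"))
# -> [('I', 1), ('ate', 1), ('dinner', 1), ('with', 2), ('my', 1), ('fork', 1)]

The actual proposal is to do this inside the RelEx pipeline rather than as a standalone pass, but the convention itself really is that simple: every content word carries an explicit pointer into a fixed sense inventory (WordNet senses for ordinary words, the preposition ontology for with_1, with_2, and so on).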



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> IMHO that is an almost hopeless approach, ambiguity is too integral 
to English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use 
big words and never use small words and/or you use a specific phrase as a 
"word".  Ambiguous prepositions just disambiguate to one of 
three/four/five/more possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more "bang for the buck" with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

>> If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you 
can come up with an unambiguous English word or very short phrase for each Lojban word.

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
Douglas Hofstadter's newest book I Am A Strange Loop (currently available 
from Amazon for $7.99 - 
http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/B001FA23HM) has 
an excellent chapter showing Godel in syntax and semantics.  I highly 
recommend it.


The upshot is that while it is easily possible to define a complete formal 
system of syntax, that formal system can always be used to convey something 
(some semantics) that lies outside/beyond the system -- OR, to paraphrase -- 
meaning is always incomplete because it can always be added to even inside a 
formal system of syntax.


This is why I contend that language translation ends up being AGI-complete 
(although bounded subsets clearly don't need to be -- the question is 
whether you get a usable/useful subset more easily with or without first 
creating a seed AGI).


- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 12:38 PM
Subject: Re: [agi] constructivist issues



Mark,

The way you invoke Godel's Theorem is strange to me... perhaps you
have explained your argument more fully elsewhere, but as it stands I
do not see your reasoning.

--Abram

On Wed, Oct 22, 2008 at 12:20 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

It looks like all this "disambiguation" by moving to a more formal
language is about sweeping the problem under the rug, removing the
need for uncertain reasoning from surface levels of syntax and
semantics, to remember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.


That's an excellent criticism but not the intent.

Godel's Incompleteness Theorem means that you will be forever building . . . .

All that disambiguation does is provide a solid, commonly agreed-upon foundation to build from.

English and all natural languages are *HARD*.  They are not optimal for simple understanding, particularly given the realms we are currently in, and ambiguity makes things even worse.

Languages have so many ambiguities because of the way that they (and concepts) develop.  You see something new, you grab the nearest analogy and word/label and then modify it to fit.  That's why you then later need the much longer words and very specific scientific terms and names.

Simple language is what you need to build the more specific complex language.  Having an unambiguous constructed language is simply a template or mold that you can use as scaffolding while you develop NLU.  Children start out very unambiguous and concrete and so should we.

(And I don't believe in statistical techniques unless you have the resources of Google or AIXI)









---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [OpenCog] Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I think this would be a relatively pain-free way to communicate with an AI 
>> that lacks the common sense to carry out disambiguation and reference 
>> resolution reliably.   Also, the log of communication would provide a nice 
>> training DB for it to use in studying disambiguation.

Awesome.  Like I said, it's a piece of something that I'm trying currently.  If 
I get positive results, I'm certainly not going to hide the fact.  ;-)

(or, it could turn into a learning experience like my attempts with Simplified 
English and Basic English :-)
  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Cc: [EMAIL PROTECTED] 
  Sent: Wednesday, October 22, 2008 12:27 PM
  Subject: [OpenCog] Re: [agi] constructivist issues



  This is the standard Lojban dictionary

  http://jbovlaste.lojban.org/

  I am not so worried about word meanings, they can always be handled via 
reference to WordNet via usages like run_1, run_2, etc. ... or as you say by 
using rarer, less ambiguous words

  Prepositions are more worrisome, however, I suppose they can be handled in a 
similar way, e.g. by defining an ontology of preposition meanings like with_1, 
with_2, with_3, etc.

  In fact we had someone spend a couple months integrating existing resources 
into a preposition-meaning ontology like this a while back ... the so-called 
PrepositionWordNet ... or as it eventually came to be called the LARDict or 
LogicalArgumentRelationshipDictionary ...

  I think it would be feasible to tweak RelEx to recognize these sorts of 
subscripts, and in this way to recognize a highly controlled English that would 
be unproblematic to map semantically...

  We would then say e.g.

  I ate dinner with_2 my fork

  I live in_2 Maryland

  I have lived_6 for_3 41 years

  (where I suppress all _1's, so that e.g. ate means ate_1)

  Because, RelEx already happily parses the syntax of all simple sentences, so 
the only real hassle to deal with is disambiguation.   We could use similar 
hacking for reference resolution, temporal sequencing, etc.

  The terrorists_v1 robbed_v2 my house.   After that_v2, the jerks_v1 urinated 
in_3 my yard.  

  I think this would be a relatively pain-free way to communicate with an AI 
that lacks the common sense to carry out disambiguation and reference 
resolution reliably.   Also, the log of communication would provide a nice 
training DB for it to use in studying disambiguation.

  -- Ben G



  On Wed, Oct 22, 2008 at 12:00 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big 
words and never use small words and/or you use a specific phrase as a "word".  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic 
English) actually *favored* the small tremendously over-used/ambiguous words 
(because you got so much more "bang for the buck" with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  

>> If you want to take this sort of approach, you'd better start with 
Lojban instead  Learning Lojban is a pain but far less pain than you'll 
have trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can 
come up with an unambiguous English word or very short phrase for each Lojban 
word.  If you can do it, my approach will work and will have the advantage that 
the output can be read by anyone (i.e. it's the equivalent of me having done it 
in Lojban and then added a Lojban -> English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English->subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to 
need to start with a formal language that is a disambiguated subset of English

Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

What I meant was, it seems like humans are
"logically complete" in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.


I like the phrase "logically complete".

The way that I like to think about it is that we have the necessary seed of 
whatever intelligence/competence is, a seed that can be logically extended to 
cover all circumstances.


We may not have the personal time or resources to do so but given infinite 
time and resources there is no block on the path from what we have to 
getting there.


Note, however, that it is my understanding that a number of people on this 
list do not agree with this statement (feel free to chime in with your 
reasons why, folks).



- Original Message - 
From: "Abram Demski" <[EMAIL PROTECTED]>

To: 
Sent: Wednesday, October 22, 2008 12:20 PM
Subject: Re: [agi] constructivist issues



Too many responses for me to comment on everything! So, sorry to those
I don't address...

Ben,

When I claim a mathematical entity exists, I'm saying loosely that
meaningful statements can be made using it. So, I think "meaning" is
more basic. I mentioned already what my current definition of meaning
is: a statement is meaningful if it is associated with a computable
rule of deduction that it can use to operate on other (meaningful)
statements. This is in contrast to positivist-style definitions of
meaning, that would instead require a computable test of truth and/or
falsehood.

So, a statement is meaningful if it has procedural deductive meaning.
We *understand* a statement if we are capable of carrying out the
corresponding deductive procedure. A statement is *true* if carrying
out that deductive procedure only produces more true statements. We
*believe* a statement if we not only understand it, but proceed to
apply its deductive procedure.

There is of course some basic level of meaningful statements, such as
sensory observations, so that this is a working recursive definition
of meaning and truth.

By this definition of meaning, any statement in the arithmetical
hierarchy is meaningful (because each statement can be represented by
computable consequences on other statements in the arithmetical
hierarchy). I think some hyperarithmetical truths are captured as
well. I am more doubtful about it capturing anything beyond the first
level of the analytic hierarchy, and general set-theoretic discourse
seems far beyond its reach. Regardless, the definition of meaning
makes a very large number of uncomputable truths nonetheless
meaningful.

Russel,

I think both Ben and I would approximately agree with everything you
said, but that doesn't change our disagreeing with each other :).

Mark,

Good call... I shouldn't be talking like I think it is terrifically
unlikely that some more-intelligent alien species would find humans
mathematically crude. What I meant was, it seems like humans are
"logically complete" in some sense. In practice we are greatly limited
by memory and processing speed and so on; but I *don't* think we're
limited by lacking some important logical construct. It would be like
us discovering some alien species whose mathematicians were able to
understand each individual case of mathematical induction, but were
unable to comprehend the argument for accepting it as a general
principle, because they lack the abstraction. Something like that is
what I find implausible.

--Abram
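
A very literal rendering of the meaningful/believe definitions above as a toy data structure might look something like this (Python; the class, the sensor example, and the rule are invented purely for illustration and are not anything Abram has actually proposed in code):

class Statement:
    """A statement is meaningful if it carries a computable deduction rule
    that operates on other (meaningful) statements."""
    def __init__(self, content, deduce=None):
        self.content = content
        self.deduce = deduce          # callable: list of Statements -> list of Statements

    def is_meaningful(self):
        return self.deduce is not None

def believe(stmt, knowledge):
    """'Believe' = not merely understand the procedure, but actually apply it."""
    knowledge.extend(stmt.deduce(knowledge))

# Base level: a sensory observation, meaningful with a trivial (empty) deduction rule.
obs = Statement("sensor reads 42", deduce=lambda known: [])

# A rule-like statement whose deductive procedure operates on other statements.
rule = Statement(
    "any reading implies the sensor is powered",
    deduce=lambda known: [Statement("the sensor is powered", deduce=lambda k: [])
                          for s in known if s.content.startswith("sensor reads")])

kb = [obs]
believe(rule, kb)
print([s.content for s in kb])   # ['sensor reads 42', 'the sensor is powered']

Truth then rides on the same machinery (a statement is true if running its procedure on true statements only ever yields more true statements), which is how the definition reaches statements whose truth has no computable test.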








---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> Come by the house, we'll drop some acid together and you'll be convinced ;-)

Been there, done that.  Just because some logically inconsistent thoughts have 
no value doesn't mean that all logically inconsistent thoughts have no value.

Not to mention the fact that hallucinogens, if not the subsequently warped 
thoughts, do have the serious value of raising your mental Boltzmann 
temperature.

  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues





  On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

>> I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and then not have scientific or engineering value.

  Come by the house, we'll drop some acid together and you'll be convinced ;-)
   



--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
(joke)

What?  You don't love me any more?  


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues



  (joke)


  On Wed, Oct 22, 2008 at 11:11 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:




On Wed, Oct 22, 2008 at 10:51 AM, Mark Waser <[EMAIL PROTECTED]> wrote:

  >> I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

  It doesn't seem to make sense that something would have personal value and then not have scientific or engineering value.

Come by the house, we'll drop some acid together and you'll be convinced ;-)
 





  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "A human being should be able to change a diaper, plan an invasion, butcher a 
hog, conn a ship, design a building, write a sonnet, balance accounts, build a 
wall, set a bone, comfort the dying, take orders, give orders, cooperate, act 
alone, solve equations, analyze a new problem, pitch manure, program a 
computer, cook a tasty meal, fight efficiently, die gallantly. Specialization 
is for insects."  -- Robert Heinlein




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> IMHO that is an almost hopeless approach, ambiguity is too integral to 
>> English or any natural language ... e.g preposition ambiguity

Actually, I've been making pretty good progress.  You just always use big words 
and never use small words and/or you use a specific phrase as a "word".  
Ambiguous prepositions just disambiguate to one of three/four/five/more 
possible unambiguous words/phrases.

The problem is that most previous subsets (Simplified English, Basic English) 
actually *favored* the small tremendously over-used/ambiguous words (because 
you got so much more "bang for the buck" with them).

Try only using big unambiguous words and see if you still have the same 
opinion.  
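
For a concrete toy illustration of the big-word idea, a substitution table can map each sense of an over-used little word to a longer, unambiguous replacement; the entries below are invented for the example and not taken from Simplified or Basic English:

# Invented entries, purely to illustrate "always use the big unambiguous word".
DISAMBIGUATE = {
    ("with", "instrument"):    "by_means_of",
    ("with", "accompaniment"): "in_the_company_of",
    ("with", "attribute"):     "possessing",
    ("run", "operate"):        "operate",
    ("run", "move_fast"):      "sprint",
}

def rewrite(word, sense):
    """Swap an ambiguous small word for its unambiguous big-word counterpart."""
    return DISAMBIGUATE.get((word, sense), word)

print(rewrite("with", "instrument"))       # by_means_of        ("I ate with a fork")
print(rewrite("with", "accompaniment"))    # in_the_company_of  ("I ate with a friend")

The hard part, of course, is the part the table hides: deciding which sense applies in the first place, which is exactly the job of the semantically-driven English->subset translator discussed below.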

>> If you want to take this sort of approach, you'd better start with Lojban 
>> instead  Learning Lojban is a pain but far less pain than you'll have 
>> trying to make a disambiguated subset of English.

My first reaction is . . . . Take a Lojban dictionary and see if you can come 
up with an unambiguous English word or very short phrase for each Lojban word.  
If you can do it, my approach will work and will have the advantage that the 
output can be read by anyone (i.e. it's the equivalent of me having done it in 
Lojban and then added a Lojban -> English translation on the end) though the 
input is still *very* problematical (thus the need for a semantically-driven 
English->subset translator).  If you can't do it, then my approach won't work.

Can you do it?  Why or why not?  If you can, do you still believe that my 
approach won't work?  Oh, wait . . . . a Lojban-to-English dictionary *does* 
attempt to come up with an unambiguous English word or very short phrase for 
each Lojban word.  :-)

Actually, hmmm . . . . a Lojban dictionary would probably help me focus my 
efforts a bit better and highlight things that I may have missed . . . . do you 
have a preferred dictionary or resource?  (Google has too many for me to do a 
decent perusal quickly)
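
The dictionary test itself is easy to mechanize once a gloss list is in hand. A rough sketch (Python; the handful of entries below are stand-ins typed in for the example rather than an export from jbovlaste, and the collision is planted on purpose):

from collections import defaultdict

# Stand-in data only; a real check would load the full gloss list from a
# Lojban dictionary such as jbovlaste.
glosses = {
    "klama": "goer",
    "litru": "traveler",
    "bajra": "runner",
    "cadzu": "walker",
    "dzukla": "walker",      # planted collision for the demo
}

by_gloss = defaultdict(list)
for lojban_word, english_gloss in glosses.items():
    by_gloss[english_gloss].append(lojban_word)

collisions = {gloss: words for gloss, words in by_gloss.items() if len(words) > 1}
print(collisions)    # {'walker': ['cadzu', 'dzukla']} -- these two need distinct phrases

If the collision list can be driven to empty using short, natural English words or phrases, the approach works; if whole families of Lojban words keep fighting over the same few English words, it won't.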



  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Wednesday, October 22, 2008 11:11 AM
  Subject: Re: [agi] constructivist issues







Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 


  IMHO that is an almost hopeless approach, ambiguity is too integral to 
English or any natural language ... e.g preposition ambiguity

  If you want to take this sort of approach, you'd better start with Lojban 
instead  Learning Lojban is a pain but far less pain than you'll have 
trying to make a disambiguated subset of English.

  ben g 




--
agi | Archives  | Modify Your Subscription  



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I disagree, and believe that I can think X: "This is a thought (T) that is 
>> way too complex for me to ever have."
>> Obviously, I can't think T and then think X, but I might represent T as a 
>> combination of myself plus a notebook or some other external media. Even if 
>> I only observe part of T at once, I might appreciate that it is one thought 
>> and believe (perhaps in error) that I could never think it.
>> I might even observe T in action, if T is the result of billions of 
>> measurements, comparisons and calculations in a computer program.
>> Isn't it just like thinking "This is an image that is way too detailed for 
>> me to ever see"?

Excellent!  This is precisely how I feel about intelligence . . . .  (and why 
we *can* understand it even if we can't hold the totality of it -- or fully 
predict it -- sort of like the weather :-)




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> Well, if you are a computable system, and if by "think" you mean "represent 
>> accurately and internally" then you can only think that odd thought via 
>> being logically inconsistent... ;-)

True -- but why are we assuming *internally*?  Drop that assumption as Charles 
clearly did and there is no problem.

It's like infrastructure . . . . I don't have to know all the details of 
something to use it under normal circumstances, though I frequently need to know 
the details if I'm doing something odd with it or looking for extreme 
performance, and I definitely need to know the details if I'm 
diagnosing/fixing/debugging it -- but I can always learn them as I go . . . . 


  - Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, October 21, 2008 11:26 PM
  Subject: Re: [agi] constructivist issues



  Well, if you are a computable system, and if by "think" you mean "represent 
accurately and internally" then you can only think that odd thought via being 
logically inconsistent... ;-)




  On Tue, Oct 21, 2008 at 11:23 PM, charles griffiths <[EMAIL PROTECTED]> wrote:

  I disagree, and believe that I can think X: "This is a thought (T) 
that is way too complex for me to ever have."

  Obviously, I can't think T and then think X, but I might represent T 
as a combination of myself plus a notebook or some other external media. Even 
if I only observe part of T at once, I might appreciate that it is one thought 
and believe (perhaps in error) that I could never think it.

  I might even observe T in action, if T is the result of billions of 
measurements, comparisons and calculations in a computer program.

  Isn't it just like thinking "This is an image that is way too 
detailed for me to ever see"?

  Charles Griffiths

  --- On Tue, 10/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] constructivist issues
To: agi@v2.listbox.com
Date: Tuesday, October 21, 2008, 7:56 PM



I am a Peircean pragmatist ...

I have no objection to using infinities in mathematics ... they can 
certainly be quite useful.  I'd rather use differential calculus to do 
calculations, than do everything using finite differences.

It's just that, from a science perspective, these mathematical 
infinities have to be considered finite formal constructs ... they don't exist 
except in this way ...

I'm not going to claim the pragmatist perspective is the only 
subjectively meaningful one.  But so far as I can tell it's the only useful one 
for science and engineering...

To take a totally different angle, consider the thought X = "This 
is a thought that is way too complex for me to ever have"

Can I actually think X?

Well, I can understand the *idea* of X.  I can manipulate it 
symbolically and formally.  I can reason about it and empathize with it by 
analogy to "A thought that is way too complex for my three-year-old past-self 
to have ever had" , and so forth.

But it seems I can't ever really think X, except by being logically 
inconsistent within that same thought ... this is the Godel limitation applied 
to my own mind...

I don't want to diss the personal value of logically inconsistent 
thoughts.  But I doubt their scientific and engineering value.

-- Ben G




On Tue, Oct 21, 2008 at 10:43 PM, Abram Demski <[EMAIL PROTECTED]> 
wrote:

  Ben,

  How accurate would it be to describe you as a finitist or
  ultrafinitist? I ask because your view about restricting 
quantifiers
  seems to reject even the infinities normally allowed by
  constructivists.

  --Abram



  ---
  agi
  Archives: https://www.listbox.com/member/archive/303/=now
  RSS Feed: https://www.listbox.com/member/archive/rss/303/

  Modify Your Subscription: https://www.listbox.com/member/?&;

  Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be 
first overcome "  - Dr Samuel Johnson





  agi | Archives  | Modify Your Subscription  
 




  agi | Archives  | Modify Your Subscription  




  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections

Re: AW: AW: [agi] Re: Defining AGI

2008-10-22 Thread Mark Waser

I'm also confused. This has been a strange thread. People of average
and around-average intelligence are trained as lab technicians or
database architects every day. Many of them are doing real science.
Perhaps a person with down's syndrome would do poorly in one of these
largely practical positions. Perhaps.

The consensus seems to be that there is no way to make a fool do a
scientist's job. But he can do parts of it. A scientist with a dozen
fools at hand could be a great deal more effective than a rival with
none, whereas a dozen fools on their own might not be expected to do
anything at all. So it is complicated.


Or maybe another way to rephrase it is to combine it with another thread . . . .


Any individual piece of science is understandable/teachable to (or my 
original point -- verifiable or able to be validated by) any general 
intelligence, but the totality of science combined with the world is far too 
large to . . . . (which is effectively Ben's point) 





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser

(1) We humans understand the semantics of formal system X.


No.  This is the root of your problem.  For example, replace "formal system 
X" with "XML".  Saying that "We humans understand the semantics of XML" 
certainly doesn't work, which is why I would argue that natural language 
understanding is AGI-complete (i.e. by the time all the RDF, OWL, and other 
ontology work is completed -- you'll have an AGI).  Any formal system can 
always be extended *within its defined syntax* to have any meaning.  That 
is the essence of Godel's Incompleteness Theorem.


It's also sort of the basis for my argument with Dr. Matthias Heger. 
Semantics are never finished except when your model of the world is finished 
(including all possibilities and infinitudes) so language understanding 
can't be simple and complete.


Personally, rather than starting with NLP, I think that we're going to need 
to start with a formal language that is a disambiguated subset of English 
and figure out how to use our world model/knowledge to translate English to 
this disambiguated subset -- and then we can build from there.  (or maybe 
this makes Heger's argument for him . . . .  ;-)





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> You have not convinced me that you can do anything a computer can't do.
>> And, using language or math, you never will -- because any finite set of 
>> symbols
>> you can utter, could also be uttered by some computational system.
>> -- Ben G

Can we pin this somewhere?

(Maybe on Penrose?  ;-)


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


Re: [agi] constructivist issues

2008-10-22 Thread Mark Waser
>> I don't want to diss the personal value of logically inconsistent thoughts.  
>> But I doubt their scientific and engineering value.

It doesn't seem to make sense that something would have personal value and then not have scientific or engineering value.

I can sort of understand science, if you're interpreting science as looking for the 
final correct/optimal value, but engineering generally goes for either "good 
enough" or "the best of the currently known available options", and anything 
that really/truly has personal value would seem to have engineering value.





---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com


  1   2   3   4   5   6   7   8   9   >