Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI
Sorry, I'm just going to have to choose to be ignored on this topic ;-) ... I
have too much AGI stuff to do to be spending so much time chatting on mailing
lists ... and I've already published my thoughts on philosophy
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 10:55 AM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI
No, it's really just that I've been spending too
should have + CODIFICATION added to assist the learning of future group members).
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Monday, October 27, 2008 12:07 PM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI
I think you're converging on better and better wording ... however, I think
the (actually explicit) assumption underlying the whole
scientific method is that the same causes produce the same results.
That's determinism/inevitabilism, and it's only one philosophy of science, if
arguably still the major one. [One set of causes produces one set of
effects]. There's an
The notion of cause is not part of any major scientific theory, actually.
It's a folk-psychology concept that humans use to help them intuitively
understand science and other things. There is no formal notion of causation
in physics, chemistry, biology, etc.
On Sun, Oct 26, 2008 at 5:20 AM, Mike
Ben,
So what's the connection according to you between viruses and illness/disease,
heating water and boiling, force applied to object and acceleration of object?
Ben:
The notion of cause is not part of any major scientific theory, actually.
It's a folk-psychology concept that humans use
--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:
Would it then be accurate to say SCIENCE = LEARNING +
TRANSMISSION?
Or, how about, SCIENCE = GROUP LEARNING?
Science = learning + language.
-- Matt Mahoney, [EMAIL PROTECTED]
Ben:
The notion of cause is not part of any major scientific theory, actually.
It's a folk-psychology concept that humans use to help them intuitively
understand science and other things. There is no formal notion of causation in
physics, chemistry, biology, etc.
P.S.
About F=ma ... I think Norwood Russell Hanson, in Patterns of Discovery,
wrote nicely about the multiple possible interpretations...
About the other things you mention: whether I as a human would describe
these things as causal wasn't really my point.
You can have scientific theories of the form
Cause is a time-bound notion. These processes work both ways in time
-- does a virus cause a disease? Or is the existence of a host a more
significant factor?
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
(Note, I also am unfamiliar with the absence of formal causation from
rigorous scientific fields. So I guessed)
These equations seem silly to me ... obviously science is much more than
that, as Mark should know as he has studied philosophy of science
extensively
Cognitively, the precursor for science seems to be Piaget's formal stage of
cognitive development. If you have a community of minds that have
Ben,
My first thought here is that - ironically given recent discussion - this is
entirely a *philosophical* POV.
Yes, a great deal of science takes the form below, i.e. of establishing
correlations - and v. often between biological or environmental factors and
diseases.
However, it is
--- On Sun, 10/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
So what's the connection according to you between
viruses and illness/disease, heating water and boiling,
force applied to object and acceleration of object?
Observing illness causes me to believe a virus might be present. Observing
the
distinction?
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Sunday, October 26, 2008 11:14 AM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI
These equations seem silly to me ... obviously science is much more than
Matt Mahoney wrote:
--- On Sun, 10/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
So what's the connection according to you between
viruses and illness/disease, heating water and boiling,
force applied to object and acceleration of object?
Observing illness causes me to believe a virus
optimal formalized group learning? What's
the distinction?
--- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote:
Cool. And you're saying that intelligence is not
computable. So why else
are we constantly invoking AIXI? Does it tell us anything
else about
general intelligence?
AIXI says
--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:
AIXI says that a perfect solution is not computable. However, a very
general principle of both scientific research and machine learning is to
favor simple hypotheses over complex ones. AIXI justifies these practices
in a formal
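Matt's "favor simple hypotheses" principle can be made concrete with a toy minimum-description-length comparison. This is only a sketch in the Occam/AIXI spirit, with made-up bit costs for the model, not anything from AIXI itself or from anyone's actual system:

```python
# Toy illustration of favoring simple hypotheses: among candidate
# polynomial models of the data, weight each by its description
# length (model bits + residual bits) and keep the cheapest.
# The 32-bits-per-coefficient cost is an illustrative assumption.

import numpy as np

def mdl_score(xs, ys, degree):
    """Two-part code: bits for the model plus bits for the residuals.
    Residual bits use a Gaussian code length (differences are what
    matter; the value can be negative)."""
    coeffs = np.polyfit(xs, ys, degree)
    residuals = ys - np.polyval(coeffs, xs)
    model_bits = 32 * (degree + 1)
    data_bits = len(xs) * 0.5 * np.log2(
        2 * np.pi * np.e * max(np.var(residuals), 1e-12))
    return model_bits + data_bits

# Data generated by a simple (linear) law plus a little noise.
xs = np.linspace(0, 10, 50)
ys = 3 * xs + 1 + np.random.default_rng(0).normal(0, 0.1, 50)

# Higher-degree polynomials fit the noise slightly better, but the
# extra coefficients cost more bits than they save.
best = min(range(6), key=lambda d: mdl_score(xs, ys, d))
print(best)  # → 1
```

Nothing here is AIXI proper, which weights all programs by 2^-length; the point is only that a complexity penalty plus a fit term picks out the simple hypothesis.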
On Sun, Oct 26, 2008 at 12:17 AM, Mark Waser [EMAIL PROTECTED] wrote:
No, it doesn't justify ad-hoc, even when perfect solution is
impossible, you could still have an optimal approximation under given
limitations.
So what is an optimal approximation under uncertainty? How do you know when
in the quality of implementation (i.e.
other than who performs it, of course).
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 1:41 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI
--- On Sat, 10
Vladimir said I pointed out only that it doesn't follow from AIXI that
ad-hoc is justified.
Matt used a chain of logic that went as follows:
AIXI says that a perfect solution is not computable. However, a very
general principle of both scientific research and machine learning is
to favor
On Sun, Oct 26, 2008 at 1:19 AM, Mark Waser [EMAIL PROTECTED] wrote:
You are now apparently declining to provide an algorithmic solution without
arguing that not doing so is a disproof of your statement.
Or, in other words, you are declining to prove that Matt is incorrect in
saying that we
--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:
The fact that Occam's Razor works in the real world
suggests that the
physics of the universe is computable. Otherwise AIXI
would not apply.
Hmmm. I don't get this. Occam's razor simply says
go with the simplest
explanation
--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:
Scientists choose experiments to maximize information
gain. There is no
reason that machine learning algorithms couldn't
do this, but often they don't.
Heh. I would say that scientists attempt to do this and
machine learning
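Matt's criterion, choosing the experiment that maximizes expected information gain, can be sketched in a few lines. The two-hypothesis coin setup below is a made-up illustration of the idea, not a proposal from anyone on the thread:

```python
# Pick the experiment whose expected outcome shrinks the entropy of
# the posterior over hypotheses the most.

from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

def expected_info_gain(prior, p_heads):
    """Expected entropy drop from one flip.
    p_heads[h] = P(heads | hypothesis h) under this experiment."""
    gain = 0.0
    for outcome in (0, 1):  # 1 = heads, 0 = tails
        like = [p if outcome else 1 - p for p in p_heads]
        p_o = sum(pr * l for pr, l in zip(prior, like))
        if p_o == 0:
            continue
        post = [pr * l / p_o for pr, l in zip(prior, like)]
        gain += p_o * (entropy(prior) - entropy(post))
    return gain

prior = [0.5, 0.5]  # H0: coin is fair; H1: coin lands heads 90% of the time
gain_suspect = expected_info_gain(prior, [0.5, 0.9])  # hypotheses disagree
gain_control = expected_info_gain(prior, [0.5, 0.5])  # hypotheses agree

# Flipping the suspect coin is informative; the control flip tells
# us nothing, since both hypotheses predict it identically.
print(gain_suspect > gain_control)  # → True
```

A scientist (or an active-learning algorithm) following this rule never runs the control experiment here, which is exactly Matt's point that learners *could* select for information gain but often don't.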
that could have an
awful lot of power if it's acceptable . . . . .
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 5:59 PM
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI
--- On Sat, 10/25/08
On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser [EMAIL PROTECTED] wrote:
Anyone else want to take up the issue of whether there is a distinction
between competent scientific research and competent learning (whether or not
both are being done by a machine) and, if so, what that distinction is?
--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:
Ummm. It seems like you were/are saying then that because
AIXI makes an
assumption limiting its own applicability/proof (that
it requires that the
environment be computable) and because AIXI can make some
valid conclusions,
AIXI shows a couple interesting things...
-- truly general AI, even assuming the universe is computable, is impossible
for any finite system
-- given any finite level L of general intelligence that one desires, there
are some finite R, M so that you can create a computer with less than R
: Saturday, October 25, 2008 7:21 PM
Subject: AIXI (was Re: [agi] If your AGI can't learn to play chess it is no
AGI)
--- On Sat, 10/25/08, Mark Waser [EMAIL PROTECTED] wrote:
Ummm. It seems like you were/are saying then that because
AIXI makes an
assumption limiting its own applicability/proof
scientific method is that the same causes
produce the same results. Comments?
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Saturday, October 25, 2008 7:48 PM
Subject: **SPAM** Re: AIXI (was Re: [agi] If your AGI can't learn to play
chess it is no AGI)
On Fri, Oct 24, 2008 at 12:14 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
Well as a somewhat good chess instructor myself, I have to say I
completely agree with it. People who play well against computers
rarely rank above first-time players... in fact, most of them tend to
not even know the
Matthias: AGI must be able to discover regularities of all kinds in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.
General Intelligence is the ability to cross over from one domain into
*another* - to a) independently learn new, additional domains and b) to make
P.S. The classical psychological term for this repeated substitution of
relatively easy, narrow AI discussions for what should be hard AGI discussions
is displacement behaviour.
http://www.animalbehavioronline.com/displacementbehavior.html
http://en.wikipedia.org/wiki/Displacement_(psychology)
for determining limits but horrible for drawing other types of
conclusions about GI.
/rant
- Original Message -
From: Ben Goertzel
To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 5:02 AM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI
On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger
[EMAIL PROTECTED] wrote:
No Mike. AGI must be able to discover regularities of all kinds in all
domains.
If you can
To: agi@v2.listbox.com
Sent: Friday, October 24, 2008 10:49 AM
Subject: **SPAM** Re: [agi] If your AGI can't learn to play chess it is no AGI
The value of AIXI is not that it tells us how to solve AGI. The value
is that it tells us intelligence is not computable.
-- Matt
--- On Fri, 10/24/08, Mark Waser [EMAIL PROTECTED] wrote:
The value of AIXI is not that it tells us how to solve AGI.
The value is that it tells us intelligence is not computable
Define "not computable". Too many people are
incorrectly interpreting it to mean not implementable on a
computer.
Matthias: No Mike. AGI must be able to discover regularities of all kinds in
all
domains.
If you can find a single domain where your AGI fails, it is no AGI.
Matthias,
Well, it's v. easy to say no. Can you back it up with a single example? A
single analogy, metaphor, creative
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
If you consider programming an AI social activity, you very
unnaturally generalized this term, confusing other people. Chess
programs do learn
On Fri, Oct 24, 2008 at 8:41 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yes ... at the moment the styles of human and computer chess players are
different enough that doing well against computer players does not imply
doing nearly equally well against human players ... though it certainly
helps
On Fri, Oct 24, 2008 at 8:48 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I suspect that's a half-truth...
*Subject:* Re: [agi] If your AGI can't learn to play chess it is no AGI
On Thu, Oct 23, 2008 at 5:38 PM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
I am sure that everyone who learns chess
Within the domain of chess there is everything to know about chess.
So, to become a good chess player, learning chess from playing
chess must be sufficient. Thus, an AGI which is not able to enhance its
abilities in chess from playing chess alone is no AGI.
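For what it's worth, the weaker form of Matthias's claim is easy to demonstrate in miniature for games far smaller than chess: a tabular learner really can improve purely from self-play, with no teacher and no outside data. The sketch below uses a toy take-away game as a stand-in for chess; the constants and setup are my own assumptions, not anyone's actual system:

```python
# Self-play learning on a toy game: players alternately remove 1 or 2
# stones from a pile of 15; whoever takes the last stone wins.
# Optimal play is to always leave the opponent a multiple of 3.
# A single Q-table plays both sides and learns from final outcomes only.

import random

random.seed(0)
Q = {(s, a): 0.0 for s in range(1, 16) for a in (1, 2)}
ALPHA, EPS = 0.1, 0.2

def choose(s, greedy=False):
    acts = [a for a in (1, 2) if a <= s]
    if not greedy and random.random() < EPS:
        return random.choice(acts)        # explore
    return max(acts, key=lambda a: Q[(s, a)])

for _ in range(20000):
    s, history = 15, []                   # (state, action) per move
    while s > 0:
        a = choose(s)
        history.append((s, a))
        s -= a
    # The last mover took the final stone and won; propagate +1 to the
    # winner's moves and -1 to the loser's, alternating back in time.
    r = 1.0
    for (st, ac) in reversed(history):
        Q[(st, ac)] += ALPHA * (r - Q[(st, ac)])
        r = -r

# After training, from 4 stones the greedy policy takes 1 (leaving the
# opponent 3, a lost position); from 5 it takes 2.
print(choose(4, greedy=True), choose(5, greedy=True))
```

Whether this kind of self-play scales from a 15-state game to chess, let alone whether it bears on AGI, is of course exactly what the thread is arguing about.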
I'm jumping into this
On Fri, Oct 24, 2008 at 10:38 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I think humans represent chess by a huge number of *visual* patterns.
http://www.eyeway.org/inform/sp-chess.htm
Trent
On Fri, Oct 24, 2008 at 1:04 PM, Mike Tintner [EMAIL PROTECTED] wrote:
We've been over this one several times in the past (perhaps you haven't been
here). Blind people can see - they can draw the shapes of objects. They
create their visual shapes out of touch. Touch comes prior to vision in
Matthias,
You've presented a straw man argument to criticize embodiment; as a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
On Wed, Oct 22, 2008 at 3:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
It seems to me that many people think that embodiment is very important for
AGI.
I'm not one of these people, but I at least learn what their
arguments are. You seem to have made up an argument which you've then
knocked
On Wed, Oct 22, 2008 at 6:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
I see no argument in your text against my main argumentation, that an AGI
should be able to learn chess from playing chess alone. This I call straw
man replies.
No-one can learn chess from playing chess alone.
Chess
On Wed, Oct 22, 2008 at 2:10 PM, Trent Waddington
[EMAIL PROTECTED] wrote:
No-one can learn chess from playing chess alone.
Chess is necessarily a social activity.
As such, your suggestion isn't even sensible, let alone reasonable.
Current AIs learn chess without engaging in social