Wow! Reading this, I must
say that I was struck by the Geddes-like proportion of *claims* to *reasonable
proofs* (or even true discussions where you've deigned to share even the
beginnings of a proof).
Claims like "AGI understanding
will always run ahead of FAI understanding" are
AWESOME post . . . .
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Wednesday, June 07, 2006 2:20 AM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
On 6/7/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I think that we as a community need to get off our butts and start
building consensus as to what even the barest framework of friendliness
is. I think that we've seen more than enough proof that no one here can
go on for more than twenty lines without numerous people objecting
vociferously to
The point is that you're unlikely to murder the human race, if given the
chance, and neither is Kass. In fact, if given the chance, you will
protect them.
But what about all of those lovely fundamentalist Christians or Muslims
who see no problem with killing infidels (see Crusades, Jihad, ...)?
Mark
- Original Message -
From: Peter de Blanc [EMAIL PROTECTED]
To: agi@v2.listbox.com
Cc: [EMAIL PROTECTED]
Sent: Wednesday, June 07, 2006 1:18 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
On Wed, 2006-06-07 at 11:32 -0400, Mark Waser wrote:
What was your operational definition of friendliness, again?
My personal operational definition of friendliness is simply what my current
self would be willing to see implemented as the highest level goal of an
AGI.
Obviously, that includes being robust enough that it doesn't evolve into ...
... society must enforce vaccination for the good of all.
Mark
- Original Message -
From: "Charles D Hixson" [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, June 07, 2006 5:26 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
Mark Waser wrote:
Sent: Wednesday, June 07, 2006 4:51 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
On Wed, 2006-06-07 at 16:13 -0400, Mark Waser wrote:
I'm pretty sure that I've got the science and math that I need and, as I ...
Okay. I supposed the opposite, not because of anything y...
From: "Eugen Leitl" [EMAIL PROTECTED]
Sent: Thursday, June 08, 2006 12:59 PM
Let us hear these axioms, please. I don't think a rehash of Asimov's 4 laws
is going to cut the mustard, though.
I thought you'd never ask . . . (and no, the disproofs of Asimov's laws are handled quite well
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Friday, June 09, 2006 4:13 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
Hmm, now what again is your goal? I am confused.
To maximally increase Volition
actualization/wish fulfillment (Axiom 1).
- Original Message - From: "Jef
Allbright" [EMAIL PROTECTED]Sent:
Thursday, June 08, 2006 10:04 PMSubject: Re: [agi] Four axioms
It seems to me it would be better to say that there is no absolute or
objective good-bad because evaluation of goodness is necessarily
relative to the
- Original Message -
From: "Charles D Hixson" [EMAIL PROTECTED]
Sent: Thursday, June 08, 2006 7:26 PM
Subject: Re: [agi] Four axioms (Was Two draft papers: AI and existential risk; heuristics and biases)
I think that Axiom 2 needs a bit of work.
Agreed.
As I read it, it ...
James: So are you separating 'undesirable, horrible, or immoral'
from the term of friendliness?
I am removing the requirement
from friendliness that it match everybody's opinion on
"undesirable, horrible, or immoral" since that is clearly an impossible
undertaking. However, friendly is
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Monday, June 12, 2006 3:52 PM
Subject: Re: [agi] Re: Four axioms (WAS Two draft papers . . . .)
You mentioned in a couple of responses the volition of the masses as your
overall formula; I am putting a couple of thoughts together here, and
My big issue is that the system depends on laborious experimentation to
find stable configurations of local parameters that will get all these
processes to happen at once.
The problem is doing that whilst simultaneously getting the same mechanisms
to handle 30 or 40 other cognitive processes.
Hi,
The problem is that we are using the word "interfere" differently. Most
of what you are calling interference, I would call interaction and claim as
being absolutely necessary. I understand that using the word "interfere" in
this way is logical if you think of wave interference, but its purpose is to
optimize the second process (through experimentation, etc.). But this is a
totally separate case from what I am arguing.
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, June 14, 2006 11:41 PM
Subject: Re: [agi] How ...
... Not having trouble with parameters! WAS [Re: How the Brain Represents Abstract Knowledge]
Mark Waser wrote:
From: Ben Goertzel [EMAIL PROTECTED]
Sent: Thursday, June 15, 2006 12:17 AM
Subject: Re: [agi] How the Brain Represents Abstract Knowledge
You seem to be confusing Novamente with Richard
Novamente is modular software-wise, but very far from modular
cognition-wise
This makes sense, particularly in light of your further explanation about
the effects of replacing the PLN module with AnotherPI module, but I would
think that it should be solvable by being thorough about tagging
Has anyone read Visions Of Mind: Architectures For Cognition And Affect by
Darryl Davis and is willing to comment on it?
I am new to AI.
If you're new to AI, then you should be reading the general literature
and getting yourself up to speed, not asking particular general questions
whose answers can't be understood unless you have the necessary base of
knowledge. The reason why virtually no one is
My personal guesstimate is that what are commonly considered the higher
order cognitive functions use way less than 1% of the total power estimated
for the brain (and also, that the brain does them very inefficiently, so a
better implementation would use even less power).
On the other
If somebody out there has some strong reason why the above is
misguided, I'd be interested in hearing it.
VERY few Xeon transistors are used per clock tick. Many, many, MANY more
brain synapses are firing at a time.
- Original Message -
From: Eric Baum [EMAIL PROTECTED]
To:
On a related subject, I argued in What is Thought? that the hard
problem was not processor speed for running the AI, but coding the
Trust me, the speed is. Your biggest problem is memory bandwidth,
actually.
I agree. As I said a couple of days ago, AGI is going to require a massive
amount
How many Xeon transistors per clock tick? Any idea?
I recall estimating .001 of neurons were firing at any given time
(although I no longer recall how I reached that rough guesstimate.)
And remember, the Xeon has a big speed factor.
The Xeon speed factor is just less than 1E7.
Using your
I think that a compression of Wikipedia that decompressed to a non-identical
version carrying the same semantic information (lossless with respect to the
knowledge, though not to the exact bit string) would require less than a
superhuman AGI and that, instead, it's probably on the critical path *to* a
superhuman AGI.
And by the way, nice snarky tone ...
... to something that *really* measures something like intelligence?
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, August 13, 2006 3:11 PM
Subject: Re: [agi] Marcus Hutter's lossless compression of human knowledge prize
Hi all,
I think that a few important points have been lost or misconstrued in
most of this discussion.
First off, there is a HUGE difference between the compression of
knowledge and the compression of strings. The strings "Ben is human.", "Ben
is a member of the species homo sapiens.", "Ben is ...
... "the United States is a place" occurs nowhere in enwik8.
So how can we learn this? From phrases like "from the United States" (which
occurs 55 times), "to the United States" (265 times), "in the United States"
(1359 times), etc.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
to create this knowledge out of nowhere?
- Original Message -
From: Sampo Etelavuori
To: agi@v2.listbox.com
Sent: Monday, August 14, 2006 7:12 PM
Subject: **SPAM** Re: Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize
On 8/14/06, Mark Waser ...
Hi Ben,
I agree with everything that you're saying; however, looking at the
specific task:
Create a compressed version (self-extracting archive) of the 100MB file
enwik8 of less than 18MB. More precisely:
a. Create a Linux or Windows executable archive8.exe of size S < L :=
18'324'887 ...
I
don't see any point in this debate over lossless vs. lossy
compression
Let's see if I can simplify it.
The stated goal is compressing human knowledge.
The exact same knowledge can always be expressed
in a *VERY* large number of different bit strings
Not being able to reproduce
the knowledge it learned via this experience in its
long-term memory...
-- Ben
On 8/15/06, Mark Waser [EMAIL PROTECTED] wrote:
Hi Ben,
I agree with everything that you're saying; however, looking at the
specific task:
Create a compressed version (self-extracting archive) of the 100MB file
en
them. In general, the output distribution will be different from the
true distribution p(s1), p(s2), so it will be distinguishable from human even
if the compression ratio is ideal.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, August 15, 2006 3:16 PM
Subject: **SPAM** Re: Goertzel/Sampo: [agi] Marcus Hutter's lossless
compression of human knowledge prize
On 8/15/06, Mark Waser [EMAIL PROTECTED] wrote:
Ben: Conceptually, a better (though still deeply flawed) contest would
**SPAM** Re: Goetz/Goertzel/Sampo: [agi] Marcus Hutter's lossless
compression of human knowledge prize
On 8/15/06, Mark Waser [EMAIL PROTECTED] wrote:
Actually, instructing the competitors to compress both the OpenCyc corpus
AND then the Wikipedia sample in sequence and measuring the size of both
cold").assuming your compressor uses a good language
model.Now if only we had some test to tell which compressors have the
best language models...
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesda...
"Yellow" is the state of reflecting light which is
between two specific frequencies.
"Hot" is the state of having a temperature above some
set value.
It takes examples to recognize/understand when your
sensory apparatus is reporting one of these states but this is a calibration
issue, not a
... to answer natural
language queries, while our smart parsers choke when you misspell a
word. Who is smart and who is dumb?
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 16, 2006 9:17:52 ...
Now try that on my daughter or any other 3.5 year old. It doesn't work. :}
Try what? Your daughter has calibrated her vision and stuck labels on the
gauge. What has she learned? That this range reported by *her* personal
vision system is labeled "yellow".
Now, you want to do this without any
However, a machine with a lossless model will still outperform one with a
lossy model because the lossless model has more knowledge.
PKZip has a lossless model. Are you claiming that it has more knowledge?
More data/information *might* be arguable but certainly not knowledge -- and
PKZip
with judging).
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 25, 2006 7:54 PM
Subject: Re: [agi] Lossy ** lossless compression
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday
away easily.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, August 26, 2006 4:51:07 PM
Subject: Re: [agi] Lossy ** lossless compression
Mark suggested putting Wikipedia
However, I think that a lossless model can
reasonably derive this information by observing that p(x, x') is approximately
equal to p(x) or p(x'). In other words, knowing both x and x' does not
tell you any more than x or x' alone, or CDM(x, x') ~ 0.5. I think this is
a reasonable way to
- Original Message -
From: Sampo Etelavuori [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 28, 2006 8:56 AM
Subject: **SPAM** Re: [agi] Lossy ** lossless compression
On 8/28/06, Mark Waser [EMAIL PROTECTED] wrote:
How does a lossless model observe that "Jim is extremely fat" and "James
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, August 28, 2006 3:37 PM
Subject: Re: [agi] Lossy ** lossless compression
On 8/28/06, Mark Waser wrote:
How does a lossless model observe that "Jim is extremely fat" and "James
continues to be morbidly obese" are approximately equal
I would like to hear from others with this same point
of view, and otherwise from anyone who has an idea that
an open source AGI could be somehow made safe.
While I also don't believe that you can protect your open source AGI
from "what if [insert favorite bad guys] use it for nefarious
I just ran across the following
references in Neuro-Evolution (including evolving topologies in neural networks)
and figured that they might be interesting to others on this list:
http://nn.cs.utexas.edu/project-view.php?RECORD_KEY(Projects)=ProjID&ProjID(Projects)=14
My position statement is: If in a sense a laptop computer is a Turing
machine, then in the same sense a robot is also a Turing machine.
I think that most people are missing the point here . . . . Analog can
always be converted to digital at a specified granularity, simultaneous can
always be
It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal Gas:
can't predict the position and momentum of all its particles, but you sure
can predict such overall characteristics as temperature, pressure and
volume.
Although I understand, in vague terms, what idea Richard is
attempting to express, I don't see why having "massive numbers of weak
constraints" or "large numbers of connections from [the] motivational
system to [the] thinking system" gives any more reason to believe it is
reliably Friendly
So, how to get all this probabilistic commonsense knowledge (which in
humans is mostly unconscious) into the AGI system?
a-- embodied learning
b-- exhaustive education through NLP dialogue in very simple English
c-- exhaustive education through dialogue in some artificial language
like Lojban++
Models
that are simple enough to debug are too simple to
scale.
The contents of a knowledge base for AGI will be beyond our
ability to comprehend.
Given sufficient time, anything
should be able to be understood and debugged. Size alone does not make
something incomprehensible and I
Mark Waser wrote:
Given sufficient time, anything should be able to be understood and
debugged.
Give me *one* counter-example to the above . . . .
Matt Mahoney replied:
Google. You cannot predict the results of a search. It does not help
that you have full access to the Internet
this.
It doesn't matter if you agree with the number 10^9 or not. Whatever the
number, either the AGI stores less information than the brain, in which case
it is not AGI, or it stores more, in which case you can't know everything it
does.
Mark Waser wrote:
I certainly don't buy the mystical approach
in the driver's brain.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser wrote:
Given sufficient time, anything
The connection between intelligence and compression is not obvious.
The connection between intelligence and compression *is* obvious -- but
compression, particularly lossless compression, is clearly *NOT*
intelligence.
Intelligence compresses knowledge to ever simpler rules because that is
anthropomorphise the agent, then we say that we are replacing the input with
perceptually indistinguishable data, which is what we typically do when we
compress video or sound.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2
However, it has not yet been as convincingly disproven as the Cyc-type
approach of feeding an AI commonsense knowledge encoded in a formal
language ;-)
Actually, I would describe the Cyc-type approach as feeding an AI common-sense
data which then begs all sorts of questions . . . .
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 11:52 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser [EMAIL PROTECTED]
wrote:
So *prove* to me why information theory forbids transparency of a knowledge
base.
Isn't
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 1:02 PM
Subject: Re: [agi] One grammar parser URL
What's your definition of the difference between data and knowledge, then?
Cyc uses a formal language based in logic to describe things.
James
Mark Waser [EMAIL PROTECTED] wrote:
However, it has not yet
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser [EMAIL PROTECTED] wrote:
Give me a counter-example of knowledge that can't be isolated.
Q. Why did you turn left here
of reasoning, you will exhaust the memory
in your brain before you finish.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
these inconsistencies and correcting them with good efficiency.
James Ratcliff
Mark Waser [EMAIL PROTECTED] wrote:
I don't believe it is true that better compression implies higher
intelligence (by these definitions) for every possible agent, environment,
universal Turing machine and pair
is something akin to "I don't understand it so it must
be good."
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:53 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Mark Waser [EMAIL
could understand how it
arrived at a particular solution, then you have failed to create an AI
smarter than yourself.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 1:25:33 PM
Subject: Re
overlooked several thousand examples is pretty
insulting).
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 4:17 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Mark Waser [EMAIL
you get really, really lucky in choosing your number of
nodes and your connections. Nature has clearly found a way around this
problem but we do not know this solution yet.)
Mark (going off to be plastered by replies to last night's message)
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, November 29, 2006 9:36 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/29/06, Mark Waser [EMAIL PROTECTED] wrote:
I defy you to show me *any* black-box method that has
Well, it really depends on what you mean by "too complex for a human
to understand". Do you mean
-- too complex for a single human expert to understand within 1 week of
effort
-- too complex for a team of human experts to understand within 1 year of
effort
-- fundamentally too complex for humans
] A question on the symbol-system hypothesis
On 11/30/06, Mark Waser [EMAIL PROTECTED] wrote:
With many SVD systems, however, the representation is more
vector-like
and *not* conducive to easy translation to human terms. I have two
answers
to these cases. Answer 1 is that it is still easy
Thank you for cross-posting this. Could you please give us more information
on your book?
I must also say that I appreciate the common-sense wisdom and repeated bon
mots that the "sky is falling" crowd seems to lack.
- Original Message -
From: J. Storrs Hall, PhD. [EMAIL PROTECTED]
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Philip Goetz snidely responded
Some people would call it repeating the same mistakes I already dealt
hypothesis
On 12/2/06, Mark Waser wrote:
My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.
Further, and more importantly, the pattern matcher *doesn't* understand
its
results either and certainly could build upon them
He's arguing with the phrase "It is programmed only through evolution."
If I'm wrong and he is not, I certainly am.
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, December 02, 2006 4:26 PM
Subject: Re: Motivational Systems of an AI [WAS
You cannot turn off hunger or pain. You cannot
control your emotions.
Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
at the mercy of your emotions?
Since the synaptic weights cannot be altered by
training (classical or operant conditioning)
Who says that
this.
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 9:17 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/2/06, Mark Waser [EMAIL PROTECTED] wrote:
A nice story but it proves absolutely nothing.
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
--- Mark Waser [EMAIL PROTECTED] wrote:
You cannot turn off
repeating (summarizing) what I have said
before.
If you want to tear down my argument line by line, please do it privately
because I don't think the rest of the list will be interested.
--- Mark Waser [EMAIL PROTECTED] wrote:
Matt,
Why don't you try addressing my points instead of simply
Ben,
I agree with the vast majority of what I believe that you mean but . . .
1) Just because a system is based on logic (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans. As I noted in recent posts,
probabilistic logic
Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
Sent: Monday, December 04, 2006 10:45 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Philip Goetz gave an example of an intrusion detection system that
learned
information that was not comprehensible to humans. You argued that he
Well, of course they can be explained by me -- but the acronym for
that sort of explanation is BS
I take your point with important caveats (that you allude to). Yes, nearly all
decisions are made as reflexes or pattern-matchings on what is effectively
compiled knowledge; however, it is the
You partition intelligence into
* explanatory, declarative reasoning
* reflexive pattern-matching (simplistic and statistical)
Whereas I think that most of what happens in cognition fits into
neither of these categories.
I think that most unconscious thinking is far more complex than
reflexive
?
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 1:38 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
On 12/4/06, Mark Waser [EMAIL PROTECTED] wrote:
Why must you argue
it means much less what its
implications are . . . .
- Original Message -
From: Philip Goetz [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 2:03 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/3/06, Mark Waser [EMAIL PROTECTED] wrote
To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.
The first sentence of the proposition was
From: William Pearson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, December 04, 2006 5:51 PM
Subject: [agi] Addiction was Re: Motivational Systems of an AI
On 04/12/06, Mark Waser [EMAIL PROTECTED] wrote:
Why must you argue with everything I say? Is this not a sensible
statement?
I don't
PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, December 05, 2006 7:03 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06, Mark Waser wrote:
Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind. The reflexive
is called The Emotion Machine; it
argues that, contrary to popular conception, emotions aren't distinct from
rational thought; rather, they are simply another way of thinking, one that
computers could perform.
- Original Message -
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, December 05, 2006 11:17 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
BillK [EMAIL PROTECTED] wrote:
On 12/4/06, Mark Waser wrote:
Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser [EMAIL PROTECTED] wrote:
Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).
Sure. Absolutely. I'm perfectly willing to contend
If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning? The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.
wheels. If the wheel doesn't fit perfectly, you can build an adapter for it.
Bottom line ... Pei is correct. There will not be a consensus on what
the most suitable language is for AI.
Regards,
~Aki
On 18-Feb-07, at 11:39 AM, Mark Waser wrote:
What is the best language for AI begs the question -- For which aspect
of AI?
And also
One reason for picking a language more powerful than the run-of-the-mill
imperative ones (of which virtually all the ones mentioned so far are just
different flavors) is that they can give you access to different paradigms
that will enhance your view of how an AGI should work internally.
Very
My real point is that you don't really need a new dev env for this.
Richard is talking about some *substantial* architecture here -- not
just a development environment but a *lot* of core library routines (as you
later speculate) and functionality that is either currently spread across
on it there.
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 3:31 PM
Subject: **SPAM** Re: [agi] Development Environments for AI (a few
non-religious comments!)
On 2/20/07, Mark Waser [EMAIL PROTECTED] wrote:
Realistically, you'll have