It probably wouldn't do very well at such a test...no better than Delphi, say.
OTOH, how well do human experts in the field do on such tests?
The question might be: how would it be developed from a sophisticated
Delphi+filter system? The answer to that isn't obvious, and probably isn't singular.
On Sunday 15 January 2006 04:41 am, [EMAIL PROTECTED] wrote:
Searching is a part of AI... But it is not deep logic like Chess...
Is IBM Deep Blue just a look-up machine, or really perceiving and
reasoning logically, with an output of action... the next move?
Deep Blue, the Chess Expert, was purely an
A model based approach is necessary, but not sufficient. Equally important
will be using parallax to divide the visual field into objects which move
together. This can be done with one camera, by oscillating its position,
though two cameras add significantly. Three allow for better
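A minimal sketch of the triangulation behind that, assuming a simple
pinhole-camera model; the baseline, focal length, and disparities below
are made-up illustrative numbers:

# Depth from parallax under a pinhole model: features whose disparities
# agree as the viewpoint shifts can be grouped as one object.
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    # Standard triangulation: depth = baseline * focal length / disparity.
    return baseline_m * focal_px / disparity_px

disparities = [12.0, 11.8, 4.1]  # pixels, for three tracked features
depths = [depth_from_disparity(0.1, 800.0, d) for d in disparities]
print(depths)  # the first two features cluster: likely the same object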
Ben Goertzel wrote:
Hmmm
The inimitable Mentifex wrote:
http://www.blogcharm.com/Singularity/25603/Timetable.html
2006 -- True AI
2007 -- AI Landrush
2009 -- Human-Level AI
2011 -- Cybernetic Economy
2012 -- Superintelligent AI
2012 -- Joint Stewardship of Earth
2012 --
John Scanlon wrote:
Is anyone interested in discussing the use of formal logic as the
foundation for knowledge representation schemes for AI? It's a common
approach, but I think it's the wrong path. Even if you add
probability or fuzzy logic, it's still insufficient for true intelligence.
Mike Dougherty wrote:
On 6/2/06, Charles D Hixson [EMAIL PROTECTED] wrote:
Rule of thumb: First get it working, doing what you want. Then
optimize. When optimizing, first check your algorithms, then
check to
see where time is actually spent
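A minimal illustration of that ordering in Python, using the standard
library profiler (the workload is a made-up stand-in):

import cProfile
import pstats

def slow_sum(n):
    # First version: plainly written, correct, unoptimized.
    return sum(i * i for i in range(n))

def main():
    return slow_sum(1_000_000)

# Check where the time is actually spent before changing anything.
cProfile.run("main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)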
Mark Waser wrote:
What was your operational definition of friendliness, again?
My personal operational definition of friendliness is simply what my
current self would be willing to see implemented as the highest level
goal of an AGI.
Obviously, that includes being robust enough that it
Mark Waser wrote:
..
The first thing that is necessary is to define your goals. It is
my contention that there is no good and no bad (or evil) except in the
context of a goal and that those who believe that there is some
absolute morality out there have been fooled by the unconscious
[EMAIL PROTECTED] wrote:
If your AI was operating on the web it might find itself at a severe
disadvantage with all of those con artists...
Your AI might lose badly...
Friendly does not equal trusting. It does not equal stupid. It does
not equal not being willing to learn from the
Try calculating instead the incoming bits/second stored...now calculate
the required storage space.
When you do that the computer starts looking much less
competitive...today. Calculate the space required to store, without
definitions or attached meanings, all the words in the English
language.
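A back-of-envelope version of that calculation; the word count, average
word length, and bit rate are rough assumptions, not measurements:

# Bare English word list: assume ~500,000 entries averaging 8 characters.
wordlist_bytes = 500_000 * 8
print(f"word list alone: ~{wordlist_bytes / 1e6:.0f} MB")  # ~4 MB

# Incoming stream stored raw: assume a modest 1 Mbit/s, kept continuously.
seconds_per_year = 60 * 60 * 24 * 365
yearly_bytes = 1_000_000 * seconds_per_year / 8
print(f"one year at 1 Mbit/s: ~{yearly_bytes / 1e12:.1f} TB")  # ~3.9 TB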
Yan King Yin wrote:
...
2. If you think your method is better, the mechanism underlying your
rule might be more complex than predicate logic. That's kind of strange.
YKY
Not strange at all. The brain had a long evolutionary history before
language was ever created. Languages are attempts
Eric Baum wrote:
Eric Baum wrote:
even if there would be some way to keep modifying the top level to
make it better, one could presumably achieve just as powerful an
ultimate intelligence by keeping it fixed and adding more powerful
lower levels (or maybe better yet, middle levels) or more
Eric Baum wrote:
My apologies for delay in responding. I was busy...
but I think there is a lot of confusion on the list about NP-hardness
still so here goes another attempt. I'm taking portion from a
different thread and changing subject, Eliezer when I get time
I'll try to respond a bit more
Charles D Hixson wrote:
...I think the mistake here is presuming that intelligence is some
particular set of tools that can solve everything. It is my belief
that OTOH intelligence is a framework into which can be slotted a
(perhaps) almost infinite set of tools. Most of them will be special
Yan King Yin wrote:
...
To avoid confusion we can fix it that the probability/NTV associated
with a sentence is always interpreted as the (subjective) probability
of that sentence being true.
So p("all ravens are black") will become 0 whenever a single
nonblack raven is found.
If, from
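The interpretation can be shown in a few lines, treating the NTV as a
plain subjective probability of the sentence being true (the prior is a
made-up number):

def p_all_ravens_black(observed_colours, prior=0.9):
    # Under this reading, one counterexample falsifies the universal claim.
    if any(colour != "black" for colour in observed_colours):
        return 0.0
    return prior  # otherwise the evidence leaves the credence standing

print(p_all_ravens_black(["black", "black"]))           # 0.9
print(p_all_ravens_black(["black", "white", "black"]))  # 0.0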
Mark Waser wrote:
Hi all,
I think that a few important points have been lost or misconstrued
in most of this discussion.
First off, there is a HUGE difference between the compression of
knowledge and the compression of strings. The strings "Ben is
human.", "Ben is a member of the
Matt Mahoney wrote:
Mark, I didn't get your attachment, the program that tells me if an
arbitrary text string is in canonical form or not. Actually, if it
will make it any easier, I really only need to know if a string is a
canonical representation of Wikipedia.
Oh, wait... there can only
Stephen Reed wrote:
I would appreciate comments regarding additional
constraints, if any, that should be applied to a
traditional open source license to achieve a free but
safe widespread distribution of software that may lead
to AGI.
...
My personal opinion is that the best license is the
Philip Goetz wrote:
On 8/28/06, Stephen Reed [EMAIL PROTECTED] wrote:
An assumption that some may challenge is that AGI
s...
source license retain these benefits yet be safe?
I would rather see a license which made the software free
for non-commercial use, but (unlike the GNU licenses)
Stephen Reed wrote:
...
Rather than cash payments I have in mind a scheme
similar to the pre-world wide web bulletin board
system in which FTP sites had upload and download
ratios. If you wished to benefit from the site by
downloading, you had to maintain a certain level of
contributions via
Philip Goetz wrote:
On 8/30/06, Charles D Hixson [EMAIL PROTECTED] wrote:
... some snipping ...
- Phil
The idea with the GPL is that if you want to also sell the program
commercially, you should additionally make it available under an
alternate license. Some companies have been successful
Philip Goetz wrote:
...
Those companies don't make money off the software. They sell products
and services. The GPL is not successful at enabling people to make
money directly off software. This is critical, because it takes a
large company and a large capital investment to make money selling
Joshua Fox wrote:
I'd like to raise a FAQ: Why is so little AGI research and development
being done?
...
Thanks,
Joshua
What proportion of the work that is being done do you believe you are
aware of? On what basis?
My suspicion is that most people on the track of something new tend to
be
Pei Wang wrote:
We all know that, in a sense, every computer system (hardware plus
software) can be abstractly described as a Turing machine.
Can we say the same for every robot? Why?
Reference to previous publications are also welcome.
Pei
The controller for the robot might be a Turing
John Scanlon wrote:
Ben,
I did read your stuff on Lojban++, and it's the sort of language
I'm talking about. This kind of language lets the computer and the
user meet halfway. The computer can parse the language like any other
computer language, but the terms and constructions are
BillK wrote:
On 11/1/06, Charles D Hixson wrote:
So. Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI. But even for this purpose the language needs a verifier to
ensure that the correct forms
Richard Loosemore wrote:
...
This is a question directed at this whole thread, about simplifying
language to communicate with an AI system, so we can at least get
something working, and then go from there
This rationale is the very same rationale that drove researchers into
Blocks World
Ben Goertzel wrote:
...
On the other hand, the notions of intelligence and understanding
and so forth being bandied about on this list obviously ARE intended
to capture essential aspects of the commonsense notions that share the
same word with them.
...
Ben
Given that purpose, I propose the
a knowledge test. That's not what I mean.
Maybe we could extract simple facts from wiki, and start creating a
test there, then add in more complicated things.
James
Charles D Hixson [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
...
On the other hand, the notions of intelligence
learns what a normal state of being is, and detects deviations.
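One simple reading of that line is a statistical novelty detector; a
minimal sketch, with an arbitrary threshold and made-up readings:

import statistics

class NormalStateMonitor:
    def __init__(self, threshold=3.0, warmup=10):
        self.history = []
        self.threshold = threshold  # deviation, in standard deviations
        self.warmup = warmup

    def observe(self, value):
        if len(self.history) >= self.warmup:
            mean = statistics.mean(self.history)
            spread = statistics.stdev(self.history) or 1e-9
            if abs(value - mean) / spread > self.threshold:
                print(f"deviation from normal state: {value}")
        self.history.append(value)

monitor = NormalStateMonitor()
for reading in [20, 21, 19, 20, 22, 20, 21, 19, 20, 21, 95]:
    monitor.observe(reading)  # flags the 95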
On 21/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:
Bob Mottram wrote:
On 17/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:
Mark Waser wrote:
Hi Bill,
...
If storage and access are the concern, your own argument says that
a sufficiently enhanced human can understand anything and I am at a
loss as to why an above-average human with a computer and computer
skills can't be considered nearly indefinitely enhanced.
Mark Waser wrote:
...
For me, yes, all of those things are good since they are on my list of
goals *unless* the method of accomplishing them steps on a higher goal
OR a collection of goals with greater total weight OR violates one of
my limitations (restrictions).
...
If you put every good
James Ratcliff wrote:
There is a needed distinction that must be made here about hunger
as a goal stack motivator.
We CANNOT change the hunger sensation (short of physical
manipulations, or mind-control stuff), as it is a given sensation that
comes directly from the physical body.
What
you think we should leave it up to a single Controller to
interpret the signals coming from the body and form the goals.
In humans it looks to be the one way, but with AGI's it appears it
would/could be another.
James
Charles D Hixson [EMAIL PROTECTED] wrote:
J...
Goals
Ben Goertzel wrote:
...
According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:
BillK wrote:
...
Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best
BillK wrote:
On 12/5/06, Charles D Hixson wrote:
BillK wrote:
...
No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh
Philip Goetz wrote:
...
The disagreement here is a side-effect of postmodern thought.
Matt is using evolution as the opposite of devolution, whereas
Eric seems to be using it as meaning change, of any kind, via natural
selection.
We have difficulty because people with political agendas -
Joel Pitt wrote:
...
Some comments/suggestions:
* I think such a project should make the data public domain. Ignore
silly ideas like giving be shares in the knowledge or whatever. It
just complicates things. If the project is really strapped for cash
later, then either use ad revenue or look
YKY (Yan King Yin) wrote:
...
I think a project like this one requires substantial efforts, so
people would need to be paid to do some of the work (programming,
interface design, etc), especially if we want to build a high quality
knowledgebase. If we make it free then a likely outcome is
Benjamin Goertzel wrote:
And, importance levels need to be context-dependent, so that assigning
them requires sophisticated inference in itself...
The problem may not be so serious. Common sense reasoning may
require only
*shallow* inference chains, e.g. 5 applications of rules. So I'm
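If commonsense inference really only needs shallow chains, a forward
chainer with a hard depth bound suffices; a toy sketch with made-up rules:

RULES = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
    ({"slippery", "running"}, "fall_risk"),
]

def chain(facts, rules, max_depth=5):
    # Apply the rules at most max_depth times; stop early at a fixed point.
    facts = set(facts)
    for _ in range(max_depth):
        new = {head for body, head in rules
               if body <= facts and head not in facts}
        if not new:
            break
        facts |= new
    return facts

print(chain({"rain", "running"}, RULES))
# {'rain', 'running', 'wet_ground', 'slippery', 'fall_risk'}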
Benjamin Goertzel wrote:
Hi,
Possibly this could be approached by partitioning the rule-set into
small chunks of rules that work together, so that one didn't end up
trying everything against everything else. These chunks of rules
might well be context dependent, so that one would use
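One way to read that partitioning: index the rule chunks by context, so
matching never tries everything against everything else. The contexts
and rules below are illustrative:

RULE_CHUNKS = {
    "kitchen": [({"pot", "flame"}, "hot_pot")],
    "traffic": [({"red_light"}, "stop"),
                ({"green_light"}, "go")],
}

def applicable(context, facts):
    # Only the chunk for the current context is ever consulted.
    facts = set(facts)
    return [head for body, head in RULE_CHUNKS.get(context, [])
            if body <= facts]

print(applicable("traffic", {"red_light", "pot"}))  # ['stop']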
Philip Goetz wrote:
On 1/17/07, Charles D Hixson [EMAIL PROTECTED] wrote:
It's fine to talk about making the data public domain, but that's not
a good idea.
Why not?
Because public domain offers NO protection. If you want something
close to what public domain used to provide, then the MIT
gts wrote:
Hi Ben,
On Extropy-chat, you and I and others were discussing the foundations
of probability theory, in particular the philosophical controversy
surrounding the so-called Principle of Indifference. Probability
theory is of course relevant to AGI because of its bearing on decision
Richard Loosemore wrote:
...
[ASIDE. An example of this. The system is trying to answer the
question "Are all ravens black?", but it does not just look to its
collected data about ravens (partly represented by the vector of
numbers inside the raven concept, which are vaguely related to the
Chuck Esterbrook wrote:
On 2/18/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
Mark Waser wrote:
...
I find C++ overly complex while simultaneously lacking well known
productivity boosters including:
* garbage collection
* language level bounds checking
* contracts
* reflection /
Russell Wallace wrote:
On 3/9/07, Charles D Hixson [EMAIL PROTECTED] wrote:
Russell Wallace wrote:
To test whether a program understands a story, start by having it
generate an animated movie of the story
Russell Wallace wrote:
On 3/13/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
But the bottom line problem for using FOPC (or whatever) to
represent the
world is not that it's computationally incapable of it -- it's Turing
complete, after all -- but
Russell Wallace wrote:
On 3/18/07, Charles D Hixson [EMAIL PROTECTED] wrote:
Perhaps it would be best to have, say, four different formats for
different classes of problems (with the understanding that most
problems
are mixed). E.g., some classes
rooftop8000 wrote:
...
I think we should somehow allow people to use all the programming
languages they want.
That somehow is the big problem. Most approaches to dealing with it
are...lamentable.
...
You can use closed modules if you have meta-information on
how to use them and what they do.
Chuck Esterbrook wrote:
On 3/20/07, Charles D Hixson [EMAIL PROTECTED] wrote:
rooftop8000 wrote:
...
I think we should somehow allow people to use all the programming
languages they want.
That somehow is the big problem. Most approaches to dealing with it
are...lamentable.
...
You can use
Chuck Esterbrook wrote:
On 3/22/07, Charles D Hixson [EMAIL PROTECTED] wrote:
Unfortunately, MS is claiming undefined things as being proprietary. As
such, I intend to stay totally clear of implementations of its
protocols. Including mono. I am considering jvm, however, as Sun has
now freed
I think someone at UCLA did something similar for lobsters. This was
used as material for an SF story ("Lobsters", by Charles Stross).
Jan Mattsson wrote:
Has this approach been successful for any lesser animals? E.g., has anyone
simulated an insect brain system connected to a simulated
Stripping away a lot of your point here, I just want to point out how
many jokes are memorized fragments. A large part of what is going on
here is using a large database. I'm not disparaging your point about
pattern matching being necessary, but one normally pattern matches and
returns a
Mark Waser wrote:
What is meaning to a computer? Some people would say that no
machine can
know the meaning of text because only humans can understand language.
Nope. I am *NOT* willing to do the Searle thing. Machines will know
the meaning of text (i.e. understand it) when they have a
What would motivate you to put work into an AGI project?
1) A reasonable point of entry into the project
2) The project would need to be FOSS, or at least communally owned.
(FOSS for preference.) I've had a few bad experiences where the project
leader ended up taking everything, and don't
J. Storrs Hall, PhD. wrote:
On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
Mark Waser wrote:
... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
...
But note that in this case
Mark Waser wrote:
The problem of logical reasoning in natural language is a pattern
recognition
problem (like natural language recognition in general). For example:
- Frogs are green. Kermit is a frog. Therefore Kermit is green.
- Cities have tall buildings. New York is a city.
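The pattern-recognition reading of that syllogism can be made concrete:
match the two surface templates and emit the conclusion. The templates
below are deliberately naive:

import re

def syllogize(general, particular):
    # "Frogs are green." / "Kermit is a frog." -> "Kermit is green."
    m1 = re.match(r"(\w+)s are (\w+)\.", general)
    m2 = re.match(r"(\w+) is a (\w+)\.", particular)
    if m1 and m2 and m1.group(1).lower() == m2.group(2).lower():
        return f"Therefore {m2.group(1)} is {m1.group(2)}."
    return None

print(syllogize("Frogs are green.", "Kermit is a frog."))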
Eric Baum wrote:
Josh On Saturday 16 June 2007 07:20:27 pm Matt Mahoney wrote:
--- Bo Morgan [EMAIL PROTECTED] wrote:
...
...
I claim that it is the very fact that you are making decisions about
whether to suppress pain for higher goals is the reason you are
conscious of pain. Your
Matt Mahoney wrote:
--- J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
...
So you are arguing that RSI is a hard problem? That is my question.
Understanding software to the point where a program could make intelligent
changes to itself seems to require human level intelligence. But could
Edward W. Porter wrote:
So is the following understanding correct?
If you have two statements
Fred is a human
Fred is an animal
And assuming you know nothing more about any of the three
terms in both these
Derek Zahn wrote:
Richard Loosemore:
a...
I often see it assumed that the step between first AGI is built
(which I interpret as a functoning model showing some degree of
generally-intelligent behavior) and god-like powers dominating the
planet is a short one. Is that really likely?
Nobody
a wrote:
Linas Vepstas wrote:
...
The issue is that there's no safety net protecting against avalanches
of unbounded size. The other issue is that it's not grains of sand, it's
people. My bank-account and my brains can insulate me from small
shocks.
I'd like to have protection against the
to the related issues.
Pei
On 10/8/07, Charles D Hixson [EMAIL PROTECTED] wrote:
Pei Wang wrote:
Charles,
What you said is correct for most formal logics formulating binary
deduction, using model-theoretic semantics. However, Edward was
talking about the categorical logic of NARS, though he
Mike Tintner wrote:
Vladimir: In experience-based learning there are two main problems
relating to
knowledge acquisition: you have to come up with hypotheses and you
have to assess their plausibility. ...you create them based on various
heuristics.
How is this different from narrow AI? It
Mike Tintner wrote:
Charles H: as I understand it, this still wouldn't be an AGI, but merely a
categorizer.
That's my understanding too. There does seem to be a general problem
in the field of AGI, distinguishing AGI from narrow AI -
philosophically. In fact, I don't think I've seen any
Linas Vepstas wrote:
On Sun, Oct 07, 2007 at 12:36:10PM -0700, Charles D Hixson wrote:
Edward W. Porter wrote:
Fred is a human
Fred is an animal
You REALLY can't do good reasoning using formal logic in natural
language...at least
Mark Waser wrote:
Thus, as I understand it, one can view all inheritance statements as
indicating the evidence that one instance or category belongs to, and
thus is “a child of” another category, which includes, and thus can be
viewed as “a parent” of the other.
Yes, that is inheritance as Pei
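A simplified sketch of the evidential truth value behind that reading of
inheritance, after NARS (frequency f = w+/w, confidence c = w/(w+k));
the evidence counts are made up:

K = 1  # evidential horizon (the usual NARS default)

def truth_value(w_plus, w):
    # w_plus: positive evidence; w: total evidence for the inheritance.
    return w_plus / w, w / (w + K)

# "Fred is a human" backed by 9 positive pieces of evidence out of 10:
f, c = truth_value(9, 10)
print(f"frequency={f:.2f}, confidence={c:.2f}")  # 0.90, 0.91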
Mike Tintner wrote:
Charles,
I don't see - no doubt being too stupid - how what you are saying is
going to make a categorizer into more than that - into a system that
can, say, go on to learn various logics, or how to build a house or
other structures or tell a story - that can be a
Generally, yes, you know more.
In this particular instance we were told the example was all that was known.
Linas Vepstas wrote:
On Wed, Oct 10, 2007 at 01:06:35PM -0700, Charles D Hixson wrote:
For me the sticking point was that we were informed that we didn't know
anything about anything
Consider, however, the case of someone who was not only blind, but also
deaf and incapable of taste, smell, tactile, or goniometric perception.
I would be dubious about the claim that such a person understood
English. I might be dubious about any claim that such a person was
actually
But what you're reporting is the dredging up of a memory. What would be
the symbolism if, in response to "4", came the question "How do you know
that?" For me it's visual (and leads directly into the definition of +
as an amalgamation of two disjunct groupings).
Edward W. Porter wrote:
(second
They may be doing it with the tongue now. A few decades ago it was done
with an electrode mesh on the back. It worked, but the resolution was
pretty low. (IIRC, you don't need to be blind to learn to use this kind
of mapping device.)
Mike Tintner wrote:
All v. interesting. Fascinating
conscious thought, or am I (a) out of touch with my own
conscious processes, and/or (b) weird?
Edward W. Porter
Porter Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
-Original Message-
From: Charles D Hixson [mailto:[EMAIL PROTECTED
Grounding requires sensoria of some sort. Not necessarily vision.
Spatial grounding requires sensoria that connect spatially coherent signals.
Vision is one form of spatial grounding, but I believe that goniometric
sensation is even more important...though it definitely needs additional
a wrote:
Are you trying to make an intelligent program or want to launch a
singularity? I think you are trying to do the former, not the latter.
I think you do not have a plan and are thinking out loud. Chatting
in this list is equivalent to thinking out loud. Think it all out
first, before
Let me take issue with one point (most of the rest I'm uninformed about):
Relational databases aren't particularly compact. What they are is
generalizable...and even there...
The most general compact database is a directed graph. Unfortunately,
writing queries for retrieval requires domain
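A minimal sketch of such a directed-graph store: compact and general,
but every query is a hand-written traversal, which is where the domain
knowledge ends up. Edge labels are illustrative:

from collections import defaultdict

edges = defaultdict(list)  # node -> [(relation, node), ...]

def add(subj, rel, obj):
    edges[subj].append((rel, obj))

add("Fred", "isa", "human")
add("human", "isa", "animal")

def reachable(start, rel):
    # Follow edges with one label transitively; this IS the query.
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for r, nxt in edges[node]:
            if r == rel and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(reachable("Fred", "isa"))  # {'human', 'animal'}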
FWIW:
A few years (decades?) ago some researchers took PET scans of people who
were imagining a rectangle rotating (in 3-space, as I remember). They
naturally didn't get much detail, but what they got was consistent with
people applying a rotation algorithm within the visual cortex. This
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
I think you should read some stories from the 1930's by John W.
Campbell, Jr. Specifically the three stories collectively called "The
Story of the Machine". You can find them in
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
I think you should read some stories from the 1930's by John W.
Campbell, Jr. Specifically the three stories collectively
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
In parents, sure, those motives exist.
But in an AGI there is no earthly
Matt Mahoney wrote:
--- Linas Vepstas [EMAIL PROTECTED] wrote:
...
It still has a few bugs.
...
(S (NP I)
   (VP ate pizza
       (PP with
           (NP Bob)))
   .)
My name is Hannibal Lecter.
...
-- Matt Mahoney, [EMAIL PROTECTED]
(Hannibal Lecter was a movie cannibal)
Benjamin Goertzel wrote:
Hi,
***
Maybe listing all the projects that have NOT achieved AGI might give
us some
insight.
***
That information is available in numerous published histories, and is
well known to all professional researchers in the field.
...
-- Ben
YKY (Yan King Yin) wrote:
I have the intuition that Levin search may not be the most efficient
way to search programs, because it operates very differently from
human programming. I guess better ways to generate programs can be
achieved by imitating human programming -- using techniques
Bryan Bishop wrote:
On Saturday 10 November 2007 14:10, Charles D Hixson wrote:
Bryan Bishop wrote:
On Saturday 10 November 2007 13:40, Charles D Hixson wrote:
OTOH, to make a go of this would require several people willing to
dedicate a lot of time consistently over a long
Ed Porter wrote:
Richard,
Since hacking is a fairly big, organized crime supported, business in
eastern Europe and Russia, since the potential rewards for it relative to
most jobs in those countries can be huge, and since Russia has a tradition
of excellence in math and science, I would be very
Benjamin Goertzel wrote:
Nearly any AGI component can be used within a narrow AI,
That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?
Yes, but it's a largely irrelevant point. Because building a narrow-AI
system in an
I think you're making a mistake.
I *do* feel that lots of special purpose AIs are needed as components of
an AGI, but those components don't summate to an AGI. The AGI also
needs a specialized connection structure to regulate interfaces to the
various special purpose AIs (which probably don't
Well...
Have you ever tried to understand the code created by a decompiler?
Especially if the original language that was compiled isn't the one that
you are decompiling into...
I'm not certain that, just because we can look at the code of a working
AGI, we can therefore understand it.
Gary Miller wrote:
...
supercomputer might be v. powerful - for argument's sake, controlling
the internet or the world's power supplies. But it's still quite a
leap from that to a supercomputer being God. And yet it is clearly a
leap that a large number here have no problem making. So
John G. Rose wrote:
If you took an AGI, before it went singulatarinistic[sic?] and
tortured it…. a lot, ripping into it in every conceivable hellish way,
do you think at some point it would start praying somehow? I’m not
talking about a forced conversion medieval style, I’m just talking
Mark Waser wrote:
Then again, a completely rational AI may believe in Pascal's wager...
Pascal's wager starts with the false assumption that belief in a deity
has no cost.
Pascal's wager starts with a multitude of logical fallacies. So many
that only someone pre-conditioned to believe in the
I find Dawkins less offensive than most theologians. He commits many
fewer logical fallacies. His main one is premature certainty.
The evidence in favor of an external god of any traditional form is,
frankly, a bit worse than unimpressive. It's lots worse. This doesn't
mean that gods don't
John G. Rose wrote:
From: Charles D Hixson [mailto:[EMAIL PROTECTED]
The evidence in favor of an external god of any traditional form is,
frankly, a bit worse than unimpressive. It's lots worse. This doesn't
mean that gods don't exist, merely that they (probably) don't exist in
the hardware
Bruno Frandemiche wrote:
Psyclone™ AIOS (http://www.cmlabs.com/psyclone/) is a powerful platform
for building complex automation
and autonomous systems
I couldn't seem to find what license that was released under. (The
library was LGPL, which is very nice.)
But without knowing the license,
Richard Loosemore wrote:
Matt Mahoney wrote:
...
Matt,
...
As for your larger point, I continue to vehemently disagree with your
assertion that a singularity will end the human race.
As far as I can see, the most likely outcome of a singularity would be
exactly the opposite. Rather than
Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote:
J Storrs Hall, PhD wrote:
Any system builders here care to give a guess as to how long it
will be
before
a robot, with your system as its controller, can walk into the
Ben Goertzel wrote:
yet I still feel you dismiss the text-mining approach too glibly...
No, but text mining requires a language model that learns while mining. You
can't mine the text first.
Agreed ... and this gets into subtle points. Which aspects of the
language model
need to be
Richard Loosemore wrote:
Mike Tintner wrote:
Eh? Move your hand across the desk. You see that as a series of
snapshots? Move a noisy object across. You don't see a continuous
picture with a continuous soundtrack?
Let me give you an example of how impressive I think the brain's
powers here
Mark Waser wrote:
...
The motivation that is in the system is "I want to achieve *my* goals."
The goals that are in the system I deem to be entirely irrelevant
UNLESS they are deliberately and directly contrary to Friendliness. I
am contending that, unless the initial goals are deliberately