As of now, we are aware of no non-human friendlies, so the set of excluded
beings will in all likelihood be the empty set.
Eliezer's current vision of Friendliness puts AGIs (who are non-human
friendlies) in the role of excluded beings. That is why I keep hammering
this point.
To answer
Ben,
Can we boot alien off the list? I'm getting awfully tired of his
auto-reply emailing me directly *every* time I post. It is my contention
that this is UnFriendly behavior (wasting my resources without furthering
any true goal of his) and should not be accepted.
Mark
Ahah! :-) Upon reading Kaj's excellent reply, I spotted something that I
missed before that grated on Richard (and he even referred to it though I
didn't realize it at the time) . . . .
The Omohundro drives #3 and #4 need to be rephrased from
Drive 3: AIs will want to preserve their utility
The discussions seem to entirely ignore the role of socialization
in human and animal friendliness. We are a large collection of
autonomous agents that are well-matched in skills and abilities.
If we were unfriendly to one another, we might survive as a species,
but we would not live in cities
Pesky premature e-mail problem . . .
Drive 1: AIs will want to self-improve
This one seems fairly straightforward: indeed, for humans
self-improvement seems to be an essential part of achieving pretty
much *any* goal you are not immediately capable of achieving. If you
don't know how to do something needed to achieve your goal,
This is not the risk that concerns me. The real risk is that a single,
fully cooperating system has no evolutionary drive for self-improvement.
So we provide an artificial evolutionary drive for the components of society
via a simple economy . . . . as has been suggested numerous times by
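(A toy sketch of what such an economy might look like -- entirely my own
illustration, all names hypothetical, not anyone's actual design:
components bid for the right to act, pay their bid, and collect reward
from success, so demonstrably useful components accumulate wealth.)

import random

# Hypothetical sketch: a toy internal economy over competing components.
class Component:
    def __init__(self, name, skill):
        self.name, self.skill, self.wealth = name, skill, 100.0

    def bid(self):
        # Bid a fraction of wealth; richer (historically useful) parts bid more.
        return 0.1 * self.wealth

def run_economy(components, tasks=1000):
    for _ in range(tasks):
        winner = max(components, key=Component.bid)
        winner.wealth -= winner.bid()          # pay for the right to act
        if random.random() < winner.skill:     # succeed with prob = skill
            winner.wealth += 15.0              # external reward for success
    return sorted(components, key=lambda c: -c.wealth)

parts = [Component("good", 0.8), Component("mediocre", 0.5), Component("bad", 0.2)]
for c in run_economy(parts):
    print(c.name, round(c.wealth, 1))  # wealth tracks demonstrated usefulness

The point of the sketch is only that selection pressure can be supplied
by payment flows rather than by birth and death of whole systems.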
It *might* get stuck in bad territory, but can you make an argument why
there is a *significant* chance of that happening?
Not off the top of my head. I'm just playing it better safe than sorry
since, as far as I can tell, there *may* be a significant chance of it
happening.
Also, I'm not
I am in sympathy with some aspects of Mark's position, but I also see a
serious problem running through the whole debate: everyone is making
statements based on unstated assumptions about the motivations of AGI
systems.
Bummer. I thought that I had been clearer about my assumptions. Let me
For instance, a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be directly
oriented toward fulfilling this utility function
Some of the system's activity will be spontaneous ... i.e. only
implicitly goal-oriented ... and as such
First off -- yours was a really helpful post. Thank you!
I think that I need to add a word to my initial assumption . . . .
Assumption - The AGI will be an optimizing goal-seeking entity.
There are two main things.
One is that the statement "The AGI will be a goal-seeking entity" has many
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that a virtual environment can be *exactly* as capable as a
physical environment?
No, I think that you're asserting that the virtual environment is close
My second point that you omitted from this response doesn't need there
to be a universal substrate, which is what I mean. Ditto for significant
resources.
I didn't omit your second point, I covered it as part of the difference
between our views.
You believe that certain tasks/options are
Part 5. The nature of evil or The good, the bad, and the evil
Since we've got the (slightly revised :-) goal of a Friendly individual and the
Friendly society -- Don't act contrary to anyone's goals unless absolutely
necessary -- we now can evaluate actions as good or bad in relation to that
I think here we need to consider A. Maslow's hierarchy of needs. That an
AGI won't have the same needs as a human is, I suppose, obvious, but I
think it's still true that it will have a hierarchy (which isn't
strictly a hierarchy). I.e., it will have a large set of motives, and
which it is
I've just carefully reread Eliezer's CEV
http://www.singinst.org/upload/CEV.html, and I believe your basic idea
is realizable in Eliezer's envisioned system.
The CEV of humanity is only the initial dynamic, and is *intended* to be
replaced with something better.
I completely agree with
Sure! Friendliness is a state which promotes an entity's own goals;
therefore, any entity will generally voluntarily attempt to return to that
(Friendly) state since it is in its own self-interest to do so.
In my example it's also explicitly in the dominant structure's
self-interest to
My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by the next big thing that takes over.
My friendliness-thing is that I believe that a sufficiently intelligent
self-interested being who has discovered the f-thing or had the f-thing
explained to it
Why do you believe it likely that Eliezer's CEV of humanity would not
recognize your approach is better and replace CEV1 with your improved
CEV2, if it is actually better?
If it immediately found my approach, I would like to think that it would do
so (recognize that it is better and replace
OK. Sorry for the gap/delay between parts. I've been doing a substantial
rewrite of this section . . . .
Part 4.
Despite all of the debate about how to *cause* Friendly behavior, there's
actually very little debate about what Friendly behavior looks like. Human
beings actually have had the
1) If I physically destroy every other intelligent thing, what is
going to threaten me?
Given the size of the universe, how can you possibly destroy every other
intelligent thing (and be sure that no others ever successfully arise
without you crushing them too)?
Plus, it seems like an
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, March 07, 2008 6:38 PM
Subject: Re: [agi] Recap/Summary/Thesis Statement
--- Mark Waser [EMAIL PROTECTED] wrote:
Huh? Why can't an irreversible dynamic be part of an attractor
This raises another point for me though. In another post (2008-03-06
14:36) you said:
It would *NOT* be Friendly if I have a goal that I not be turned into
computronium even if your clause (which I hereby state that I do)
Yet, if I understand our recent exchange correctly, it is possible for
What is different in my theory is that it handles the case where the
dominant structure turns unfriendly. The core of my thesis is that the
particular Friendliness that I/we are trying to reach is an attractor --
which means that if the dominant structure starts to turn unfriendly, it
is
Attractor Theory of Friendliness
There exists a describable, reachable, stable attractor in state space that is
sufficiently Friendly to reduce the risks of AGI to acceptable levels
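(To make both points concrete -- a toy illustration only, not part of the
thesis: the map below is irreversible in the information sense, since
distinct starting states become indistinguishable, yet every trajectory
still falls into one stable attractor.)

# Toy dynamical system: x' = x + 0.5 * (A - x), a contraction toward A.
# It is irreversible (you cannot recover the starting state from a late
# state), yet A is a stable attractor: perturbed states return to it.
A = 1.0  # stand-in for the "Friendly" region, here a single fixed point

def step(x):
    return x + 0.5 * (A - x)

for x0 in (-3.0, 0.0, 4.0):          # very different initial states
    x = x0
    for _ in range(20):
        x = step(x)
    print(f"start {x0:+.1f} -> {x:.6f}")   # all end at ~1.0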
Whether humans conspire to weed out wild carrots impacts whether humans
are classified as Friendly (or, it would if the wild carrots were sentient).
Why does it matter what word we/they assign to this situation?
My vision of Friendliness places many more constraints on the behavior
How do you propose to make humans Friendly? I assume this would also have
the effect of ending war, crime, etc.
I don't have such a proposal but an obvious first step is
defining/describing Friendliness and why it might be a good idea for us.
Hopefully then, the attractor takes over.
Attractor Theory of Friendliness
There exists a describable, reachable, stable attractor in state space
that is sufficiently Friendly to reduce the risks of AGI to acceptable levels
Proof: something will happen resulting in zero or more intelligent agents.
Those agents will be Friendly to
Comments seem to be dying down and disagreement appears to be minimal, so let
me continue . . . .
Part 3.
Fundamentally, what I'm trying to do here is to describe an attractor that will
appeal to any goal-seeking entity (self-interest) and be beneficial to humanity
at the same time
How does an agent know if another agent is Friendly or not, especially if
the other agent is more intelligent?
An excellent question but I'm afraid that I don't believe that there is an
answer (but, fortunately, I don't believe that this has any effect on my
thesis).
Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well duh" land,
b) I'm so totally off the mark that I'm not even worth replying to, or c)
I'm hopefully being given enough rope to hang myself. :-)
Since I haven't seen any feedback, I think I'm going to divert to a section
that I'm not
because it IS equivalent to
enlightened self-interest -- but it only works where all entities involved
are Friendly.
PART 3 will answer part of What is Friendly behavior? by answering What is
in the set of horrible nasty thing[s]?.
- Original Message -
From: Mark Waser
To: agi@v2
Or should we not worry about the problem because the more intelligent
agent is more likely to win the fight? My concern is that evolution could favor
unfriendly behavior, just as it has with humans.
I don't believe that evolution favors unfriendly behavior. I believe that
evolution is
My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal
of killing the FAI. Should the FAI show empathy by helping the UFAI achieve
its goal?
Hopefully this concern was answered by my last post but . . . .
Being Friendly *certainly* doesn't mean fatally overriding your
Mark, how do you intend to handle the friendliness obligations of the AI
towards vastly different levels of intelligence (above the threshold, of
course)?
Ah. An excellent opportunity for continuation of my previous post rebutting
my personal conversion to computronium . . . .
First off,
I wonder if this is a substantive difference with Eliezer's position
though, since one might argue that 'humanity' means 'the [sufficiently
intelligent and sufficiently ...] thinking being' rather than 'homo
sapiens sapiens', and the former would of course include SAIs and
intelligent alien
Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you
immortal in a simulated world of your choosing, for example, one without
suffering, or where you had a magic genie or super powers or enhanced
I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)
Whether humans conspire to weed out wild carrots impacts whether humans are
classified as Friendly (or, it would if the
Would an acceptable response be to reprogram the goals of the UFAI to make
it friendly?
Yes -- but with the minimal possible changes to do so (and preferably done
by enforcing Friendliness and allowing the AI to decide what to change to
restore integrity with Friendliness -- i.e. don't mess
And more generally, how is this all to be quantified? Does your paper go
into the math?
All I'm trying to establish and get agreement on at this point are the
absolutes. There is no math at this point because it would be premature and
distracting.
but, a great question . . . . :-)
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly ?
The question only makes sense if you can define friendliness, which we
can't.
Why Matt, thank you for such a wonderful opening . . . . :-)
Friendliness *CAN* be defined. Furthermore, it is my contention that
1. How will the AI determine what is in the set of horrible nasty
thing[s] that would make *us* unhappy? I guess this is related to how you
will define the attractor precisely.
2. Preventing the extinction of the human race is pretty clear today, but
*human race* will become increasingly
But the question is whether the internal knowledge representation of the AGI
needs to allow ambiguities, or should we use an ambiguity-free
representation. It seems that the latter choice is better.
An excellent point. But what if the representation is natural language with
pointers to
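(The message is cut off before saying what the pointers point to, so
purely as an illustration of the general idea -- my assumption being
pointers into a sense inventory, with ambiguity kept as weighted
alternatives rather than resolved up front:)

from dataclasses import dataclass, field

# Hypothetical sketch: natural-language surface form kept as-is, with each
# word carrying weighted pointers to candidate senses instead of being
# forced into a single ambiguity-free symbol at parse time.
@dataclass
class Word:
    surface: str
    senses: dict = field(default_factory=dict)  # sense_id -> weight

sentence = [
    Word("bank", {"bank/financial": 0.7, "bank/river": 0.3}),
    Word("charges", {"charge/fee": 0.6, "charge/attack": 0.4}),
]
# Later inference can sharpen the weights in context rather than
# committing to one reading immediately.
for w in sentence:
    best = max(w.senses, key=w.senses.get)
    print(w.surface, "->", best)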
UGH!
My point is only that it is obvious that we are heading towards something
really quickly, with unstoppable inertia, and unless some world tyrant
crushed all freedoms and prevented everyone from doing what they are doing,
there is no way that it is not going to happen.
Most people on
Our attractions to others - why we choose them as friends or lovers - are
actually v. complex.
The example of Love at first sight proves that your statement is not
universally true.
You seem to have an awful lot of unfounded beliefs that you persist in
believing as facts.
- Original
I think Ben's text mining approach has one big flaw: it can only reason
about existing knowledge, but cannot generate new ideas using words /
concepts
There is a substantial amount of literature that claims that *humans* can't
generate new ideas de novo either -- and that they can only
Water does not always run downhill, sometimes it runs uphill.
But never without a reason.
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, February 20, 2008 9:47 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?
C is
http://www.seedmagazine.com/news/2007/07/rise_of_roboethics.php
All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.
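(To make the multiple extension problem concrete: the classic Nixon
diamond, sketched as toy code. This is the standard textbook example,
not taken from Pei's paper.)

# Nixon diamond: two default rules, two incompatible extensions.
facts = {"quaker(nixon)", "republican(nixon)"}
defaults = [
    ("quaker(nixon)", "pacifist(nixon)"),        # Quakers are normally pacifists
    ("republican(nixon)", "-pacifist(nixon)"),   # Republicans normally aren't
]

def extensions(facts, defaults):
    """Apply the defaults in every order, keeping each only if consistent."""
    results = set()
    for first, second in [(0, 1), (1, 0)]:
        ext = set(facts)
        for i in (first, second):
            pre, concl = defaults[i]
            negation = concl[1:] if concl.startswith("-") else "-" + concl
            if pre in ext and negation not in ext:
                ext.add(concl)
        results.add(frozenset(ext))
    return results

for ext in extensions(facts, defaults):
    print(sorted(ext))  # one extension has pacifist(nixon), the other its negation

Binary logic gives no domain-independent reason to prefer either
extension, which is exactly the problem the quoted paragraph names.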
Pei,
Do you have a
Mark, my point is that while in the past evolution did the choosing,
now it's *we* who decide,
But the *we* who is deciding was formed by evolution. Why do you do
*anything*? I've heard that there are four basic goals that drive every
decision: safety, feeling good, looking good, and being
I know that you can do stuff like this with Microsoft's new SilverLight.
For example, http://www.devx.com/dotnet/Article/36544
- Original Message -
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, January 30, 2008 12:44 PM
Subject: [agi] Request for Help
?
- Original Message -
From: Vladimir Nesov [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, January 30, 2008 2:14 PM
Subject: Re: [agi] OpenMind, MindPixel founders both commit suicide
On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote:
Ethics only becomes snarled when one
Ethics only becomes snarled when one is unwilling to decide/declare what the
goal of life is.
Extrapolated Volition comes down to a homunculus depending upon the definition
of wiser or saner.
Evolution has decided what the goal of life is . . . . but most are unwilling
to accept it (in part
http://www.msnbc.msn.com/id/18684016/?GT1=9951
I don't get it. It says that flies move in accordance with a
non-flat distribution instead of a flat distribution. That has
nothing to do with free will. The writers assume that non-flat
distribution = free will.
You need to read more fully
For example, hunger is an emotion, but the
desire for money to buy food is not
Hunger is a sensation, not an emotion.
The sensation is unpleasant and you have a hard-coded goal to get rid of it.
Further, desires tread pretty close to the line of emotions if not actually
crossing over . . . .
One of the things that I quickly discovered when first working on my
convert it all to Basic English project is that the simplest words
(prepositions and the simplest verbs in particular) are the biggest problem
because they have so many different (though obscurely related) meanings (not
to
In our rule encoding approach, we will need about 5000 mapping rules to
map syntactic parses of commonsense sentences into term logic
relationships. Our inference engine will then generalize these into
hundreds of thousands or millions of specialized rules.
How would your rules handle the on
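(I don't know what the actual rule format looks like, but a minimal
sketch of the general idea -- syntactic pattern on the left, term logic
relationship on the right; all names here are mine:)

# Hypothetical sketch of syntax-to-term-logic mapping rules.
# Pattern: a dependency triple (subject, copula/verb, object);
# output: a term logic relationship.
RULES = [
    # "X is a Y"  ->  Inheritance(X, Y)
    (lambda s, v, o: v in ("is", "is a", "are"),
     lambda s, v, o: ("Inheritance", s, o)),
    # generic transitive verb: "X eats Y"  ->  Evaluation(eats, X, Y)
    (lambda s, v, o: True,
     lambda s, v, o: ("Evaluation", v, s, o)),
]

def map_parse(triple):
    s, v, o = triple
    for matches, build in RULES:
        if matches(s, v, o):
            return build(s, v, o)

print(map_parse(("cat", "is a", "animal")))  # ('Inheritance', 'cat', 'animal')
print(map_parse(("cat", "eats", "fish")))    # ('Evaluation', 'eats', 'cat', 'fish')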
Hey Ben,
Any chance of instituting some sort of moderation on this list?
- Original Message -
From: Ed Porter [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, December 11, 2007 10:18 AM
Subject: RE: [agi] AGI and Deity
Mike:
MIKE TINTNER# Science's autistic,
THE KEY POINT I WAS TRYING TO GET ACROSS WAS ABOUT NOT HAVING TO
EXPLICITLY DEAL WITH 500K TUPLES
And I asked -- Do you believe that this is some sort of huge conceptual
breakthrough?
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options,
Ed,
Get a grip. Try to write with complete words in complete sentences
(unless discreted means a combination of excreted and discredited -- which
works for me :-).
I'm not coming back for a second swing. I'm still pursuing the first
one. You just aren't oriented well enough to
With regard to your questions below, if you actually took the time to
read my prior responses, I think you will see I have substantially
answered them.
No, Ed. I don't see that at all. All I see is you refusing to answer them
even when I repeatedly ask them. That's why I asked them again.
Interesting. Since I am interested in parsing, I read Collins' paper. It's a
solid piece of work (though with the stated error percentages, I don't believe
that it really proves anything worthwhile at all) -- but your
over-interpretations of it are ridiculous.
You claim that It is actually
ED PORTER= The 500K dimensions were mentioned several times in a
lecture Collins gave at MIT about his parse. This was probably 5 years ago
so I am not 100% sure the number was 500K, but I am about 90% sure that was
the number used, and 100% sure the number was well over 100K.
OK. I'll
to match it is potentially capable of
matching it against any of its dimensions.
Ed Porter
-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 05, 2007 3:07 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI
<HeavySarcasm>Wow. Is that what dot products are?</HeavySarcasm>
You're confusing all sorts of related concepts with a really garbled
vocabulary.
Let's do this with some concrete 10-D geometry . . . . Vector A runs from
(0,0,0,0,0,0,0,0,0,0) to (1, 1, 0,0,0,0,0,0,0,0). Vector B runs from
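(The archive truncates the example before Vector B's endpoint; filling
in an assumed B purely for illustration, the arithmetic runs like this:)

import math

# Vector A as given; Vector B's endpoint is my assumption (the original
# message is truncated), chosen to overlap A in exactly one dimension.
A = (1, 1, 0, 0, 0, 0, 0, 0, 0, 0)
B = (0, 1, 1, 0, 0, 0, 0, 0, 0, 0)  # assumed for illustration

dot = sum(a * b for a, b in zip(A, B))
cos = dot / (math.sqrt(sum(a * a for a in A)) * math.sqrt(sum(b * b for b in B)))

print(dot)            # 1: one shared active dimension
print(round(cos, 3))  # 0.5: the angle between A and B is 60 degrees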
:58PM -0500, Mark Waser wrote:
So perhaps the AGI question is, what is the difference between
a know-it-all mechano-librarian, and a sentient being?
I wasn't assuming a mechano-librarian. I was assuming a human that could
(and might be trained to) do some initial translation of the question and
some
, 2007 11:36 AM, Mark Waser [EMAIL PROTECTED] wrote:
I am extremely confident of Novamente's memory design regarding
declarative and procedural knowledge. Tweaking the system for optimal
representation of episodic knowledge may require some more thought.
Granted -- the memory design
I'm more interested at this stage in analogies like
-- between seeking food and seeking understanding
-- between getting an object out of a hole and getting an object out of a
pocket, or a guarded room
Why would one need to introduce advanced scientific concepts to an
early-stage AGI? I don't
I don't know at what point you'll be blocked from answering by confidentiality
concerns but I'll ask a few questions you hopefully can answer like:
1. How is the information input and stored in your system (i.e. Is it more
like simple formal assertions with a restricted syntax and/or language
, November 12, 2007 2:57 PM
Subject: Re: [agi] What best evidence for fast AI?
On Nov 12, 2007 2:51 PM, Mark Waser [EMAIL PROTECTED] wrote:
I don't know at what point you'll be blocked from answering by
confidentiality concerns
I can't say much more than I will do in this email, due
I'm going to try to put some words into Richard's mouth here since I'm
curious to see how close I am . . . . (while radically changing the words).
I think that Richard is not arguing about the possibility of Novamente-type
solutions as much as he is arguing about the predictability of
. . . . :-) which is why I figured I'd run this
out there and see how he reacted. :-)
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Monday, November 12, 2007 5:14 PM
Subject: Re: [agi] What best evidence for fast AI?
On Nov 12, 2007 5:02 PM, Mark
of user dissatisfaction.
Mark
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Monday, November 12, 2007 7:10 PM
Subject: Re: [agi] What best evidence for fast AI?
On Nov 12, 2007 6:56 PM, Mark Waser [EMAIL PROTECTED] wrote
at 06:56:51PM -0500, Mark Waser wrote:
It will happily include irrelevant facts
Which immediately makes it *not* relevant to my point.
Please read my e-mails more carefully before you hop on with ignorant
flames.
I read your emails, and, mixed in with some insightful and highly
relevant
I would bet that merging two KB's obtained by mining natural
language would work a lot better than merging two KB's
like Cyc and SUMO that were artificially created by humans.
I think that this phrasing confuses the issue. It is the structure of the
final KR scheme, not how the initial KBs
my inclination has been to see progress as very slow toward an
explicitly-coded AI, and so to guess that the whole brain emulation approach
would succeed first
Why are you not considering a seed/learning AGI?
- Original Message -
From: Robin Hanson
To: agi@v2.listbox.com
Looks like they were just simulating eight million neurons with up to
6.3k synapses each. How's that necessarily a mouse simulation, anyway?
It really isn't because the individual neuron behavior is so *vastly*
simplified. It is, however, a necessary first step and likely to teach us
*a
If I see garbage being peddled as if it were science, I will call it
garbage.
Amen. The political correctness of forgiving people for espousing total
BS is the primary cause of many egregious things going on for far, *far* too
long.
/22/07, Mark Waser [EMAIL PROTECTED] wrote:
If I see garbage being peddled as if it were science, I will call it
garbage.
Amen. The political correctness of forgiving people for espousing total
BS is the primary cause of many egregious things going on for far, *far*
too
long
-- I think Granger's cog-sci speculations, while oversimplified and surely
wrong in parts, contain important hints at the truth (and in my prior email
I tried to indicate how)
-- Richard OTOH, seems to consider Granger's cog-sci speculations total
garbage
This is a significant difference
Arthur,
There was no censorship. We all saw that message go by. We all just
ignored it. Take a hint.
- Original Message -
From: A. T. Murray [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 22, 2007 10:35 AM
Subject: [agi] Re: Bogus Neuroscience [...]
On Oct
So, one way to summarize my view of the paper is
-- The neuroscience part of Granger's paper tells how these
library-functions may be implemented in the brain
-- The cog-sci part consists partly of
- a) the hypothesis that these library-functions are available to
cognitive programs
I think we've beaten this horse to death . . . . :-)
However, he has some interesting ideas about the connections between
cognitive primitives and neurological structures/dynamics. Connections of
this nature are IMO cog sci rather than just neurosci. At least, that
is consistent with
What I'd like is a mathematical estimate of why a graphic or image (or any
form of physical map) is a vastly - if not infinitely - more efficient way
to store information than a set of symbols.
Yo troll . . . . a graphic or image is *not* a vastly - if not infinitely -
more efficient way to
fit, it's also about as efficient as a Turing
machine. So this isn't an argument that you REALLY can't use a relational
db for all of your representations, but rather that it's a really bad
idea.)
Mark Waser wrote:
But how much information is in a map, and how much in the relationship
database
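(A back-of-envelope sketch of why neither representation is inherently
more efficient -- numbers mine, purely illustrative: a bitmap pays per
cell, a symbolic store pays per object, so the winner depends entirely
on how dense the scene is.)

import math

# Back-of-envelope: 100x100 binary "map" vs. a symbolic list of objects.
side, n_objects = 100, 20

bitmap_bits = side * side                      # 10,000 bits, regardless of content
coord_bits = 2 * math.ceil(math.log2(side))    # x and y coordinates, 7 bits each
symbolic_bits = n_objects * coord_bits         # 280 bits for a sparse scene

print(bitmap_bits, symbolic_bits)  # sparse scene: symbols win by ~35x
# With thousands of occupied cells the comparison flips -- so "pictures
# are vastly more efficient" cannot be true in general.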
approximating in shape the object
being visualized. (This doesn't say anything about how the information is
stored.)
Mark Waser wrote:
Another way of putting my question/ point is that a picture (or map) of
your face is surely a more efficient, informational way to store your
face than any set
:-).
- Original Message -
From: a [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, October 13, 2007 8:51 AM
Subject: Re: [agi] The Grounding of Maths
Mark Waser wrote:
Only from your side. Science looks at facts. I have the irrefutable
fact of intelligent blind people. You have
Enjoying trolling, Ben? :-)
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 9:55 AM
Subject: Re: [agi] Do the inference rules.. P.S.
On 10/12/07, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
No.
Visualspatial intelligence is required for almost anything.
I'm sorry. This is all pure, unadulterated BS. You need spatial
intelligence (i.e. a world model). You do NOT need visual anything. The
only way in which you need visual is if you contort its meaning until it
effectively means
intelligence that is necessary but
since I think that vision can emulate it maybe I can argue that it is vision
that is necessary.
- Original Message -
From: a [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 12, 2007 5:38 PM
Subject: Re: [agi] The Grounding of Maths
Mark Waser
Look at the article and it mentions spatial and vision are interrelated:
No. It clearly spells out that vision requires spatial processing -- and
says *NOTHING* about the converse.
Dude, you're a broken record. Intelligence requires spatial. Vision
requires spatial. Intelligence does
] The Grounding of Maths
Mark Waser wrote:
You have shown me *ZERO* evidence that vision is required for
intelligence, and blind-from-birth individuals provide virtual proof
positive that vision is not necessary for intelligence. How can you
continue to argue the converse?
It is my solid opinion
Concepts cannot be grounded without vision.
So . . . . explain how people who are blind from birth are functionally
intelligent.
It is impossible to completely understand natural language without
vision.
So . . . . you believe that blind-from-birth people don't completely
understand
I agree . . . . there are far too many people spouting off without a clue
without allowing them to spout off off-topic as well . . . .
- Original Message -
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 4:44 PM
Subject: [agi] Re:
.listbox.com
Sent: Thursday, October 11, 2007 5:24 PM
Subject: Re: [agi] Do the inference rules.. P.S.
Mark Waser wrote:
Concepts cannot be grounded without vision.
So . . . . explain how people who are blind from birth are functionally
intelligent.
It is impossible to completely understand
).
Why can't echo-location lead to spatial perception without vision? Why
can't touch?
- Original Message -
From: a [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 11, 2007 5:54 PM
Subject: Re: [agi] Do the inference rules.. P.S.
Mark Waser wrote:
I'll buy internal
It looks to me as if NARS can be modeled by a prototype based language
with operators for "is an ancestor of" and "is a descendant of".
I don't believe that this is the case at all. NARS correctly handles
cases where entities co-occur or where one entity implies another only due
to other
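(A sketch of what the crisp operators miss -- simplified along the lines
of Wang's NAL deduction rule; the exact truth functions are in his
papers: NARS statements carry graded evidence, not crisp set membership.)

# Simplified NARS-style truth values: (frequency, confidence).
# A crisp "is an ancestor of" operator cannot represent these grades.
def deduction(t1, t2):
    """NAL-style deduction: from (S->M, t1) and (M->P, t2) derive (S->P)."""
    (f1, c1), (f2, c2) = t1, t2
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return (f, c)

robin_bird = (1.0, 0.9)   # "robin -> bird" with strong evidence
bird_flier = (0.9, 0.8)   # "bird -> flier", known exceptions lower frequency

print(deduction(robin_bird, bird_flier))  # (0.9, 0.648): weaker than either premise

The derived conclusion is weaker than both premises, which is exactly the
behavior a pure ancestor/descendant hierarchy cannot express.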
, but also from other readers.
Edward W. Porter
Porter Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 9:46 AM
to be beyond what I have read.
Edward W. Porter
Porter Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 12:47 PM
From: William Pearson [EMAIL PROTECTED]
Laptops aren't TMs.
Please read the wiki entry to see that my laptop isn't a TM.
But your laptop can certainly implement/simulate a Turing Machine (which was
the obvious point of the post(s) that you replied to).
Seriously, people, can't we lose all
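(For the record, the usual sense of the claim: a laptop can simulate any
Turing machine up to its memory limits. A minimal simulator, my own toy
example, not from the thread:)

# Minimal Turing machine simulator: (state, symbol) -> (state, symbol, move).
# The tape is a dict; unwritten cells default to the blank '_'.
def run_tm(delta, tape_str, state="q0", blank="_", max_steps=10_000):
    tape = {i: s for i, s in enumerate(tape_str)}
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, sym, move = delta[(state, tape.get(head, blank))]
        tape[head] = sym
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: append one '1' to a unary number (scan right, write at blank).
delta = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("halt", "1", "R"),
}
print(run_tm(delta, "111"))  # -> 1111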