Hi Richard,
I'll de-lurk here to say that I find this email to be utterly reasonable, and
that's with my crackpot detectors going off a lot lately, no offense to you of
course.
I do disagree that complexity is not its own science. I'm not wedded to the
idea, like the folks you profile in
Hi Ben,
I don't think the flaw you have identified matters to the main thrust of
Richard's argument - and if you haven't summarized Richard's position
precisely, you have summarized mine. :-]
You're saying the flaw in that position is that prediction of complex networks
might merely be a
--- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
but I don't agree that predicting **which** AGI designs can lead
to the emergent properties corresponding to general intelligence,
is pragmatically impossible to do in an analytical and rational way ...
OK, I grant you that you may be
As far as I can tell, all you've done is give the irreducibility a name:
statistical mechanics. You haven't explained how the arrow of time emerges
from the local level to the global. Or, maybe I just don't understand it... can
you dumb it down for me?
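To make the request concrete, here's the kind of toy demonstration that would satisfy me - a Python sketch of my own (standard library only, nothing from your post): the microscopic rule below is deterministic and time-reversible, yet the coarse-grained entropy of the particle distribution still climbs once you start from an ordered state.

import math, random

# Free flight in a box with elastic wall bounces: deterministic and
# reversible at the micro level. We watch a coarse-grained entropy.
random.seed(0)
N, BINS, L, dt = 5000, 20, 1.0, 0.01
pos = [random.uniform(0, L / 10) for _ in range(N)]  # ordered: all at left
vel = [random.uniform(-1, 1) for _ in range(N)]

def coarse_entropy(xs):
    # Shannon entropy of the occupancy histogram over BINS equal cells.
    counts = [0] * BINS
    for x in xs:
        counts[min(int(x / L * BINS), BINS - 1)] += 1
    return -sum(c / N * math.log(c / N) for c in counts if c)

for step in range(401):
    if step % 100 == 0:
        print(f"t={step * dt:.1f}  S={coarse_entropy(pos):.3f}")
    for i in range(N):
        x = pos[i] + vel[i] * dt
        if x < 0 or x > L:  # elastic, reversible bounce
            vel[i] = -vel[i]
            x = -x if x < 0 else 2 * L - x
        pos[i] = x

The printed entropy rises toward its maximum (ln 20, about 3.0) even though every micro-step is reversible; the 'arrow' lives in the coarse-graining plus the low-entropy initial condition. Is that the kind of story you're telling?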
Terren
--- On Mon, 6/30/08, Lukasz
Hi William,
A Von Neumann computer is just a machine. Its only purpose is to compute. When
you get into higher-level purpose, you have to go up a level to the stuff being
computed. Even then, the purpose is in the mind of the programmer. The only way
to talk coherently about purpose within
Hi Mike,
Evidently I didn't communicate that so clearly because I agree with you 100%.
Terren
--- On Mon, 6/30/08, Mike Tintner [EMAIL PROTECTED] wrote:
Terren: One of the basic threads of scientific progress is the ceaseless
denigration of the idea that there is something special about
Hi Will,
--- On Mon, 6/30/08, William Pearson [EMAIL PROTECTED] wrote:
The only way to talk coherently about purpose within the computation is to
simulate self-organized, embodied systems.
I don't think you are quite getting my system. If you had a bunch of programs
that did the
Terren Suydam [EMAIL PROTECTED] wrote:
Ben,
I agree, an evolved design has limits too, but the key difference between a
contrived design and one that is allowed to evolve is that the evolved
critter's intelligence is grounded in the context of its own 'experience',
whereas
Will,
I think the original issue was about purpose. In your system, since a human is
the one determining which programs are performing the best, the purpose is
defined in the mind of the human. Beyond that, it certainly sounds as if it is
a self-organizing system.
Terren
--- On Tue,
Hi Mike,
My points about the pitfalls of theorizing about intelligence apply to any and
all humans who would attempt it - meaning, it's not necessary to characterize
AI folks in one way or another. There are any number of aspects of intelligence
we could highlight that pose a challenge to
Nevertheless, generalities among different instances of complex systems have
been identified, see for instance:
http://en.wikipedia.org/wiki/Feigenbaum_constants
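To make that concrete, here's a minimal sketch of my own (Python, standard
library only - not from the Wikipedia page) that estimates Feigenbaum's delta
from the logistic map's period-doubling cascade, using the superstable
parameter values R_n at which the critical point x = 0.5 lies on a
period-2^n orbit:

def orbit_and_deriv(r, n):
    # Return f^(2^n)(0.5) and its derivative w.r.t. r, via the chain rule,
    # for the logistic map f(x) = r*x*(1-x).
    x, dx = 0.5, 0.0
    for _ in range(2 ** n):
        x, dx = r * x * (1 - x), x * (1 - x) + r * (1 - 2 * x) * dx
    return x, dx

def superstable_r(n, guess):
    # Newton's method for the r where f^(2^n)(0.5) = 0.5.
    r = guess
    for _ in range(60):
        x, dx = orbit_and_deriv(r, n)
        step = (x - 0.5) / dx
        r -= step
        if abs(step) < 1e-13:
            break
    return r

rs = [2.0, 1.0 + 5 ** 0.5]  # exact superstable r for periods 1 and 2
for n in range(2, 9):
    # Each gap shrinks by roughly delta, so extrapolate to get a close guess.
    rs.append(superstable_r(n, rs[-1] + (rs[-1] - rs[-2]) / 4.669))

for n in range(1, len(rs) - 1):
    delta = (rs[n] - rs[n - 1]) / (rs[n + 1] - rs[n])
    print(f"period 2^{n+1}: delta estimate {delta:.6f}")  # -> 4.669201...

Successive ratios converge toward delta = 4.669201..., and the same constant
turns up for a whole class of one-hump maps - that's exactly the kind of
generality across complex systems I mean.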
Terren
--- On Tue, 7/1/08, Russell Wallace [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
My scepticism comes mostly from my
--- On Tue, 7/1/08, John G. Rose [EMAIL PROTECTED] wrote:
BUT there are some circuits I believe, can't think of any offhand, where the
opposite is true. It just kind of works based on complex subsystems'
interoperational functionality, and it was discovered, not designed
Mike,
This is going too far. We can reconstruct to a considerable
extent how humans think about problems - their conscious thoughts.
Why is it going too far? I agree with you that we can reconstruct thinking, to
a point. I notice you didn't say we can completely reconstruct how humans
Mike,
That's a rather weak reply. I'm open to the possibility that my ideas are
incorrect or need improvement, but calling what I said nonsense without further
justification is just hand waving.
Unless you mean this as your justification:
Your conscious, inner thoughts are not that different
Will,
My plan is to go for 3) Usefulness. Cognition is useful from an evolutionary
point of view; if we try to create systems that are useful in the same
situations (social, building world models), then we might one day stumble
upon cognition.
Sure, that's a valid approach for creating
That may be true, but it misses the point I was making, which was a response to
Richard's lament about the seeming lack of any generality from one complex
system to the next. The fact that Feigenbaum's constants describe complex
systems of different kinds is remarkable because it suggests an
Will,
Remember when I said that a purpose is not the same thing as a goal?
The purpose that the system might be said to have embedded is attempting to
maximise a certain signal. This purpose presupposes no ontology. The fact
that this signal is attached to a human means the system as a
Will,
--- On Fri, 7/4/08, William Pearson [EMAIL PROTECTED] wrote:
Does the following make sense? The purpose embedded within the system will be
to try and make the system not decrease in its ability to receive some
abstract number.
The way I connect up the abstract number to the real world
Will,
--- On Fri, 7/11/08, William Pearson [EMAIL PROTECTED] wrote:
Purpose and goal are not intrinsic to systems.
I agree this is true with designed systems. The designed system is ultimately
an extension of the designer's mind, wherein lies the purpose. Of course, as
you note, the system
Will,
--- On Tue, 7/15/08, William Pearson [EMAIL PROTECTED] wrote:
And I would also say of evolved systems. My finger's purpose could equally
well be said to be for picking ticks out of the hair of my kin or for touch
typing. E.g. why do I keep my fingernails short, so that they do not
Brad,
--- On Wed, 7/30/08, Brad Paulsen [EMAIL PROTECTED] wrote:
As to your cheerleader, she's just made my kill-list.
The only thing worse than someone who slings unsupported opinions around like
they're facts is someone who slings someone else's unsupported opinions
around like
Just to throw my 2 cents in here. The short version: if you want to improve the
list, look to yourself. Don't rely on moderation.
If you have something worth posting, post it without fear of rude responses. If
people are rude, don't be rude back. Resist the urge to fire off the quick
reply
Harry,
Count me in the camp that views grounding as the essential problem of
traditional AI approaches, at least as it relates to AGI. An embodied AI [*],
in which the only informational inputs to the AI come via so-called sensory
modalities, is the only way I can see for an AI to arrive at
I won't bother to define ongoing experience unless someone asks me to, at the
risk of putting people to sleep.
Terren
--- On Mon, 8/4/08, Harry Chesley [EMAIL PROTECTED] wrote:
Terren Suydam wrote:
...
Without an internal sense of meaning, symbols passed to the AI are simply
arbitrary
The Chinese Room argument counters only the assertion that the computational
mechanism that manipulates symbols is capable of understanding. But in more
sophisticated approaches to AGI, the computational mechanism is not the agent,
it's merely a platform.
Take the OpenCog design. See in
/#4.3 could also be taken to apply to your response, but I won't quote that
one.
Sincerely,
Abram Demski
Abram,
If that's your response then we don't actually agree.
I agree that the Chinese Room does not disprove strong AI, but I think it is a
valid critique for purely logical or non-grounded approaches. Why do you think
the critique fails on that level? Anyone else who rejects the Chinese
be comfortable with
all of the above.
Terren
On 8/6/08, Terren Suydam [EMAIL PROTECTED] wrote:
Hi Valentina,
I think the distinction you draw between the two kinds of understanding is
illusory. Mutual human experience is also an emergent phenomenon. Anyway,
that's not the point of the Chinese Room argument, which doesn't say
Ok, I really don't see how it proves that then. In my view, the book could be
replaced with a Chinese-English translator and the exact same outcome would
result. Both are using their static knowledge for this process, not experience.
On 8/6/08, Terren Suydam [EMAIL PROTECTED
Hi Abram,
Sorry, your message did slip through the cracks. I intended to respond
earlier... here goes.
--- On Wed, 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
I explained somewhat in my first reply to this thread. Basically, as I
understand you, you are saying
Harry,
--- On Wed, 8/6/08, Harry Chesley [EMAIL PROTECTED] wrote:
But it's a preaching-to-the-choir argument: is there anything more to the
argument than the intuition that automatic manipulation cannot create
understanding? I think it can, though I have yet to show it.
The burden is on
. However, it does not do so by itself, and in my opinion it would be clearer
to come up with a different argument rather than fixing that one.
-Abram Demski
She's not asking about the kind of embodiment, she's asking what's the use of a
non-embodied AGI. Your quotation, dealing as it does with low-level input, is
about embodied AGI.
--- On Fri, 8/22/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Thanks Vlad, I read all that stuff plus other Eliezer
Eric,
You lower the quality of this list with comments like that. It's the kind of
thing that got people wondering a month ago whether moderation is necessary on
this list. If we're all adults, moderation shouldn't be necessary.
Jim, do us all a favor and don't respond to that, as tempting as
about constantly when going places or discussing anything is the quality of
discourse.
that processes the specified goals and
knowledge dovetail with the constructions that emerge from the embodied senses?
Ben, any thoughts on that?
Terren
--- On Sat, 8/23/08, Terren Suydam [EMAIL PROTECTED] wrote:
Yeah, that's where the misunderstanding is... low-level input is too fuzzy a
concept
comments below...
--- On Sat, 8/23/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
The last post by Eliezer provides handy imagery for this point
(http://www.overcomingbias.com/2008/08/mirrors-and-pai.html). You can't have
an AI of perfect emptiness, without any goals at all, because it
--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
What do you mean by 'does not structure'? What do you mean by 'fully' or
'not fully embodied'?
I've already discussed what I mean by embodiment in a previous post, the one
that immediately preceded the post you initially responded to.
Actually, kittens play because it's fun. Evolution has equipped them with the
rewarding sense of fun because it optimizes their fitness as hunters. But
kittens are adaptation executors, evolution is the fitness optimizer. It's a
subtle but important distinction.
See
Hi Vlad,
Thanks for taking the time to read my article and pose excellent questions. My
attempts at answers below.
--- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam wrote:
What is the point of building general intelligence if all it does
I'm not saying play isn't adaptive. I'm saying that kittens play not because
they're optimizing their fitness, but because they're intrinsically motivated
to (it feels good). The reason it feels good has nothing to do with the kitten,
but with the evolutionary process that designed that
Hi Mike,
As may be obvious by now, I'm not that interested in designing cognition. I'm
interested in designing simulations in which intelligent behavior emerges.
But the way you're using the word 'adapt', in a cognitive sense of playing with
goals, is different from the way I was using
Saying that a particular cat instance hunts because it feels good is not very
explanatory
Even if I granted that, saying that a particular cat plays to increase its
hunting skills is incorrect. It's an important distinction because by analogy
we must talk about particular AGI instances.
Hi Will,
I don't doubt that provable-friendliness is possible within limited,
well-defined domains that can be explicitly defined and hard-coded. I know
chess programs will never try to kill me.
I don't believe however that you can prove friendliness within a framework that
has the
If an AGI played because it recognized that it would improve its skills in some
domain, then I wouldn't call that play, I'd call it practice. Those are
overlapping but distinct concepts.
Play, as distinct from practice, is its own reward - the reward felt by a
kitten. The spirit of Mike's
Hi Mike,
Comments below...
--- On Mon, 8/25/08, Mike Tintner [EMAIL PROTECTED] wrote:
Two questions: 1) how do you propose that your simulations will avoid the
kind of criticisms you've been making of other systems, of being too guided
by programmers' intentions? How can you set up a
considerate or innocuous. But I don't know
Hi Johnathon,
I disagree, play without rules can certainly be fun. Running just to run,
jumping just to jump. Play doesn't have to be a game, per se. It's simply a
purposeless expression of the joy of being alive. It turns out of course that
play is helpful for achieving certain goals that we
Hi David,
Any amount of guidance in such a simulation (e.g. to help avoid so many of
the useless eddies in a fully open-ended simulation) amounts to designed
cognition.
No, it amounts to guided evolution. The difference between a designed
simulation and a designed cognition is the focus on
That's a fair criticism. I did explain what I mean by embodiment in a previous
post, and what I mean by autonomy in the article of mine I referenced. But I do
recognize that in both cases there is still some ambiguity, so I will withdraw
the question until I can formulate it in more concise
Are you saying Friendliness is not context-dependent? I guess I'm struggling
to understand what a conceptual dynamics would mean that isn't dependent on
context. The AGI has to act, and at the end of the day, its actions are our
only true measure of its Friendliness. So I'm not sure what it
I don't think it's necessary to be self-aware to do self-modifications.
Self-awareness implies that the entity has a model of the world that separates
self from other, but this kind of distinction is not necessary to do
self-modifications. It could act on itself without the awareness that it
If Friendliness is an algorithm, it ought to be a simple matter to express what
the goal of the algorithm is. How would you define Friendliness, Vlad?
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
It is expressed in individual decisions, but it isn't
these decisions
The algorithm doesn't need to be simple. The actual Friendly AI that
started to incorporate
It doesn't matter what I do with the question. It only matters what an AGI does
with it.
I'm challenging you to demonstrate how Friendliness could possibly be specified
in the formal manner that is required to *guarantee* that an AI whose goals
derive from that specification would actually
--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
But what is safe, and how do we improve safety? This is a complex goal for a
complex environment, and naturally any solution to this goal is going to be
very intelligent. Arbitrary intelligence is not safe (fatal, really), but
what is
It doesn't matter what I do with the question. It only matters what an AGI
does with it.
AGI doesn't do anything with the question; you do. You answer the question by
implementing Friendly AI. FAI is the answer to the question.
The question is: how could one specify Friendliness in
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
One of the main motivations for the fast development of Friendly AI is that
it can be allowed to develop superintelligence to police the human space from
global catastrophes like Unfriendly AI, which includes as a special case a
--- On Thu, 8/28/08, Mark Waser [EMAIL PROTECTED] wrote:
Actually, I *do* define good and ethics not only in evolutionary terms but as
being driven by evolution. Unlike most people, I believe that ethics is
*entirely* driven by what is best evolutionarily while not believing at all
in
Hi Jiri,
Comments below...
--- On Thu, 8/28/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
That's difficult to reconcile if you don't believe embodiment is all that
important.
Not really. We might be qualia-driven, but for our AGIs it's perfectly ok
(and only natural) to be driven by given
from others.
- Original Message - From: Terren Suydam [EMAIL PROTECTED]; Subject: Re:
AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The
Necessity of Embodiment))
--- On Thu, 8/28
Jiri,
I think where you're coming from is a perspective that doesn't consider or
doesn't care about the prospect of a conscious intelligence, an awake being
capable of self reflection and free will (or at least the illusion of it).
I don't think any kind of algorithmic approach, which is to
--- On Fri, 8/29/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
I don't see why an un-embodied system couldn't successfully use the concept
of self in its models. It's just another concept, except that it's linked to
real features of the system.
To an unembodied agent, the concept of self is
--- On Fri, 8/29/08, Mark Waser [EMAIL PROTECTED] wrote:
Saying that ethics is entirely driven by evolution is NOT the same as saying
that evolution always results in ethics. Ethics is
computationally/cognitively expensive to successfully implement (because a
stupid implementation gets
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
You start with 'what is right?' and end with Friendly AI; you don't start
with Friendly AI and close the circular argument. This doesn't answer the
question, but it defines Friendly AI and thus Friendly AI (in terms of right).
In
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Won't work, Moore's law is ticking, and one day a morally arbitrary
self-improving optimization will go FOOM. We have to try.
I wish I had a response to that. I wish I could believe it was even possible.
To me, this is like saying
I agree with that to the extent that theoretical advances could address the
philosophical objections I am making. But until those are dealt with,
experimentation is a waste of time and money.
If I was talking about how to build faster-than-lightspeed travel, you would
want to know how I plan
comments below...
[BG]
Hi,
Your philosophical objections aren't really objections to my perspective, so
far as I have understood so far...
[TS]
Agreed. They're to the Eliezer perspective that Vlad is arguing for.
[BG]
I don't plan to hardwire beneficialness (by which I may not mean precisely
Hi Ben,
My own feeling is that computation is just the latest in a series of technical
metaphors that we apply in service of understanding how the universe works.
Like the others before it, it captures some valuable aspects and leaves out
others. It leaves me wondering: what future metaphors
On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam
[EMAIL PROTECTED] wrote:
Hi Vlad,
Thanks for the response. It seems that you're advocating an incremental
approach *towards* FAI
On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam
[EMAIL PROTECTED] wrote:
I'm asserting that if you had an FAI in the sense you've described, it
wouldn't be possible in principle
Hi Mike,
I see two ways to answer your question. One is along the lines that Jaron
Lanier has proposed - the idea of software interfaces that are fuzzy. So rather
than function calls that take a specific set of well defined arguments,
software components talk somehow in 'patterns' such that
Mike,
There's nothing particularly creative about keyboards. The creativity comes
from what uses the keyboard. Maybe that was your point, but if so the
digression about a keyboard is just confusing.
In terms of a metaphor, I'm not sure I understand your point about
organizers. It seems to me
Hi Ben,
You may have stated this explicitly in the past, but I just want to clarify -
you seem to be suggesting that a phenomenological self is important if not
critical to the actualization of general intelligence. Is this your belief, and
if so, can you provide a brief justification of
Mike,
Thanks for the reference to Denis Noble, he sounds very interesting and his
views on Systems Biology as expressed on his Wikipedia page are perfectly in
line with my own thoughts and biases.
I agree in spirit with your basic criticisms regarding current AI and
creativity. However, it
OK, I'll bite: what's nondeterministic programming if not a contradiction?
--- On Thu, 9/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
Nah. One word (though it would take too long here to explain);
nondeterministic programming.
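For reference, the standard computer-science sense of the term - which may or
may not be what Mike means - is a program that specifies a set of allowed
choices plus a success test, leaving the exploration order unspecified; on a
real machine it is, of course, implemented deterministically, e.g. by
backtracking. A minimal Python sketch of my own (the amb name is borrowed
from McCarthy's ambiguous-choice operator; the example is illustrative):

from itertools import product

def amb(*choice_sets):
    # Yield every combination of choices, one branch at a time.
    yield from product(*choice_sets)

# Ask for two numbers whose product is 12, without fixing which pair to try.
for x, y in amb(range(1, 7), range(1, 7)):
    if x * y == 12:
        print(x, y)  # whichever satisfying branch the search reaches first
        break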
Hi Mike, comments below...
--- On Fri, 9/5/08, Mike Tintner [EMAIL PROTECTED] wrote:
Again - v. briefly - it's a reality - nondeterministic programming is a
reality, so there's no material, mechanistic, software problem in getting a
machine to decide either way.
This is inherently
On Friday 05 September 2008, Terren Suydam wrote:
So, Mike, is free will:
1) an illusion based on some kind of unpredictable, complex but
*deterministic* interaction of physical components 2) the result
Hi Mike,
Good summary. I think your point of view is valuable in the sense of helping
engineers in AGI to see what they may be missing. And your call for technical
AI folks to take up the mantle of more artistic modes of intelligence is also
important.
But it's empty, for you've
Hi Mike,
It's not so much the *kind* of programming that I or anyone else could
recommend, it's just the general skill of programming - getting used to
thinking in terms of, how exactly do I solve this problem - what model or
procedure do I create? How do you specify something so completely
Hi all,
Came across this article called Pencils and Politics. Though a bit of a
tangent, it's the clearest explanation of self-organization in economics I've
encountered.
http://www.newsweek.com/id/158752
I send this along because it's a great example of how systems that
self-organize can
Once again, I'm not saying that modeling an economy is all that's necessary to
explain intelligence. I'm not even saying it's a necessary condition of it.
What I am saying is that it looks very likely that the brain/mind is
self-organized, and for those of us looking to biological intelligence
Vlad,
At this point, we ought to acknowledge that we just have different approaches.
You're trying to hit a very small target accurately and precisely. I'm not.
It's not important to me the precise details of how a self-organizing system
would actually self-organize, what form that would take
Hi Will,
Such an interesting example in light of a recent paper, which deals with
measuring the difference between activation of the visual cortex and blood flow
to the area, depending on whether the stimulus was subjectively invisible. If
the result can be trusted, it shows that blood flow
Hey Bryan,
Not really familiar with apt-get. How is it a complex system? It looks like
it's just a software installation tool.
Terren
--- On Tue, 9/16/08, Bryan Bishop [EMAIL PROTECTED] wrote:
Have you considered looking into the social dynamics allowed by apt-get
before? It's a
OK, how's that different from the collaboration inherent in any human project?
Can you just explain your viewpoint?
--- On Tue, 9/16/08, Bryan Bishop [EMAIL PROTECTED] wrote:
that model.
Terren
--- On Wed, 9/17/08, Bryan Bishop [EMAIL PROTECTED] wrote:
Interestingly, Helen Keller's story provides a compelling example of what it
means for a symbol to go from ungrounded to grounded. Specifically, the
moment at the water pump when she realized that the word 'water' being
spelled into her hand corresponded with her experience of water - that
Hi Ben,
If Richard Loosemore is half-right, how is he half-wrong?
Terren
--- On Mon, 9/29/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I mean that a
publicized yet ... but it does already address this particular issue...)
ben
Hi Ben,
I wonder if you've read Bohm's Thought as a System, or if you've been
influenced by Niklas Luhmann on any level.
Terren
--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
There is a sense in which social groups are mindplexes: they have
mind-ness on the collective level, as
, though at the time I read it, I'd already encountered most of the same
ideas elsewhere...
Luhmann: nope, never encountered his work...
ben
Mike,
Autopoiesis is a basic building block of my philosophy of life and of
cognition as well. I see life as: doing work to maintain an internal
self-organization. It requires a boundary in which the entropy inside the
boundary is kept lower than the entropy outside. Cognition is autopoietic
Well, identity is not a great choice of word, because it implies a static
nature. As far as I understand it, Maturana et al. simply meant that which
distinguishes the thing from its environment, in terms of its
self-organization. The nature of that self-organization is dynamic, always
systems
are capable of giving rise to...
-- Ben G
On Fri, Oct 10, 2008 at 11:19 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Yeah, that book is really good. Bohm was one of the great ones.
Luhmann may have been the first to seriously suggest/defend the idea that
social systems are not just
Hi Colin,
Are there other forums or email lists associated with some of the other AI
communities you mention? I've looked briefly but in vain ... would appreciate
any helpful pointers.
Thanks,
Terren
--- On Tue, 10/14/08, Colin Hales [EMAIL PROTECTED] wrote:
From: Colin Hales [EMAIL