On Tue, Oct 7, 2008 at 11:33 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
I was trying to find a way so we can collaborate on one project, but
people don't seem to like the virtual credit idea.
No, no we don't :-)
Why not?
As has been said previously, there have been AI projects in the past
which tried this credits-or-shares route, and they turned out to be very
unsuccessful. The problem with issuing credits is that, rightly or
wrongly, an expectation of short-term financial reward is built up in
the minds of some
YKY...
About your proposed credits system ...
One comment I'd make is: it's not easy to estimate the pragmatic workability
of a hypothetical way of organizing people's efforts via abstract logical
and economic considerations. Psychology and culture play into the matter
in complex ways.
This
2008/10/10 YKY (Yan King Yin) [EMAIL PROTECTED]:
On Tue, Oct 7, 2008 at 11:33 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
I was trying to find a way so we can collaborate on one project, but
people don't seem to like the virtual credit idea.
No, no we don't :-)
Why not?
As has been
Hi Ben,
I wonder if you've read Bohm's Thought as a System, or if you've been
influenced by Niklas Luhmann on any level.
Terren
--- On Fri, 10/10/08, Ben Goertzel [EMAIL PROTECTED] wrote:
There is a sense in which social groups are mindplexes: they have
mind-ness on the collective level, as
Bohm: Yes ... a great book, though at the time I read it, I'd already
encountered most of the same ideas elsewhere...
Luhmann: nope, never encountered his work...
ben
On Fri, Oct 10, 2008 at 10:26 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Hi Ben,
I wonder if you've read Bohm's Thought as
Yeah, that book is really good. Bohm was one of the great ones.
Luhmann may have been the first to seriously suggest/defend the idea that
social systems are not just concepts but real ontological entities. Luhmann
took Maturana/Varela's autopoiesis and extended that to social systems.
Which
Terren: autopoiesis. I wonder what your thoughts are about it?
Does anyone have any idea how to translate that biological principle into
building a machine, or software? Do you or anyone else have any idea what it
might entail? The only thing I can think of that comes anywhere close is the
Mike,
Autopoiesis is a basic building block of my philosophy of life and of
cognition as well. I see life as: doing work to maintain an internal
self-organization. It requires a boundary in which the entropy inside the
boundary is kept lower than the entropy outside. Cognition is autopoietic
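A toy illustration of that definition, nothing more: the "cell", its energy budget, and the noise numbers below are invented for the example. The point is only that the system spends stored energy to export disorder across its boundary, so the inside stays more ordered than the outside.

    import random

    def step(internal, external, energy, repair=0.3, noise=0.1):
        # Environmental noise adds disorder on both sides of the boundary.
        internal += random.uniform(0, noise)
        external += random.uniform(0, noise)
        # The cell does work: it pumps disorder out, paying an energy cost.
        if energy > 0:
            exported = min(internal, repair, energy)
            internal -= exported
            external += exported
            energy -= exported
        return internal, external, energy

    internal, external, energy = 1.0, 1.0, 50.0
    for _ in range(200):
        internal, external, energy = step(internal, external, energy)
    print(internal < external, round(internal, 2), round(external, 2))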
Terren,
Thanks for the reply. I think I have some idea, no doubt confused, about how you
want to evolve a system. But the big deal re autopoiesis for me - correct me -
is the capacity of a living system to *maintain its identity* despite
considerable disturbances. That can be both in the
I think autopoiesis is an important concept, which has been underappreciated
in AI because its original advocates (Varela especially) tied it in with
anti-computationalism
Varela liked to contrast autopoietic systems with computational ones
OTOH, I think of autopoiesis as an emergent property
Well, identity is not a great choice of word, because it implies a static
nature. As far as I understand it, Maturana et al simply meant, that which
distinguishes the thing from its environment, in terms of its
self-organization. The nature of that self-organization is dynamic, always
Agreed. Yet, as far as I can tell, Novamente/OCP aren't designed to allow this
autopoiesis to emerge. Although some emergence is implicit in the design, there
is not a clear boundary between the internal organization and the external
environment. For example, a truly autopoietic system would
Ben said:
Maybe the reason people don't know what you mean is that your manner
of phrasing the issue is so unusual?
Could you elaborate the problem you refer to, perhaps using some examples?
It's easier to explain how an AGI design would deal with a certain
example situation or issue, than how it
If my impression of these discussions
is accurate, that is, if the partisan arguments for logic, probability, or
neural networks and the like are really arguments for choosing one or
the other as a preponderant decision process, then it is my opinion
that the discussants are missing the major problem.
I have updated my AGI proposal to include a cost estimate, funding model, and
specific protocol details. Any comments are appreciated.
A Proposed Design for Distributed Artificial General Intelligence
Version 2.0
By Matt Mahoney, Oct. 10, 2008
Abstract
This document describes a proposed
Terren,
Yes, I think you're taking your ideas too far. Manifestly living organisms are
not continually changing structurally - we have consistent bodies and brains
with a rough plan to them, which is how we hold together so well. It is only
in computer programs and models that you can have
On Wed, Oct 8, 2008 at 5:15 PM, Abram Demski [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS formula for revision,
there is (I think) only one possible formula relating the NARS
variables 'f' and 'w' to the value of 'par': the probability density
function p(par | w, f) =
Abram,
I finally read your long post...
The basic idea is to treat NARS truth values as representations of a
statement's likelihood rather than its probability. The likelihood of
a statement given evidence is the probability of the evidence given
the statement. Unlike probabilities,
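A quick gloss on that reading, for reference: for a statement S and evidence E, the likelihood is L(S; E) = P(E | S), i.e. the probability of the evidence given the statement. Unlike probabilities, likelihoods over a set of competing statements need not sum to 1, which is what distinguishes this reading of NARS truth values from a straight probabilistic one.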
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
In particular, the result that NARS induction and abduction each
depend on **only one** of their premise truth values ...
Ben,
I'm sure you know it in your mind, but this simple description will
make some people think that
Sorry Pei, you are right, I sloppily mis-stated!
What I should have said was:
the result that the NARS induction and abduction *strength* formulas
each depend on **only one** of their premise truth values ...
Anyway, my point in that particular post was not to say that NARS is either
good or
Ben,
I agree with what you said in the previous email.
However, since we have now touched on this point for the second time, there
may be people wondering what the difference between NARS and PLN
really is.
Again let me use an example to explain why the truth-value function of
abduction/induction
Of course, this is only one among very many differences btw PLN and NARS,
but I agree it's an interesting one.
I've got other stuff to do today, but I'll try to find time to answer this
email
carefully over the weekend.
ben
On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
Ben,
Strength? If you mean weight or confidence, this is not so. As Pei
corrected, it is the *frequency* that depends on only one of the two.
The strength depends on both.
And, that is one feature of NARS that I don't find strange. It can be
explained OK by the formula I previously proposed and
Pei,
You agree that the abduction and induction strength formulas only
rely on one of the two premises?
Is there some variable called strength that I missed?
--Abram
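For readers trying to follow the formulas under discussion, here is a rough Python sketch of the NARS-style induction truth-value function as I understand it from the published NARS material; the exact form, and the evidential-horizon parameter k, should be checked against Pei's papers rather than taken from this sketch.

    K = 1.0  # NARS evidential-horizon parameter "k"; the value here is illustrative

    def nars_induction(f1, c1, f2, c2):
        # Premises: M -> P <f1, c1> and M -> S <f2, c2>; conclusion: S -> P.
        w = f2 * c1 * c2   # total evidence pooled for the conclusion
        f = f1             # conclusion frequency copies one premise's frequency...
        c = w / (w + K)    # ...while confidence depends on both premises
        return f, c

    print(nars_induction(0.9, 0.9, 0.5, 0.9))  # f stays 0.9
    print(nars_induction(0.9, 0.9, 1.0, 0.9))  # only c changes

On this reading, the conclusion's frequency tracks a single premise while its confidence uses both, which seems to be the feature being debated.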
On Fri, Oct 10, 2008 at 5:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
Ben,
I agree with what you said in the previous email.
Abram,
Ben's strength is my frequency.
Pei
On Fri, Oct 10, 2008 at 5:49 PM, Abram Demski [EMAIL PROTECTED] wrote:
Pei,
You agree that the abduction and induction strength formulas only
rely on one of the two premises?
Is there some variable called strength that I missed?
--Abram
On
Ah.
On Fri, Oct 10, 2008 at 5:51 PM, Pei Wang [EMAIL PROTECTED] wrote:
Abram,
Ben's strength is my frequency.
Pei
On Fri, Oct 10, 2008 at 5:49 PM, Abram Demski [EMAIL PROTECTED] wrote:
Pei,
You agree that the abduction and induction strength formulas only
rely on one of the two
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
If I were taking more of the approach Ben suggests, that is, making
reasonable-sounding assumptions and then working forward rather than
assuming NARS and working backward, I would have kept the
On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
Another correction: strength is never used in any NARS publication.
It was used in some Webmind documents, though I guess it
On Fri, Oct 10, 2008 at 6:01 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 5:52 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I meant frequency, sorry
Strength is a term Pei used for frequency in some old discussions...
Another correction: strength is never used in any NARS
Ben,
Maybe your memory is correct --- we used strength in Webmind to keep
some distance from NARS.
Anyway, I don't like that term because it can be easily interpreted in
several ways, while the reason I don't like probability is just the
opposite --- it has a widely accepted interpretation, which
Pei,
I finally took a moment to actually read your email...
However, the negative evidence of one conclusion is no evidence of the
other conclusion. For example, "Swallows are birds" and "Swallows are
NOT swimmers" suggests "Birds are NOT swimmers", but says nothing
about whether Swimmers are
Ben,
I see your position.
Let's go back to the example. If the only relevant domain knowledge
PLN has is "Swallows are birds" and "Swallows are
NOT swimmers", will the system assign the same lower-than-default
probability to "Birds are swimmers" and "Swimmers are birds"? Again,
I only need a
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...
(Of course, this kind of example is cognitively misleading, because if the
only knowledge the system has is "Swallows are birds" and "Swallows are
NOT swimmers" then it doesn't really know that the terms
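To spell out the Bayes step behind that answer: P(bird | swimmer) = P(swimmer | bird) * P(bird) / P(swimmer), so under the extra assumption P(bird) = P(swimmer) the two conditionals must be equal, and the swallow evidence that pushes one of them below its prior pushes the other down to exactly the same value.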
On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...
(Of course, this kind of example is cognitively misleading, because if the
only knowledge
the system has is Swallows are birds
On Fri, Oct 10, 2008 at 8:29 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this
would
be the case...
(Of course, this kind of example is cognitively
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS formula for revision,
there is (I think) only one possible formula relating the NARS
variables 'f' and 'w' to the value of 'par': the probability density
function p(par | w, f)
This seems loosely related to the ideas in 5.10.6 of the PLN book, Truth
Value Arithmetic ...
ben
On Fri, Oct 10, 2008 at 9:04 PM, Abram Demski [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 4:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Given those three assumptions, plus the NARS
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
[. . .]
Yes, in principle, PLN will behave in Hempel's confirmation paradox in
a similar way to other Bayesian systems.
I do find this counterintuitive, personally, and I spent a while trying to
work
around it ... but
By the way, thanks for all the comments... I'll probably shift gears
as you both suggest, if I choose to continue further.
--Abram
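For anyone who hasn't met it, Hempel's confirmation paradox is the observation that "All ravens are black" is logically equivalent to "All non-black things are non-ravens", so under standard Bayesian confirmation an observation of any non-black non-raven, say a white shoe, counts as (extremely weak) positive evidence that all ravens are black; that is the counterintuitive behavior being referred to here.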
On Fri, Oct 10, 2008 at 10:02 PM, Abram Demski [EMAIL PROTECTED] wrote:
On Fri, Oct 10, 2008 at 8:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
[. . .]
Yes, in
Abram,
Anyway, perhaps I can try to shed some light on the broader exchange?
My route has been to understand "A is B" as not P(A|B), but instead
P("A is X" | "B is X") plus the extensional equivalent... under this
light, the negative evidence presented by two statements "B is C" and
"A is not C"
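A toy way to picture that reading (the property sets and the counting rule below are invented purely for illustration, not taken from the actual proposal): estimate "A is B" by how often a property X that holds of B also carries over to A.

    # Toy estimate of "A is B" read as P("A is X" | "B is X"):
    # enumerate properties X that hold of B and count how many also hold of A.
    bird_props    = {"has_feathers", "lays_eggs", "flies"}
    swallow_props = {"has_feathers", "lays_eggs", "flies", "migrates"}

    def inheritance_frequency(a_props, b_props):
        shared = len(a_props & b_props)   # positive evidence: B's properties that A shares
        total = len(b_props)              # all of B's properties that were checked
        return shared / total if total else 0.0

    print(inheritance_frequency(swallow_props, bird_props))  # "Swallows are birds" -> 1.0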
Pei, Ben G. and Abram,
Oh, man, is this stuff GOOD! This is the real nitty-gritty of the AGI
matter. How does your approach handle counter-evidence? How does your
approach deal with insufficient evidence? (Those are rhetorical questions,
by the way -- I don't want to influence the course