Hi Ben and others,
After some more thinking, I decided to try the virtual credit approach after all.
Last time Ben's argument was that the virtual credit method confuses
for-profit and charity emotions in people. At that time it sounded
convincing, but after some thinking I realized that it is
John G. Rose wrote:
Has anyone done some analysis on cloud computing, in particular the
recent trend and coming-out of clouds, with multiple startup efforts in
this space? And their relationship to AGI-type applications?
Or is this phenomenon just geared to web server farm resource
2008/10/29 Samantha Atkins [EMAIL PROTECTED]:
John G. Rose wrote:
Has anyone done some analysis on cloud computing, in particular the recent
trend and coming-out of clouds, with multiple startup efforts in this space?
And their relationship to AGI-type applications?
Beware of putting too
On Wed, Oct 29, 2008 at 4:04 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
Last time Ben's argument was that the virtual credit method confuses
for-profit and charity emotions in people. At that time it sounded
convincing, but after some thinking I realized that it is actually
completely
However, it does seem clear that the integers (for instance) is not an
entity with *scientific* meaning, if you accept my formalization of science
in the blog entry I recently posted...
Huh? Integers are a class (which I would argue is an entity) that I
would argue is well-defined and
(1) Simplicity (in conclusions, hypothesis, theories, etc.) is preferred.
(2) The preference for simplicity does not need a reason or justification.
(3) Simplicity is preferred because it is correlated with correctness.
I agree with (1), but not (2) and (3).
I concur but would add that (4)
On Wed, Oct 29, 2008 at 6:34 PM, Trent Waddington
Don't forget my argument..
I don't recall hearing an argument from you. All your replies to me
are rather rude one liners.
YKY
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS
I don't recall hearing an argument from you. All your replies
to me are rather rude one liners.
As opposed to everyone else, who either doesn't reply to you or
humors you.
Get over yourself.
Trent
Hi Trent,
Your last two emails to YKY were rude and unhelpful. If you felt a burning
Trent,
A comment in my role as list administrator:
Let's keep the discussion on the level of ideas not people, please.
No ad hominem attacks such as "You're a gas bag", etc.
thanks
ben g
On Wed, Oct 29, 2008 at 6:34 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Wed, Oct 29, 2008 at 4:04
YKY,
I'm certainly not opposed to you trying a virtual-credits system. My
prediction is that it won't work out well, but my predictions are not
always right. I just want to clarify two things:
1)
There is *really* nothing unethical about OpenCog's setup. However, if
we need to discuss that in
On Wed, Oct 29, 2008 at 11:11 PM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
Your last two emails to YKY were rude and unhelpful. If you felt a burning
desire to express yourself rudely, you could have done so by emailing him
privately.
I'm publicly telling him to piss off. I *could* have
On Wed, Oct 29, 2008 at 11:29 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Trent,
A comment in my role as list administrator:
Let's keep the discussion on the level of ideas not people, please.
No ad hominem attacks such as "You're a gas bag", etc.
If he's free to talk about virtual credits I
So meh, if you want to go ahead with your virtual credit absurdity,
you're free to do so, but I'm also free to call you an idiot.
Trent
Not on this list, please. If you feel the need to tell him that, tell
him by private email.
You are free to tell him you think it's a foolish idea
but we never need arbitrarily large integers in any particular case, we only
need integers going up to the size of the universe ;-)
On Wed, Oct 29, 2008 at 7:24 AM, Mark Waser [EMAIL PROTECTED] wrote:
However, it does seem clear that the integers (for instance) is not
an entity with
If he's free to talk about virtual credits I should be free to talk
about how stupid his virtual credit idea
Yes
is and, by extension, he is.
No ...
Look, I am not any kind of expert on list management or social
tact; I'm just applying extremely basic rules of human politeness here.
In
Ben,
Thanks, that writeup did help me understand your viewpoint. However, I
don't completely understand/agree with the argument (one of the two,
not both!). My comments to that effect are posted on your blog.
About the earlier question...
(Mark) So Ben, how would you answer Abram's question So
To rephrase. Do you think there is a truth of the matter concerning
formally undecidable statements about numbers?
--Abram
That all depends on what the meaning of is, is ... ;-)
but we never need arbitrarily large integers in any particular case, we only
need integers going up to the size of the universe ;-)
But measured in which units? For any given integer, I can come up with (invent
:-) a unit of measurement that requires a larger number than that
sorry, I should have been more precise. There is some K so that we never
need integers with algorithmic information exceeding K.
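A side note for the curious: algorithmic information is uncomputable, but any real compressor gives a usable upper bound, which makes Ben's K concrete. A minimal sketch (zlib as a stand-in for a universal code; the example values are my own, not from the thread):

```python
import zlib

def approx_K(data: bytes) -> int:
    """Crude upper bound on algorithmic information:
    the length of the zlib-compressed description."""
    return len(zlib.compress(data, 9))

# A huge but regular integer (10**1000 written out) has a short description...
regular = str(10 ** 1000).encode()

# ...while the same number of patternless digits resists compression.
state, noisy = 12345, bytearray()
for _ in range(len(regular)):
    state = (1103515245 * state + 12345) % (2 ** 31)  # simple LCG
    noisy.append((state >> 16) & 0xFF)                # keep the high bits
noisy = bytes(noisy)
```

On this measure the regular number needs far fewer bits than its magnitude suggests, which is the sense in which bounding algorithmic information by K is much weaker than bounding magnitude.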
On Wed, Oct 29, 2008 at 10:32 AM, Mark Waser [EMAIL PROTECTED] wrote:
but we never need arbitrarily large integers in any particular case,
we only need integers
Here's another slant . . . .
I really liked Pei's phrasing (which I consider to be the heart of
Constructivism: The Epistemology :-)
Generally speaking, I'm not
building some system that learns about the world, in the sense that
there is a correct way to describe the world waiting to be
sorry, I should have been more precise. There is some K so that we never
need integers with algorithmic information exceeding K.
Ah . . . . but is K predictable? Or do we need all the integers above it as
a safety margin? :-)
(What is the meaning of need? :-)
The inductive proof to
Ben,
So, for example, if I describe a Turing machine whose halting I prove
formally undecidable by the axioms of Peano arithmetic (translating
the Turing machine's operation into numerical terms, of course), and
then I ask you, is this Turing machine non-halting, then would you
answer, That
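The flavor of Abram's construction -- a concrete machine whose halting question is a statement about numbers -- can be illustrated with something simpler than a PA-proof enumerator. The sketch below (my own illustration, not the PA-independence construction itself) searches for a counterexample to Goldbach's conjecture, so the unbounded search halts iff the conjecture is false:

```python
def is_prime(k: int) -> bool:
    """Trial-division primality test."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n: int) -> bool:
    """True if the even number n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(limit=None):
    """Return the first even n violating Goldbach, or None once n exceeds limit.
    With limit=None this is a machine whose halting is an open question."""
    n = 4
    while limit is None or n <= limit:
        if not goldbach_holds(n):
            return n
        n += 2
    return None
```

Whether such a machine halts is exactly a universally quantified statement about the integers, which is what makes the constructivist/classicist disagreement bite.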
On Wed, Oct 29, 2008 at 11:19 AM, Abram Demski [EMAIL PROTECTED]wrote:
Ben,
So, for example, if I describe a Turing machine whose halting I prove
formally undecidable by the axioms of Peano arithmetic (translating
the Turing machine's operation into numerical terms, of course), and
then I
From: Bob Mottram [mailto:[EMAIL PROTECTED]
Beware of putting too much stuff into the cloud. Especially in the
current economic climate clouds could disappear without notice (i.e.
unrecoverable data loss). Also, depending upon terms and conditions
any data which you put into the cloud may
Hutter proved (3), although as a general principle it was already a
well-established practice in machine learning. Also, I agree with (4) but this
is not the primary reason to prefer simplicity.
Hutter *defined* the measure of correctness using simplicity as a component.
Of course, they're
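To make Mark's point concrete -- that simplicity is a component of the definition rather than an empirical discovery -- here is a toy Solomonoff-style prior, weighting each hypothesis by 2^-(description length). Using string length as a stand-in for program length is my own simplification:

```python
def simplicity_prior(hypotheses):
    """Normalized prior assigning weight 2**-len(h) to each hypothesis h."""
    weights = {h: 2.0 ** -len(h) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Two hypotheses that fit the same data equally well:
prior = simplicity_prior(["f(x)=x", "f(x)=x+0*sin(x)"])
```

Any scoring rule that multiplies data fit by this prior ranks the shorter hypothesis higher whenever the fits are equal, so simplicity and "correctness" end up correlated by construction.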
Pei,
My understanding is that when you reason from data, you often want the
ability to extrapolate, which requires some sort of assumptions about the
type of mathematical model to be used. How do you deal with that in NARS?
Ed Porter
-Original Message-
From: Pei Wang [mailto:[EMAIL
Ed,
When NARS extrapolates its past experience to the current and the
future, it is indeed based on the assumption that its future
experience will be similar to its past experience (otherwise any
prediction will be equally valid), however it does not assume the
world can be captured by any
But, NARS as an overall software system will perform more effectively
(i.e., learn more rapidly) in
some environments than in others, for a variety of reasons. There are many
biases built into the NARS architecture in various ways ... it's just not
obvious how to spell out what they are, because the
Ben,
I never claimed that NARS is not based on assumptions (or call them
biases), but only on truths. It surely is, and many of the
assumptions are my beliefs and intuitions, which I cannot quickly
convince other people to accept.
However, it does not mean that all assumptions are equally
However, it does not mean that all assumptions are equally acceptable,
or that as soon as something is called an assumption, the author is
released from the duty of justifying it.
Hume argued that at the basis of any approach to induction, there will
necessarily lie some assumption that is
Ben,
It goes back to what justification we are talking about. To prove
it is a strong version, and to show supporting evidence is a weak
version. Hume pointed out that induction cannot be justified in the
sense that there is no way to guarantee that all inductive conclusions
will be confirmed.
I
Ben,
OK, that is a pretty good answer. I don't think I have any questions
left about your philosophy :).
Some comments, though.
hmmm... you're saying the halting is provable in some more powerful
axiom system but not in Peano arithmetic?
Yea, it would be provable in whatever formal system I
But the question is what does this mean about any actual computer,
or any actual physical object -- which we can only communicate about
clearly
insofar as it can be boiled down to a finite dataset.
What it means to me is that "Any actual computer will not halt (with a
correct output)"
Ben,
The difference can I think be best illustrated with two hypothetical
AGIs. Both are supposed to be learning that computers are
approximately Turing machines. The first, made by you, interprets
this constructively (let's say relative to PA). The second, made by
me, interprets this classically
I guess I don't see how cloud computing is materially different from
open source, insofar as we see the sharing of resources and also now
increased availability, no need to buy so much hardware at the outset.
But it seems more a case of convenience.
So what does that have to do with AGI? I
Ben,
No, I wasn't intending any weird chips.
For me, the most important way in which you are a constructivist is
that you think AIXI is the ideal that finite intelligence should
approach.
--Abram
On Wed, Oct 29, 2008 at 2:33 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
OK ... but are both of
OK it's just a Compact Genetic Algorithm -- genetic drift kind of
stuff. Nice read, but very simple (subsumed by any serious EDA). It
says you can do simple pattern mining by just looking at the
distribution, without complex statistics.
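Since the paper under discussion is "just a Compact Genetic Algorithm," here is how small that algorithm actually is: one probability per bit, two samples per generation, probabilities nudged toward the winner. A minimal sketch on OneMax (parameter values and names are mine, not from the paper):

```python
import random

def compact_ga(n_bits=16, pop_size=50, generations=2000, seed=0):
    """Compact Genetic Algorithm on OneMax (maximize the number of 1-bits).
    The whole population is replaced by a single probability vector p."""
    rng = random.Random(seed)
    p = [0.5] * n_bits
    sample = lambda: [1 if rng.random() < pi else 0 for pi in p]
    fitness = sum  # OneMax: count the ones
    for _ in range(generations):
        a, b = sample(), sample()
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:
                # Shift probability 1/pop_size toward the winning bit value.
                step = 1.0 / pop_size
                p[i] += step if winner[i] == 1 else -step
                p[i] = min(1.0, max(0.0, p[i]))
    return p

final_p = compact_ga()
```

On OneMax the vector drifts toward all-ones; with a small virtual population size, genetic drift can still fix the occasional bit wrongly, which is the "genetic drift kind of stuff" mentioned above.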
On Wed, Oct 29, 2008 at 8:13 PM, Lukasz Stafiniak [EMAIL
On Wed, Oct 29, 2008 at 4:47 PM, Abram Demski [EMAIL PROTECTED] wrote:
Ben,
No, I wasn't intending any weird chips.
For me, the most important way in which you are a constructivist is
that you think AIXI is the ideal that finite intelligence should
approach.
Hmmm... I'm not sure I
Hi all,
I wanted to let you know that Gino Yu and I are co-organizing a Workshop on
Machine
Consciousness, which will be held in Hong Kong in June 2008: see
http://novamente.net/machinecs/index.html
for details.
It is colocated with a larger, interdisciplinary conference on consciousness
--- On Tue, 10/28/08, Pei Wang [EMAIL PROTECTED] wrote:
Whenever someone proves something outside mathematics, it is always
based on certain assumptions. If the assumptions are not well
justified, there is no strong reason for people to accept the
conclusion, even though the proof process is
--- On Wed, 10/29/08, Mark Waser [EMAIL PROTECTED] wrote:
Hutter *defined* the measure of correctness using
simplicity as a component.
Of course, they're correlated when you do such a thing.
That's not a proof,
that's an assumption.
Hutter defined the measure of correctness as the