On 11/2/07, Eliezer S. Yudkowsky wrote:
I didn't ask whether it's possible. I'm quite aware that it's
possible. I'm asking if this is what you want for yourself. Not what
you think that you ought to logically want, but what you really want.
Is this what you lived for? Is this the most
Jiri Jelinek wrote:
Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?
That's a very personal question, don't you think?
Even the parts I'm
On Fri, Nov 02, 2007 at 12:06:05PM -0400, Jiri Jelinek wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can start with a KB that contains concepts retrieved
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can start with a KB that contains concepts retrieved
from a well structured non-NL input format only, get the thinking
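The idea of seeding a KB from structured, non-NL input can be sketched in a few lines. This is a hypothetical, minimal illustration (the triples, relation names, and inference step are invented here, not taken from Jiri's actual system): concepts enter as subject/relation/object triples rather than sentences, and a trivial reasoning step runs before any NL front end exists.

```python
# Minimal sketch (hypothetical): a KB seeded from a structured,
# non-NL input format, with reasoning running before any NL layer.
from collections import defaultdict

# Structured input: (subject, relation, object) triples, not sentences.
SEED_TRIPLES = [
    ("water", "is_a", "liquid"),
    ("liquid", "can", "flow"),
    ("ice", "is_a", "solid"),
    ("ice", "made_of", "water"),
]

class KB:
    def __init__(self):
        self.facts = defaultdict(set)   # (subject, relation) -> set of objects

    def add(self, s, r, o):
        self.facts[(s, r)].add(o)

    def query(self, s, r):
        return self.facts.get((s, r), set())

kb = KB()
for s, r, o in SEED_TRIPLES:
    kb.add(s, r, o)

def can(kb, thing):
    # Toy inference: inherit "can" abilities through is_a links.
    abilities = set(kb.query(thing, "can"))
    for parent in kb.query(thing, "is_a"):
        abilities |= can(kb, parent)
    return abilities

print(can(kb, "water"))   # prints {'flow'}
```

The point of the sketch is only that nothing in the add/query/infer loop depends on natural language; an NL parser could later be bolted on as one more producer of triples.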
On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
if you could have anything you wanted, is this the end you
would wish for yourself, more than anything else?
Yes. But don't forget I would also have AGI
On Fri, Nov 02, 2007 at 11:27:08AM +0300, Vladimir Nesov wrote:
Linas,
Yes, you probably can code all the patterns you need. But it's only
the tip of the iceberg: the problem is that alongside those 1M rules
there are also thousands that are constantly being generated, assessed,
and discarded.
On Fri, Nov 02, 2007 at 01:19:19AM -0400, Jiri Jelinek wrote:
Or do we know anything better?
I sure do. But ask me again, when I'm smarter, and have had more time to
think about the question.
--linas
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I'm asking if this is what you want for yourself.
Then you could have read just the first word of my previous response: YES
if you could have anything you wanted, is this the end you
would wish for yourself, more than anything
Jiri Jelinek wrote:
On Nov 2, 2007 4:54 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one part of our goal system, a somewhat
significant part at this point. But the intelligent part of the goal
system is a much more
Jiri,
You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one part of our goal system, a somewhat
significant part at this point. But the intelligent part of the goal
system is a much more 'complex' thing and can also act as a goal in
itself. You can say that AGIs will be
Linas,
Yes, you probably can code all the patterns you need. But it's only
the tip of the iceberg: the problem is that alongside those 1M rules
there are also thousands that are constantly being generated, assessed,
and discarded. Knowledge formation happens all the time and adapts those
1M rules to
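That generate/assess/discard cycle can be made concrete with a toy loop. Everything below (the rule representation, the random scoring, the survival threshold) is invented for illustration; it stands in for real pattern induction and evaluation against experience, which is the hard part.

```python
# Toy sketch (all details hypothetical): a hand-coded seed rule set,
# plus candidate rules continually generated, assessed, and discarded.
import random

random.seed(0)

def generate_candidate():
    # Stand-in for inducing a new pattern from recent experience.
    return {"id": random.randrange(10**6), "score": 0.0}

def assess(rule):
    # Stand-in for scoring a rule by its predictive success.
    rule["score"] += random.uniform(-1.0, 1.0)
    return rule

rules = [{"id": i, "score": 1.0} for i in range(100)]  # hand-coded seed

for _ in range(200):                     # one stretch of "lifetime"
    rules.append(generate_candidate())   # constant generation...
    rules = [assess(r) for r in rules]
    rules = [r for r in rules if r["score"] > -0.5]   # ...and discarding

print(len(rules))   # surviving rules after adaptation
```

The seed rules give the system somewhere to start, but the population that remains after the loop is whatever assessment kept alive, which is Vladimir's point: the fixed rules are the tip of the iceberg.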
Can humans keep superintelligences under control -- can
superintelligence-augmented humans compete
Richard Loosemore (RL) wrote the following on Fri 11/2/2007 11:15 AM,
in response to a post by Matt Mahoney.
My comments are preceded by ED.
RL: This is the worst possible summary of the situation,
On Fri, Nov 02, 2007 at 09:01:42AM -0700, Charles D Hixson wrote:
To me this point seems only partially valid. 1M hand-coded rules seem
excessive, but there should be some number (100? 1000?) of hand-coded
rules (not unchangeable!) that it can start from. An absolute minimum
would seem
Linas, BillK
It might currently be hard for association-based human minds to accept,
but things like roses, power-over-others, and being worshiped
or loved are just a waste of time, serving only as indirect feeling
triggers (assuming the nearly-unlimited ability to optimize).
Regards,
Jiri Jelinek
On Nov 2, 2007
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
But the learning problem isn't changed by it. And if you solve the
learning problem, you don't need any scaffolding.
But you won't know how to solve the learning problem until you try.
--linas
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Example 4: Each successive generation gets smarter, faster, and less
dependent on human cooperation. Absolutely not true. If humans take
advantage of the ability to enhance their own intelligence up to the
same level as the AGI systems, the
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
But the learning problem isn't changed by it. And if you solve the
learning problem, you don't need any
On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
But the learning problem isn't changed by it. And if you solve
On Sat, Nov 03, 2007 at 12:06:48AM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov
--- Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not
On Sat, Nov 03, 2007 at 12:15:29AM +0300, Vladimir Nesov wrote:
I personally don't see how this appearance-building is going to help,
so the question for me is not 'why can't it succeed?', but 'why do it
at all?'.
Because absolutely no one has proposed anything better?
--linas
Linas,
I mainly tried to show that you are in fact not moving your system
forward learning-wise by attaching a chatbot facade to it. That "my
scaffolding learns" is an overstatement in this context.
You should probably move in the direction of NARS, it seems
fundamental enough to be near the mark.
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Although it is possible to fully integrate NL into AGI, such an endeavor
may not be the highest priority at this moment. It can give the AGI better
linguistic abilities, such as understanding human-made texts or speeches,
even poetry, but I
On Nov 2, 2007 2:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Could you please provide one specific example of a human goal which
isn't feeling-based?
It depends on what you mean by 'based' and 'goal'. Does any choice
qualify as a goal? For example, if I choose to write a certain word in