Mike,
On 9/18/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Steve: View #2 (mine, stated from your approximate viewpoint) is that
> simple programs (like Dr. Eliza) have in the past and will in the future do
> things that people aren't good at. This includes tasks that encroach on
> "intelligence
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, September 18, 2008 7:45 PM
Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re:
Proprietary_Open_Source)
--- On Thu, 9/18/08, John LaMuth <[EMAIL PROTECTED]> wrote:
You have completely left out the human element or friendly-type appeal
--- On Thu, 9/18/08, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > So perhaps you could name some applications of AGI
> that don't fall into the categories of (1) doing work or
> (2) augmenting your brain?
>
> Perhaps
>
> So perhaps you could name some applications of AGI that don't fall into the
> categories of (1) doing work or (2) augmenting your brain?
>
3) learning as much as possible
4) proving as many theorems as possible
5) figuring out how to improve human life as much as possible
Of course, if you
--- On Thu, 9/18/08, John LaMuth <[EMAIL PROTECTED]> wrote:
> You have completely left out the human element or
> friendly-type appeal
>
> How about an AGI personal assistant / tutor / PR interface
>
> Everyone should have one
>
> The market would be virtually unlimited ...
That falls under the
--- On Thu, 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>Well, yes, and that difference is a distributed index, which has yet to be
>>built.
>I extremely strongly disagree with the prior sentence ... I do not think that
>a distributed index is a sufficient architecture for powerful AGI at
Matt,
Thanks for the reference. But it's still somewhat ambiguous. I could similarly
outline a "non-procedure procedure" which might include "steps" like
"Think about the problem" then "Do something, anything - whatever first comes
to mind" and "If that doesn't work, try something else."
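For what it's worth, that "non-procedure procedure" can be written out literally, and once its steps are precise enough to run, it is itself an algorithm. A minimal sketch (the function name, trial actions, and `works` test are my own invention, purely for illustration):

```python
import random

def non_procedure_procedure(problem, candidate_actions, works, max_tries=10):
    """Mike's loose recipe, taken literally: think, try whatever comes to
    mind, and try something else if that fails. All names here are
    hypothetical illustrations, not anyone's actual system."""
    # "Think about the problem" -- here, just gather the things we could try.
    untried = list(candidate_actions)
    random.shuffle(untried)  # "whatever first comes to mind"
    for _ in range(max_tries):
        if not untried:
            break  # nothing left to try
        action = untried.pop()   # "do something, anything"
        result = action(problem)
        if works(result):
            return result
        # "if that doesn't work, try something else" -- the loop continues
    return None

# E.g. for the toy problem of sorting [3 2 4 1], with a few arbitrary actions:
answer = non_procedure_procedure(
    [3, 2, 4, 1],
    [sorted, lambda xs: list(reversed(xs)), lambda xs: xs],
    works=lambda r: r == [1, 2, 3, 4],
)
assert answer == [1, 2, 3, 4]
```

The randomness only affects which action is tried first; since the working action is in the candidate list, the sketch always finds it within the try budget.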
Ben,
Well, then S. Kauffman's language is unclear too. I'll go with his definition in
Chap. 12 of Reinventing the Sacred [all about algorithms and why they cannot
solve a whole string of human problems]
"What is an algorithm? The quick definition is an *effective procedure to
calculate a re
On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> So perhaps you could name some applications of AGI that don't fall into the
> categories of (1) doing work or (2) augmenting your brain?
Perhaps you could list some uses of a computer that don't fall into
the category of
You have completely left out the human element or friendly-type appeal
How about an AGI personal assistant / tutor / PR interface
Everyone should have one
The market would be virtually unlimited ...
John L
www.ethicalvalues.com
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
Actually, CPS doesn't mean solving problems without algorithms. CPS is itself
an algorithm, as described on pages 7-8 of Pei's paper. However, as I
mentioned, I would be more convinced if there were some experimental results
showing that it actually worked.
-- Matt Mahoney, [EMAIL PROTECTED]
Ben,
It's hard to resist my interpretation here - that Pei does sound as if he is
being truly non-algorithmic. Just look at the opening abstract sentences.
(However, I have no wish to be pedantic - I'll accept whatever you guys say you
mean).
"Case-by-case Problem Solving is an approach in w
--- On Thu, 9/18/08, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > Perhaps there are some applications I haven't
> thought of?
>
> Bahahaha.. Gee, ya think?
So perhaps you could name some applications of AGI that do
On Thu, Sep 18, 2008 at 4:21 AM, Kingma, D.P. <[EMAIL PROTECTED]> wrote:
>
> Small question... aren't Bayesian network nodes just _conditionally_
> independent: so that set A is only independent from set B when
> d-separated by some set Z? So please clarify, if possible, what kind
> of independence
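The conditional-independence point can be checked numerically on the smallest chain network A → Z → B, where Z d-separates A from B. All the probability tables below are made-up numbers, purely for illustration:

```python
# Toy Bayesian network: A -> Z -> B, with invented binary probability tables.
from itertools import product

P_A = {0: 0.6, 1: 0.4}
P_Z_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}   # P_Z_given_A[a][z]
P_B_given_Z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.25, 1: 0.75}}  # P_B_given_Z[z][b]

def joint(a, z, b):
    return P_A[a] * P_Z_given_A[a][z] * P_B_given_Z[z][b]

def marginal(**fixed):
    """Sum the joint over all variables not pinned in `fixed`."""
    total = 0.0
    for a, z, b in product((0, 1), repeat=3):
        v = {'a': a, 'z': z, 'b': b}
        if all(v[k] == val for k, val in fixed.items()):
            total += joint(a, z, b)
    return total

# Marginally (Z unobserved), A and B are dependent:
p_b_given_a0 = marginal(a=0, b=1) / marginal(a=0)  # 0.255
p_b_given_a1 = marginal(a=1, b=1) / marginal(a=1)  # 0.585
assert abs(p_b_given_a0 - p_b_given_a1) > 1e-6

# Conditioned on Z, which d-separates A from B, they become independent:
for z in (0, 1):
    p1 = marginal(a=0, z=z, b=1) / marginal(a=0, z=z)
    p2 = marginal(a=1, z=z, b=1) / marginal(a=1, z=z)
    assert abs(p1 - p2) < 1e-9
```

So the independence is indeed only conditional: it holds exactly when the d-separating set Z is observed, and fails otherwise.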
Your language is unclear.
Could you define precisely what you mean by an "algorithm"?
Also, could you give an example of a computer program, that can be run on a
digital computer, that does not embody an "algorithm" according to
your definition?
thx
ben
On Thu, Sep 18, 2008 at 9:15 PM, Mi
On Thu, Sep 18, 2008 at 9:17 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Your language is unclear.
>
> Could you define precisely what you mean by an "algorithm"?
>
> Also, could you give an example of a computer program, that can be run on a
> digital computer, that does not embody an "a
Ben,
Ah well, then I'm confused. And you may be right - I would just like
clarification.
You see, what you have just said is consistent with my understanding of Pei up
till now. He explicitly called his approach in the past "nonalgorithmic" while
acknowledging that others wouldn't consider it
On Thu, Sep 18, 2008 at 9:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Thu, 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >I believe there is a qualitative difference btw AGI and narrow-AI, so that
> no tractably small collection of computationally-feasible narrow-AI's (like
> Go
--- On Thu, 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>I believe there is a qualitative difference btw AGI and narrow-AI, so that no
>tractably small collection of computationally-feasible narrow-AI's (like
>Google etc.) are going to achieve general intelligence at the human level or
>an
A key point IMO is that: problem-solving that is non-algorithmic (in Pei's
sense) at one level (the level of the particular problem being solved) may
still be algorithmic at a different level (for instance, NARS itself is a
set of algorithms).
So, to me, calling NARS problem-solving non-algorithmic
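The two-level distinction can be made concrete with a toy (entirely my own construction, not NARS): a solver whose object-level answers depend on its accumulated experience, so there is no fixed per-problem procedure, even though the solver as a whole is ordinary deterministic code, i.e. an algorithm.

```python
# Hypothetical sketch of "non-algorithmic at one level, algorithmic at another".
class ExperienceDrivenSolver:
    def __init__(self):
        self.experience = {}  # problem -> answer, accumulated over time

    def learn(self, problem, answer):
        self.experience[problem] = answer

    def solve(self, problem):
        # Object level: the answer (and whether there is one) depends on the
        # system's history, so no fixed per-problem procedure exists.
        if problem in self.experience:
            return self.experience[problem]
        # Otherwise fall back to the "closest" remembered problem, if any
        # (closeness by length is an arbitrary toy heuristic).
        if self.experience:
            closest = min(self.experience,
                          key=lambda p: abs(len(p) - len(problem)))
            return self.experience[closest]
        return None

# Meta level: everything above is plainly an algorithm, yet the same query
# gets different answers at different points in the system's history:
s = ExperienceDrivenSolver()
assert s.solve("2+2") is None   # no experience yet
s.learn("2+2", "4")
assert s.solve("2+2") == "4"    # same question, different answer now
```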
Ben,
I'm only saying that CPS seems to be loosely equivalent to wicked,
ill-structured problem-solving, (the reference to convergent/divergent (or
crystallised vs fluid) etc is merely to point out a common distinction in
psychology between two kinds of intelligence that Pei wasn't aware of in t
--- On Thu, 9/18/08, Pei Wang <[EMAIL PROTECTED]> wrote:
> URL: http://nars.wang.googlepages.com/wang.CaseByCase.pdf
I think it would be interesting if you had some experimental results. Could CPS
now solve a problem like "sort [3 2 4 1]" in its current state? If not, how
much knowledge does it
On Thu, Sep 18, 2008 at 5:42 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> TITLE: Case-by-case Problem Solving (draft)
>>
>> AUTHOR: Pei Wang
>>
>>
>
> But you seem to be reinventing the wheel. There is an extensive
> literature, including AI stuff, on "wicked, ill-structured" proble
On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Perhaps there are some applications I haven't thought of?
Bahahaha.. Gee, ya think?
Trent
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
--- On Thu, 9/18/08, Trent Waddington <[EMAIL PROTECTED]> wrote:
> On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> > Let's distinguish between the two major goals of AGI.
> The first is to automate the economy. The second is to
> become immortal through uploading.
>
>
On Fri, Sep 19, 2008 at 6:57 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> general intelligence at the human level
I hear you say these words a lot. I think, by using the word "level",
you're trying to say something different to "general intelligence just
like humans have" but I'm not sure everyo
Matt M wrote:
>
> >Peculiarly, you are leaving out what to me is by far the most important
> and interesting goal:
> >
> >The creation of beings far more intelligent than humans yet benevolent
> toward humans
>
> That's what I mean by an automated economy. Google is already more
> intelligent than
On Fri, Sep 19, 2008 at 7:30 AM, David Hart <[EMAIL PROTECTED]> wrote:
> Take the hypothetical case of R. Marketroid, whose hardware is on the books
> as an asset at ACME Marketing LLC and whose programming has been tailored by
> ACME to suit their needs. Unbeknownst to ACME, RM has decided to writ
--- On Thu, 9/18/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> And to boot, both of you don't really know what you want.
What we want has been programmed into our brains by the process of evolution. I
am not pretending the outcome will be good. Once we have the technology to have
everything w
On Thursday 18 September 2008, Mike Tintner wrote:
> In principle, I'm all for the idea that I think you (and perhaps
> Bryan) have expressed of a "GI Assistant" - some program that could
> be of general assistance to humans dealing with similar
> problems across many domains. A diagnostics expert,
On Thu, Sep 18, 2008 at 9:44 PM, Trent Waddington <
[EMAIL PROTECTED]> wrote:
>
> > Claiming a copyright and successfully defending that claim are different
> > things.
>
> What ways do you envision someone challenging the copyright?
Take the hypothetical case of R. Marketroid, whose hardware is
On Fri, Sep 19, 2008 at 3:53 AM, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> Exactly. If opencog were ever to reach the point of
> popularity where one might consider a change of
> licensing, it would also be the case that most of the
> interested parties would *not* be under SIAI control,
> and th
On Fri, Sep 19, 2008 at 1:31 AM, Trent Waddington
<[EMAIL PROTECTED]> wrote:
> On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>> Let's distinguish between the two major goals of AGI. The first is to automate
>> the economy. The second is to become immortal through uploading
TITLE: Case-by-case Problem Solving (draft)
AUTHOR: Pei Wang
ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is dif
I would go further. Humans have demonstrated that they cannot be
trusted in the long term even with the capabilities that we already
possess. We are too likely to have ego-centric rulers who make
decisions not only for their own short-term benefit, but with an
explicit "After me the deluge" m
On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Let's distinguish between the two major goals of AGI. The first is to automate
> the economy. The second is to become immortal through uploading.
Umm, whose goals are these? Who said they are "the [..] goals of
AGI"? I'm
On Wednesday 17 September 2008, Terren Suydam wrote:
> I think a similar case could be made for a lot of large open source
> projects such as Linux itself. However, in this case and others, the
> software itself is the result of a high-level super goal defined by
> one or more humans. Even if no si
Keeping humans "in control" is neither realistic nor necessarily desirable,
IMO.
I am interested of course in a beneficial outcome for humans, and also for
the other minds we create ... but this does not necessarily involve us
controlling these other minds...
ben g
>
> If humans are to remain i
When an AGI writes a book, designs a new manufacturing base, forms a
decentralised form of regulation, etc., the copyright and patent system will
be futile, because anyone who deems the enclosed material useful can access
the same information and rewrite it in another form to create a
sepa
Steve: View #2 (mine, stated from your approximate viewpoint) is that simple
programs (like Dr. Eliza) have in the past and will in the future do things
that people aren't good at. This includes tasks that encroach on
"intelligence", e.g. modeling complex phenomena and refining designs.
Steve,
In
--- On Thu, 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>>Let's distinguish between the two major goals of AGI. The first is to
>>automate the economy. The second is to become immortal through uploading.
>
>Peculiarly, you are leaving out what to me is by far the most important and
>interesti
TITLE: Case-by-case Problem Solving (draft)
AUTHOR: Pei Wang
ABSTRACT: Case-by-case Problem Solving is an approach in which the
system solves the current occurrence of a problem instance by taking
the available knowledge into consideration, under the restriction of
available resources. It is diff
Ben,
IMHO...
On 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>
>
>> Let's distinguish between the two major goals of AGI. The first is to
>> automate the economy. The second is to become immortal through uploading.
>>
>
> Peculiarly, you are leaving out what to me is by far the most importan
>
> Let's distinguish between the two major goals of AGI. The first is to
> automate the economy. The second is to become immortal through uploading.
>
Peculiarly, you are leaving out what to me is by far the most important and
interesting goal:
The creation of beings far more intelligent than hum
2008/9/18 David Hart <[EMAIL PROTECTED]>:
> On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas <[EMAIL PROTECTED]>
> wrote:
>>
>> >
>> > I agree that the topic is worth careful consideration. Sacrificing the
>> > 'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
>> > AGI safety and/or
--- On Thu, 9/18/08, Bob Mottram <[EMAIL PROTECTED]> wrote:
> > And this is the problem. Although some people have
> the goal of making
> > an artificial person with all the richness and nuance
> of a sentient
> > creature with thoughts and feelings and yada yada
> yada.. some of us
> > are just
2008/9/17 JDLaw <[EMAIL PROTECTED]>:
> IMHO to all,
> There is an important morality discussion about how sentient life will
> be treated that has not received its proper treatment in your
> discussion groups. I have seen glimpses of this topic, but no real
> action proposals. How would you feel
2008/9/18 Trent Waddington <[EMAIL PROTECTED]>:
> And this is the problem. Although some people have the goal of making
> an artificial person with all the richness and nuance of a sentient
> creature with thoughts and feelings and yada yada yada.. some of us
> are just interested in making more i
On Thu, Sep 18, 2008 at 8:08 PM, David Hart <[EMAIL PROTECTED]> wrote:
> Original works produced by software as a tool where a human operator is
> involved at some stage are a different case from original works produced by
> software exclusively and entirely under its own direction. The latter has n
On Wed, Sep 17, 2008 at 10:54 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Pei,
>
> You are right, that does sound better than "quick-and-dirty". And more
> relevant, because my primary interest here is to get a handle on what
> normative epistemology should tell us to conclude if we do not have
>
On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> >
> > I agree that the topic is worth careful consideration. Sacrificing the
> > 'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
> > AGI safety and/or the prevention of abuse may indeed be necessary one
Hi List,
Also interesting to some of you may be VideoLectures.net, which offers
lots of interesting lectures. Although not all are of Stanford
quality, I still found many interesting ones by respected
lecturers. And there are LOTS (625 at the moment) of lectures about
Machine Learning... :)
h