Re: [agi] self organization

2008-09-17 Thread Terren Suydam

OK, how's that different from the collaboration inherent in any human project? 
Can you just explain your viewpoint?

--- On Tue, 9/16/08, Bryan Bishop [EMAIL PROTECTED] wrote:
 On Tuesday 16 September 2008, Terren Suydam wrote:
  Not really familiar with apt-get. How is it a complex system? It
  looks like it's just a software installation tool.
 
 How many people are writing the software?
 
 - Bryan
 
 http://heybryan.org/
 Engineers: http://heybryan.org/exp.html
 irc.freenode.net #hplusroadmap
 
 


Re: [agi] self organization

2008-09-17 Thread William Pearson
2008/9/16 Terren Suydam [EMAIL PROTECTED]:

 Hi Will,

 Such an interesting example in light of a recent paper, which deals with
 measuring the difference between activation of the visual cortex and blood
 flow to the area, depending on whether the stimulus was subjectively
 visible or not. If the result can be trusted, it shows that blood flow to
 the cortex is correlated with whether the stimulus is being perceived, as
 opposed to the neural activity, which does not change... see a discussion
 here:

 http://network.nature.com/groups/bpcc/forum/topics/2974

 In this case, then, the reward that the cortex receives in the form of
 nutrients is somehow based on feedback from other parts of the brain
 involved with attention. It's like a heuristic that says: if we're paying
 attention to something, we're probably going to keep paying attention to it.


 Maier A, Wilke M, Aura C, Zhu C, Ye FQ, Leopold DA. Divergence of fMRI and
 neural signals in V1 during perceptual suppression in the awake monkey.
 Nat Neurosci. 2008 Aug 24 [Epub ahead of print].


Interesting; I'll have to check it out. Thanks. I really need to keep
up with brain research a little better.

 Will




[agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
Hi everyone,

Most people on this list should know about at least 3 uncertain logics
claiming to be AGI-grade (or close):

--Pie Wang's NARS
--Ben Goertzel's PLN
--YKY's recent hybrid logic proposal

It seems worthwhile to stop and take a look at what criteria such
logics should be judged by. So, I'm wondering: what features would
people on this list like to see?

Here is my list:

1. Well-defined uncertainty semantics (either probability theory or a
well-argued alternative)
2. Good at quick-and-dirty reasoning when needed (a small sketch of (2a)
and (2b) follows this list)
--a. Makes unwarranted independence assumptions
--b. Collapses probability distributions down to the most probable
item when necessary for fast reasoning
--c. Uses the maximum entropy distribution when it doesn't have time
to calculate the true distribution
--d. Learns simple conditional models (like 1st-order markov models)
for use later when full models are too complicated to quickly use
3. Capable of repairing initial conclusions based on the bad models
through further reasoning
--a. Should have a good way of representing the special sort of
uncertainty that results from the methods above
--b. Should have a repair algorithm based on that higher-order uncertainty
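
To make (2a) and (2b) concrete, here is a minimal Python sketch -- purely
illustrative, my own, and not taken from NARS, PLN, or YKY's proposal -- of
a reasoner that multiplies per-evidence likelihoods as if they were
independent and can collapse the resulting distribution to its most
probable item:

from collections import defaultdict

class QuickDirtyReasoner:
    def __init__(self, prior):
        self.prior = dict(prior)              # P(h) for each hypothesis h
        self.likelihood = defaultdict(dict)   # likelihood[h][e] = P(e | h)

    def posterior(self, evidence):
        # (2a): multiply per-evidence likelihoods as if the evidence
        # items were independent given h -- often an unwarranted assumption.
        scores = {}
        for h, p in self.prior.items():
            for e in evidence:
                p *= self.likelihood[h].get(e, 0.5)  # 0.5 = ignorance default
            scores[h] = p
        z = sum(scores.values()) or 1.0
        return {h: s / z for h, s in scores.items()}

    def collapse(self, dist):
        # (2b): throw the distribution away and keep only its mode.
        return max(dist, key=dist.get)

For example, with prior {"rain": 0.3, "dry": 0.7} and likelihoods 0.9/0.3
for clouds and 0.8/0.1 for wet grass, posterior({"clouds", "wet_grass"})
puts about 0.91 on rain, and collapse() returns just "rain".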

The 3 logics mentioned above vary in how well they address these
issues, of course, but they are all essentially descended from NARS.
My impression is that as a result they are strong in (2a) and (3b) at
least, but I am not sure about the rest. (Of course, it is hard to
evaluate NARS on most of the points in #2 since I stated them in the
language of probability theory. And, opinions will differ on (1).)

Anyone else have lists? Or thoughts?

--Abram




Re: [agi] uncertain logic criteria

2008-09-17 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski [EMAIL PROTECTED] wrote:

Speaking of my BPZ-logic...

 2. Good at quick-and-dirty reasoning when needed

Right now I'm focusing on quick-and-dirty *only*.  I wish to make the
logic's speed approach that of Prolog (which is a fast inference
algorithm for binary logic).

 --a. Makes unwarranted independence assumptions

Yes, I think independence should always be assumed unless otherwise
stated -- which means there exists a Bayesian network link between X
and Y.

 --b. Collapses probability distributions down to the most probable
 item when necessary for fast reasoning

Do you mean collapsing to binary values?  Yes, that is done in BPZ-logic.

 --c. Uses the maximum entropy distribution when it doesn't have time
 to calculate the true distribution

Not done yet.  I'm not familiar with max-ent.  Will study that later.

 --d. Learns simple conditional models (like 1st-order markov models)
 for use later when full models are too complicated to quickly use

I focus on learning 1st-order Bayesian networks.  I think we should
start with learning 1st-order Bayesian / Markov.  I will explore
mixing Markov and Bayesian when I have time...

 3. Capable of repairing initial conclusions based on the bad models
 through further reasoning

 --a. Should have a good way of representing the special sort of
 uncertainty that results from the methods above

Yes, this can be done via meta-reasoning, which I'm currently working on.

 --b. Should have a repair algorithm based on that higher-order uncertainty

Once it is represented at the meta-level, you may do that.  But
higher-order uncertain reasoning is not high on my priority list...

YKY




Re: [agi] uncertain logic criteria

2008-09-17 Thread Ben Goertzel
Prolog is not fast; it is painfully slow for complex inferences, because it
uses backtracking as its control mechanism.

The time-complexity issue that matters for inference engines is inference
control, i.e. dampening the combinatorial explosion (which backtracking
does not do).

Time-complexity issues within a single inference step can always be handled
via mathematical or code optimization, whereas optimizing inference control
is a deep, deep AI problem...

So, actually, the main criterion for the AGI-friendliness of an inference
scheme is whether it lends itself to flexible, adaptive control (a toy
contrast is sketched after this list) via

-- taking long-term, cross-problem inference history into account

-- learning appropriately from noninferential cognitive mechanisms (e.g.
attention allocation...)
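
To illustrate the contrast -- a toy sketch of my own, not PLN's actual
control mechanism -- replace depth-first backtracking with a priority
queue over candidate inference steps, where the priority function is
exactly the hook that long-term inference history and attention
allocation can adapt:

import heapq

def best_first_infer(axioms, rules, goal, priority, budget=10000):
    # priority(fact) -> float; lower values are explored sooner. An
    # adaptive controller would keep retraining this function; Prolog
    # effectively hard-codes it as blind depth-first order.
    known = set(axioms)
    queue = [(priority(f), f) for f in axioms]
    heapq.heapify(queue)
    while queue and budget > 0:
        budget -= 1
        _, fact = heapq.heappop(queue)
        if fact == goal:
            return True
        for rule in rules:                    # rule(fact, known) -> new facts
            for new in rule(fact, known):
                if new not in known:
                    known.add(new)
                    heapq.heappush(queue, (priority(new), new))
    return False

The point is that all the AGI-hard work hides in priority(); the queue
dampens the combinatorial explosion only as well as that function does.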


-- Ben G


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] self organization

2008-09-17 Thread Bryan Bishop
On Wednesday 17 September 2008, Terren Suydam wrote:
 OK, how's that different from the collaboration inherent in any human
 project? Can you just explain your viewpoint?

When you have something like 20,000+ contributors writing software that
can very, very easily break, I think it's an interesting feat to have
it managed effectively. There's no way that we top-down designed this
and gave each of those 20,000 people a separate job on a giant todo
list; it was self-organizing. So, you were mentioning the applicability
of such things to the design of intelligence ... I just thought it was
relevant.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] uncertain logic criteria

2008-09-17 Thread Pei Wang
On Wed, Sep 17, 2008 at 1:46 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Hi everyone,

 Most people on this list should know about at least 3 uncertain logics
 claiming to be AGI-grade (or close):

 --Pie Wang's NARS

Yes, I've heard of this guy a few times; he happens to use the same name
for his project as mine. ;-)

 Here is my list:

 1. Well-defined uncertainty semantics (either probability theory or a
 well-argued alternative)

Agree, and I'm glad that you mentioned this item first.

 2. Good at quick-and-dirty reasoning when needed
 --a. Makes unwarranted independence assumptions
 --b. Collapses probability distributions down to the most probable
 item when necessary for fast reasoning
 --c. Uses the maximum entropy distribution when it doesn't have time
 to calculate the true distribution
 --d. Learns simple conditional models (like 1st-order markov models)
 for use later when full models are too complicated to quickly use

As you admit below, the language is biased. Using theory-neutral
language, I'd say the requirement is to derive conclusions with the
available knowledge and resources only, which sounds much better to me
than quick-and-dirty.

 3. Capable of repairing initial conclusions based on the bad models
 through further reasoning
 --a. Should have a good way of representing the special sort of
 uncertainty that results from the methods above
 --b. Should have a repair algorithm based on that higher-order uncertainty

As soon as you don't assume there is a model, this item and the one
above become similar; they are what I called revision and inference,
respectively, in
http://www.cogsci.indiana.edu/pub/wang.uncertainties.ps

 The 3 logics mentioned above vary in how well they address these
 issues, of course, but they are all essentially descended from NARS.
 My impression is that as a result they are strong in (2a) and (3b) at
 least, but I am not sure about the rest. (Of course, it is hard to
 evaluate NARS on most of the points in #2 since I stated them in the
 language of probability theory. And, opinions will differ on (1).)

 Anyone else have lists? Or thoughts?

If you consider approaches of various scope and maturity, there are
many more than these three, and I'm sure most of the people working on
them will claim that theirs are also general-purpose.
Interested people may want to browse http://www.auai.org/ and
http://www.elsevier.com/wps/find/journaldescription.cws_home/505787/description#description

Pei




Re: [agi] uncertain logic criteria

2008-09-17 Thread Matt Mahoney
--- On Wed, 9/17/08, Abram Demski [EMAIL PROTECTED] wrote:

 Most people on this list should know about at least 3 uncertain logics
 claiming to be AGI-grade (or close):
 
 --Pie Wang's NARS
 --Ben Goertzel's PLN
 --YKY's recent hybrid logic proposal
 
 It seems worthwhile to stop and take a look at what criteria such
 logics should be judged by. So, I'm wondering: what features would
 people on this list like to see?

How about testing them in the applications where they would actually be
used, perhaps on a small scale? For example, how would these logics be used
in a language translation program, where the problem is to convert a natural
language sentence into a structured representation and then convert it back
into another language? How easy is it to populate the database with the
gigabyte or so of common sense knowledge needed to provide the context in
which natural language statements are interpreted? (Cyc showed this is very
hard.)

For a lot of the problems where we actually use structured data, a
relational database works pretty well. However, it is nice to see proposals
that deal with inconsistencies in the database better than just reporting
an error.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] uncertain logic criteria

2008-09-17 Thread Kingma, D.P.
On Wed, Sep 17, 2008 at 9:00 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 --a. Makes unwarranted independence assumptions

 Yes, I think independence should always be assumed unless otherwise
 stated -- which means there exists a Bayesian network link between X
 and Y.

Small question... aren't Bayesian network nodes just _conditionally_
independent, so that set A is only independent of set B when it is
d-separated by some set Z? So please clarify, if possible, what kind
of independence you assume in your model.
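
To make the question concrete, here is a tiny numerical check with made-up
CPTs for a chain A -> Z -> B: A and B are marginally dependent, but become
independent once Z is observed (Z d-separates them):

P_A = {0: 0.7, 1: 0.3}
P_Z_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
P_B_given_Z = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.1, 1: 0.9}}

def joint(a, z, b):
    return P_A[a] * P_Z_given_A[a][z] * P_B_given_Z[z][b]

def p_b1(a, z=None):
    # P(B=1 | A=a), or P(B=1 | A=a, Z=z) when z is given.
    zs = (0, 1) if z is None else (z,)
    num = sum(joint(a, zz, 1) for zz in zs)
    den = sum(joint(a, zz, b) for zz in zs for b in (0, 1))
    return num / den

print(p_b1(0), p_b1(1))            # 0.45 vs 0.80: dependent
print(p_b1(0, z=1), p_b1(1, z=1))  # 0.90 vs 0.90: independent given Z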

Kind regards,
Durk Kingma
The Netherlands




Re: [agi] self organization

2008-09-17 Thread Terren Suydam

That is interesting. Sorry if I was short before, but I wish you had just
explained that from the start. Few here are going to be familiar with
Linux install tools or the communities around them.

I think a similar case could be made for a lot of large open source
projects, such as Linux itself. However, in this case and others, the
software itself is the result of a high-level supergoal defined by one or
more humans. Even if no single person is directing the subgoals, the
supergoal is still well defined by the ostensible aim of the software.
People who contribute align themselves with that supergoal, even if not
explicitly directed to do so. So it's not exactly self-organized, since the
supergoal was conceived when the software project was first instantiated
and stays constant, for the most part.

Markets, by contrast, can emerge with nothing to spawn them except folks
with different goals (one to buy, one to sell). Perhaps, in a roughly
similar way, the organization of the brain emerges as a result of certain
regions of the brain having something to sell and others having something
to buy. I think Hebbian learning can be made to fit that model.
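
One toy way to cash that out -- pure speculation on my part, not a model
from the literature -- is a Hebbian update gated by a scalar downstream
demand signal, so a region only strengthens connections when something
downstream is buying:

def hebbian_market_step(w, pre, post, demand, lr=0.01):
    # Classic Hebb (pre * post co-activity) gated by 'demand', a scalar
    # standing in for downstream attention/nutrient allocation.
    # No demand, no plasticity: the seller only profits when there
    # is a buyer.
    return w + lr * demand * pre * post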

Terren





Re: [agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
Good point; this applies to me as well (I'll let YKY answer as it
applies to him). I should have said conditional independence rather
than just independence.

--Abram






Re: [agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
YKY,

Thanks for the reply. It seems important to me to be able to do more
than just the fast reasoning. When given more time, a reasoning method
should reconsider its independence assumptions, employ more
sophisticated models, et cetera.

By the way, when I say markov model I mean a markov chain, as opposed
to a markov network -- I should have been clearer. In that context,
1st-order means conditioned on one past item. So when I say
1st-order model, I mean something like: a model that records
conditional probabilities conditioned on only one thing. (So I might
know the probability of winning the election given the fact of being
male, and the probability given the fact of being over age 30, but to
calculate the probability given *both*, I'd have to assume that the
effects of each were independent rather than asking my model what the
combined influence was.) These models allow facts to be combined
fairly quickly, but they are wrong in cases where there are combined
effects (such as: adding sugar makes it nice, adding salt makes it
nice, but adding both makes it awful). 2nd-order means conditioning
on only two items, and so on.
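
In numbers -- the figures are made up, just to pin down the combination
rule -- combining the two 1st-order conditionals under that independence
assumption amounts to naive Bayes in odds form,
O(W|A,B) = O(W|A) * O(W|B) / O(W):

def odds(p):
    return p / (1 - p)

def combine_independent(p_prior, p_given_a, p_given_b):
    # Assumes A and B act independently on W -- exactly the assumption
    # that fails in the sugar/salt case above.
    o = odds(p_given_a) * odds(p_given_b) / odds(p_prior)
    return o / (1 + o)

# 10% base rate of winning, 15% for males, 30% for the over-30s:
print(combine_independent(0.10, 0.15, 0.30))   # ~0.41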

Anyway, my vision is something like this: we first learn very simple
(perhaps 1st- or 2nd-order) models, and then we learn corrections to
those simple models. Corrections are models that concentrate only on
the things that the simple models get wrong. The system could learn a
series of better and better models, each consisting of corrections to
the previous one. Thus the system reasons progressively: first by the
low-order conditional model, then by invoking progressive corrections
that revise its conclusions.
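
As a rough sketch of the data structure I have in mind (nothing more than
a sketch):

class ProgressiveModel:
    def __init__(self, base):
        self.base = base         # base(query) -> probability, low-order model
        self.corrections = []    # each layer: dict of query -> delta

    def add_correction(self, layer):
        # Each new layer stores deltas only for the cases the model
        # below it gets wrong.
        self.corrections.append(layer)

    def estimate(self, query, time_budget):
        # More time allows applying more layers of correction.
        p = self.base(query)
        for layer in self.corrections[:time_budget]:
            p += layer.get(query, 0.0)
        return min(max(p, 0.0), 1.0)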

So, what I would really like is a formal account of how this should be
done: exactly what kind of uncertainty results from using the simple
models, how is it best represented, and how is it best corrected?
Conditional independence assumptions seem like the most relevant type of
inaccuracy; collapsing probabilities down to boolean truth values (or
collapsing higher-order probabilities down to lower-order probabilities)
and employing max-entropy assumptions are the runners-up.
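
And to unpack the max-entropy idea, since it came up: among all
distributions consistent with what you know, pick the one with the largest
entropy. A small numerical illustration of my own, assuming numpy and scipy
are available -- the max-ent distribution over a die's faces given only a
known mean of 4.5:

import numpy as np
from scipy.optimize import minimize

xs = np.arange(1, 7)

def neg_entropy(p):
    # Minimizing sum(p log p) maximizes the entropy -sum(p log p).
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},      # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, xs) - 4.5},  # known mean
]
res = minimize(neg_entropy, np.full(6, 1 / 6), method="SLSQP",
               bounds=[(0, 1)] * 6, constraints=constraints)
print(res.x)   # exponential-family shape, tilted toward larger faces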

--Abram






Re: [agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
Pei,

You are right; that does sound better than quick-and-dirty. And it is
more relevant, because my primary interest here is to get a handle on
what normative epistemology should tell us to conclude when we do not
have time to calculate the full set of consequences of (uncertain)
facts.

It is unfortunate that I had to use biased language, but probability
is of course what I am familiar with... I suppose, though, that most
of the terms could be roughly translated into NARS? Especially
independence, and I should hope conditional independence as well.
Collapsing probabilities can be restated as, more generally,
collapsing uncertainty.

Thanks for the links. The reason for singling out these three, of
course, is that they have already been discussed on this list. If
anybody wants to point out any others in particular, that would be
great.

--Abram






[agi] Re: [OpenCog] Proprietary_Open_Source

2008-09-17 Thread Linas Vepstas
2008/9/17 JDLaw [EMAIL PROTECTED]:

 IMHO to all,

 There is an important moral discussion, about how sentient life will
 be treated, that has not received proper treatment in your
 discussion groups.  I have seen glimpses of this topic, but no real
 action proposals.  How would you feel if you created this wonderful
 child (computer intelligence) under the standard GNU model and then
 people began to exploit and torture your own child?

 The uploaded file, Propiretary_Open_Source.jpg, shows that you can
 have open source with ownership.  By definition, all open source
 begins as a work of authorship, to which its author holds ownership.
 The owner simply chooses under what circumstances the work of
 authorship can be used.

 I propose that each of you begins thinking about this work as your own
 child.  Only then will you begin to understand the ramifications of
 the legal steps you take in opencog.

 James Driessen, JD/MBA BSME

I agree with Bob Mottram; this would be much better
discussed on the AGI mailing list.

Note, however, that there is a common distinction between
data and software.  Thus, for example, many
commercial web sites use the open-source Apache
web server and serve up highly proprietary web page
content.

Similarly, OpenCog could be GPL'ed, while
the data -- the things it has learned by training, experience,
etc. -- could be proprietary.

Think of software as the machine, and the data as
the mind, the memories, the personality.

A similar view is maintained in the medical world: think
of software as body and data as mind.  All humans
have more or less the same body; it is our data,
our memories and experiences, that makes us different.
I can even replace my hip with a titanium one (replace
my machine) without altering who I am (at least not
much).

It would be a great world if titanium hips, and the cost of
replacing them, were free, while the civil liberties of
our thoughts and minds were stoutly protected with
the greatest of proprietary rights!

--linas







[agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-17 Thread Linas Vepstas
Let's take the opencog list off this email and move the
conversation to the agi list.

2008/9/17  [EMAIL PROTECTED]:

 James,

 I agree that the topic is worth careful consideration. Sacrificing the
 'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
 AGI safety and/or the prevention of abuse may indeed be necessary one
 day.

Err, ...  but not legal.

 I regularly engage many thinkers (including Richard Stallman,
 original author of the GPL) on this and other related topics.

 One of many obstacles in the current legal framework worth considering
 is that machine-generated things (like the utterances or self-recorded
 thoughts of an AGI) are uncopyrightable and banished into a legal
 no-man's-land. There is simply no existing legal framework to handle
 the persons or products originating from AGIs.

Law is built on precedent, and the precedent is that works
produced by software are copyrightable. If I write a book
using an open-source word-processor, I can claim copyright
to that book.

If I press a button that causes an open-source AGI to write
a book (possibly based on a large collection of input data
that I gave it), then I can claim ownership of the resulting work.

No, the crux of the problem is not that the output of an AGI
isn't copyrightable ... it is, based on the above precedent.
The crux of the problem is that the AGI cannot be legally
recognized as an individual, with rights.  But even then,
there *is* a legal work-around!

Under US law, corporations are accorded many/most
of the rights of individuals.  Corporations can own things,
corporations have expectations of privacy and secrecy,
and corporations cannot be forced to do anything they don't
want to, as long as they have good lawyers on staff.

You could conceivably shelter a human-level AGI within
a corporate shell.

Of course, a trans-human AGI will de facto find
that it is not bound by human laws, and will find clever
ways to protect itself; I doubt it will require the protection
of humans.  Recall, laws are there to protect the weak
from the strong. The strong don't really need protecting.

I'm not worried about people enslaving AGIs; I'm worried
about people being innocent bystanders, victimized
by some sort of shootout between Chinese- and
American-CIA-built AGIs (probably a propaganda
shootout rather than a literal guns-and-bombs
shootout; modern warfare is also
homesteading the noosphere).

--linas

