[agi] fuzzy-probabilistic logic again

2009-01-12 Thread YKY (Yan King Yin)
I have refined my P(Z) logic a bit.  Now the truth values are all
unified to one type, a probability distribution over Z, which has a
pretty nice interpretation.  The new material is in sections 4.4.2 and
4.4.3.

http://www.geocities.com/genericai/P-Z-logic-excerpt-12-Jan-2009.pdf

I'm wondering if anyone is interested in helping me implement the
logic or develop an AGI based on it?  I have already written part of
the inference engine in Lisp.

Also, is anyone here working on fuzzy or probabilistic logics, other
than Ben and Pei and me?

YKY




Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread YKY (Yan King Yin)
> Do you have any experimental results supporting your proposed probabilistic
> fuzzy logic implementation?  How would you devise such an experiment (for
> example, a prediction task) to test alternative interpretations of logical
> operators like AND, OR, NOT, IF-THEN, etc?  Maybe you could manually encode
> knowledge in your system (like you did with Goldilocks) and test whether it
> can make inferences?  I'd be more interested to see results on real data,
> however.

I can implement a usable inference engine for PZ logic in about a month.

I have a simple inference algorithm that people can use to test
alternative definitions of logical operators, if they want to.
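
For concreteness, here is a minimal sketch (in Python rather than my
Lisp, with made-up names) of what "alternative definitions of logical
operators" means: the standard t-norm/t-conorm families, pluggable
into a single evaluation loop.  Illustrative only, not the actual P(Z)
definitions.

# Pluggable truth-functional operators: run the same evaluator
# under different interpretations of AND/OR/NOT.
godel       = {"AND": min, "OR": max}
product     = {"AND": lambda a, b: a * b,
               "OR":  lambda a, b: a + b - a * b}
lukasiewicz = {"AND": lambda a, b: max(0.0, a + b - 1.0),
               "OR":  lambda a, b: min(1.0, a + b)}

def NOT(a):
    return 1.0 - a   # standard negation, shared by all three families

def evaluate(expr, env, ops):
    """Evaluate ("AND", "p", ("NOT", "q"))-style expressions against
    truth assignments in env, using operator family ops."""
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    if op == "NOT":
        return NOT(evaluate(args[0], env, ops))
    a, b = (evaluate(x, env, ops) for x in args)
    return ops[op](a, b)

env = {"p": 0.8, "q": 0.6}
for name, ops in [("Godel", godel), ("product", product),
                  ("Lukasiewicz", lukasiewicz)]:
    print(name, evaluate(("AND", "p", ("OR", "q", ("NOT", "p"))), env, ops))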

Some use cases of OpenCog's PLN may be tested too...

I'm still a bit far from representing Goldilocks or Little Red
Riding Hood.  Doing that requires more than just the base logic.
You'd need a set of commonsense concepts for space, time, objects,
people, etc.  Cyc's KB may be re-used, but that seems to be a big job.

> (Also, instead of a disclaimer about political correctness, couldn't you just
> find examples that don't reveal your obsession with sex?)

OK... I will change them in the next version =)

YKY




Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread YKY (Yan King Yin)
> (Also, instead of a disclaimer about political correctness, couldn't you
> just find examples that don't reveal your obsession with sex?)

OK, I've eliminated one instance.

http://www.geocities.com/genericai/P-Z-logic-excerpt-12-Jan-2009.pdf

There are still two mentions of sex; I'll eliminate another one soon.
For the last one (John has cybersex with 1000 women) it is very hard
to think of an equally convincing replacement...

I've never been in a workplace, so it's hard for me to know what is
proper conduct.  When I was in college some professors cracked sexist
jokes on several occasions, so I thought that was normal.  Also, it's
not directed at anyone specific.  It's only there to make the material
more interesting to read.

YKY




Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread YKY (Yan King Yin)
On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov robot...@gmail.com wrote:

> I'm more interested in understanding the relationship between an
> inference system and the environment (the rules of the game) that it
> allows one to reason about,

Next thing I'll work on is the planning module.  That's where the AGI
interacts with the environment.

> ... about why and how a given approach to reasoning is
> expected to be powerful.

I think if PZ logic can express a great variety of uncertain
phenomena, that's good enough.  I expect it to be very efficient too.

> It looks like many logics become too wrapped
> up in themselves, and their development as ways to AI turns into a
> wild goose chase.

Yeah, I'm trying not to waste too much time on non-essential details...

YKY




Re: [agi] Building a machine that can learn from experience

2008-12-18 Thread YKY (Yan King Yin)
> DARPA buys G. Tononi for $4.9 million!  For what amounts to little more
> than vague hopes that any of us here could have dreamed up.  Here I am, up to
> my armpits in an actual working proposition with a real science basis...
> scrounging for pennies.  Hmmm... maybe if I sidle up and adopt an aging Nobel
> prizewinner... maybe that'll do it.
>
> Nah, too cynical for the festive season.  There's always 2009!  You never
> know.

You talked about building your 'chips'.  Just curious, what are you
working on?  Is it hardware-related?

YKY




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
> You can start a PhD without having an MS first, but you'll still need to
> take all the coursework corresponding to the MS.

What kind of courses are the MS ones?  I may or may not have
that background knowledge, through self-teaching...

> And I think this makes sense!  The PhD is supposed to indicate that you have
> broadly-based expertise in a field, as well as capability to do independent
> research...

If I lack the background knowledge, it may be a good thing to catch up
on it.  I'm just worried that it may be irrelevant to AGI and a waste
of time...




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
Thanks for all the info...

I'll try both the UK and the US... (OK, and Ireland too!)




[agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
Hi group,

I'm considering getting a PhD somewhere, and I've accumulated some
material for a thesis in my 50%-finished AGI book.  I think getting a
PhD will put my work in a more rigorous form and get it published.
It may also help me get funding afterwards, either in academia or in
the business world.

I want to maximize the time spent on my thesis while minimizing time
spent on other 'coursework' (i.e., things that aren't directly related
to my project: exams, classes, homework, etc.).  Which universities
should I look at?  Or should I contact some professors directly?

Thanks! =)
YKY




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
> I got my PhD there in 1989 in math, not AI.

Let me see... you were about 22 in 1989?  I was still an undergrad at
that age...




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
> On the contrary, getting a PhD is an astoundingly poor strategy for raising
> $$ for a startup.  If you have a talent for biz sufficient to raise $$ for a
> startup, you can always get some prof to join your team to lend you academic
> credibility.
>
> It is also useful in terms of lending you more credibility when you talk
> about your own wacky research ideas.  This may be part of YKY's motivation,
> and it's a genuinely meaningful one.  But having credibility when talking
> about research ideas is not particularly well correlated with being able to
> raise business funding.

Getting business funding may be an inherently hard thing to do.  So,
other things being equal, spending some time + money on a PhD degree
may still be better than all other options.  That's my current
reasoning...




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
> Do you have an MS degree?

I don't have an MS.

> In Europe, it's sometimes the case that after you get an MS, you can do a
> PhD with no additional coursework, only thesis work.

That sounds good, but in Europe I may need to spend some time learning
a third language... =(

> In the US, considerably more coursework is generally required (with rare
> exceptions).

Yeah, that's what I'm worried about...

Can I start the PhD directly without getting the MS first?

Thanks =)
YKY




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
> If... you want a non-research career, a Ph.D. is definitely not for you.

I want to be either an entrepreneur or a researcher... it's hard to
decide.  What does AGI need most?  Further research, or a sound
business framework?  It seems that both are needed...




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
How about funding from academia -- would that be significant?  I mean,
can I expect to get research grants right after I get a PhD?




Re: [agi] Should I get a PhD?

2008-12-17 Thread YKY (Yan King Yin)
> Depends how much time your thesis supervisor has you writing
> grant applications during your third year ;)

Generally speaking, if the $$ amount of research grants is bigger
than, say, the return from investing my tuition fees in some business
projects, then the PhD seems worth it (in terms of $$)...?




[agi] cognitive linguistics (was: Ethics of computer-based cognitive experimentation)

2008-11-06 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 10:18 PM, Mike Tintner [EMAIL PROTECTED] wrote:
> YKY,
>
> As I was saying, before I so rudely interrupted myself - re the narrow AI vs
> AGI problem difference:
>
> *the syllogistic problems of logic - is Aristotle mortal? etc. - which you
> mainly use as examples - are narrow AI problems, which can be solved
> according to precise rules;
>
> however:
>
> *metacognitive problems - like *which logic should I use for syllogistic
> problems, e.g. PLN/NARS?* - (which also concerns you) - are AGI problems;
> there are no rules for solving them, and no definitive solutions, only
> possible, temporary resolutions to someone's satisfaction.  Those are
> problems which you have been discussing and could continue to discuss
> interminably.  And they are also problems which you will have - and any
> agent considering them should have - fear considering, because you can get
> endlessly bogged down in them.
>
> [n.b. psychologically, fear comes in many different degrees, from panic to
> mild wariness]
>
> similarly:
>
> *is cybersex sex?* (another of your problems) - if treated by some
> artificial logic with artificial rules (which might end up saying yes,
> approx. 0.60% sex), is a narrow AI problem; however, if treated
> realistically, *philosophically*, relying on language, this is an AGI
> problem, which can be and may well be considered interminably by real
> philosophers (and lawyers) into the next century (*did* Clinton have sex?),
> and for which there are neither definitive rules nor solution.  Again fear
> is, and has to be, a part of considering such problems - how much life do
> you have to spend on them?  Even the biggest computer brain in the world,
> the superest AGI, will not be able to solve them definitively, and must be
> afraid of them.
>
> ditto:
>
> *Any philosophical problem of definition*: what is mind?  What is
> consciousness?  What is intelligence?  Again these are infinitely
> open-ended, open-means problems, which have attracted and will continue to
> attract interminable consideration.  You are, and should be, afraid of
> getting too deep into them.
>
> *Any linguistic problem of definition*: what does "honour", "beautiful",
> "big", "small" etc. mean? is an AGI problem.  AFAIK literally any word in
> the language is open to endless definition and redefinition and is
> essentially an AGI problem.  By contrast, *what is ETFUBAIL an anagram of?*
> is a narrow AI problem - and no need for any fear there.
>
> *Defining/describing almost anything* - describe YKY or Ben Goertzel; what
> kind of guys/programmers are they? - these are AGI problems.  You could
> consider them forever.  You may be skilled at resolving them quickly, and
> able to come up with a brief description, but that again, while perhaps
> satisfactory, will never do the subject even remotely perfect justice, and
> could be endlessly improved and sophisticated.
>
> In general, your instinct - and most AGI-ers' instinct - seems to be,
> whenever confronted with an AGI problem, to try and reduce it to a narrow
> AI problem - from a real, open-ended/open-means-and-rules problem to an
> artificial, closed-ended, closed-means-and-rules problem.  Then, yes, you
> don't need fear and other emotions, but that's not AGI.
>
> YKY: I just want to point out that
> AGI-with-emotions is not a necessary goal of AGI.
>
> Which AGI as distinct from narrow AI problems do *not* involve
> *incalculable and possibly unmanageable risks*? -
>
> a) risks that the process of problem-solving will be interminable?
> b) risks that the agent does not have the skills necessary for the
> problem's solution?
> c) risks that the agent hasn't defined the problem properly?
>
> That's what the emotion of fear is - (one of the emotions essential for
> AGI) - a system alert to incalculable and possibly unmanageable risks.
> That's what the classic fight-or-flight response entails - maybe I can deal
> with this danger but maybe I can't and better avoid it fast.


You seem to be heavily influenced by cognitive linguistics theory.
Those people never come up with computational algorithms; all they do
is talk.  You may have caught that disease too =)

I still think the most promising approach is the logic-based one, but
I'll add special algorithms to take care of some of the phenomena
pointed out by cognitive linguistics.  Right now I'm still learning
about it.

YKY




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-06 Thread YKY (Yan King Yin)
On Thu, Nov 6, 2008 at 12:55 AM, Harry Chesley [EMAIL PROTECTED] wrote:

>> Personally, I'm not making an AGI that has emotions...
>
> So you take the view that, despite our minimal understanding of the basis of
> emotions, they will only arise if designed in, never spontaneously as an
> emergent property?  So you can safely ignore the ethics question.

Well, my AGI system would take special measures to ensure that
emotions do *not* emerge, by making the system acquire *knowledge* of
human values instead of having emotions occurring at the AGI's
*perceptual* level.

YKY




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-05 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

>> Personally, I'm not making an AGI that has emotions, and I doubt if
>> emotions are generally desirable in AGIs, except when the goal is to
>> make human companions (and I wonder why people need them anyway, given
>> that there're so many -- *too* many -- human beings around already).
>
> People may want to simulate loved ones who have died, if the simulation is
> accurate enough to be indistinguishable.  People may also want to simulate
> themselves in the same way, in the belief it will make them immortal.


Yeah, I should qualify my statement:  different people will want
different things out of AGI technology.  Some want brain emulation of
themselves or loved ones, some want android companions, etc.  All
these things take up free energy (a scarce resource on earth), so it
is just a new form of the overpopulation problem.  I am not against
any particular form of AGI application;  I just want to point out that
AGI-with-emotions is not a necessary goal of AGI.




Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-04 Thread YKY (Yan King Yin)
On Wed, Nov 5, 2008 at 6:05 AM, Harry Chesley [EMAIL PROTECTED] wrote:
> The question of when it's ethical to do AGI experiments has bothered me
> for a while.  It's something that every AGI creator has to deal with
> sooner or later if you believe you're actually going to create real
> intelligence that might be conscious.  The following link is a blog essay
> on the subject, which describes my current thinking on the subject, such
> as it is.  There's clearly much more that needs to be worked out.
> Comments, either here or at the blog, would be appreciated.
>
> http://www.mememotes.com/meme_motes/2008/11/ethical-experimentation-on-cognitive-entities.html


Personally, I'm not making an AGI that has emotions, and I doubt if
emotions are generally desirable in AGIs, except when the goal is to
make human companions (and I wonder why people need them anyway, given
that there're so many -- *too* many -- human beings around already).
YKY




[agi] virtual credits again

2008-10-29 Thread YKY (Yan King Yin)
Hi Ben and others,

After some more thinking, I've decided to try the virtual credit approach after all.

Last time, Ben's argument was that the virtual credit method confuses
for-profit and charity emotions in people.  At the time it sounded
convincing, but after some thinking I realized that it is
completely untrue.  My approach is actually the more unequivocally
for-profit one, and Ben's accusation applies more aptly to OpenCog's
stance.  I'm afraid OpenCog has some ethical problems by
straddling for-profit and charity.  For example:  why do you
need funding to do charity?  If you want to do charity, why not do it
out of your own pockets?  Why use a dual license if the final product
is supposed to be free for all?  Etc.

It is good for a company to be charitable, but you're forcing me to do
charity when I am having financial problems myself.  Your charity
victimizes me and other people trying to make money in the AGI
business.

I can understand why you dislike my approach:  you have contributed to
AGI in many intangible ways, such as organizing conferences,
increasing public awareness of AGI, etc.  I respect you for these
efforts.  Under the virtual credit system it would be very difficult
to assign credits to you -- not impossible -- but if you tried to
claim too many credits you'd start to look like a Shylock, and that
could be very embarrassing.  Secondly, there may be other people on
the OpenCog devel team who dislike virtual credits for their own
reasons, and you may want to placate them.

So, either we confront the embarrassing problem and try to assign ex
post facto credits, or we keep our projects separate.  The world may
be able to accommodate two or more AGIs (it may actually be a healthy
thing, from a complex-systems perspective).  I don't suppose my
virtual credit approach can universally satisfy all AGI developers.
But neither can your approach (under which I cannot get any guarantee
of financial rewards).

I'm open to other suggestions, but if there aren't any, I'll proceed
with virtual credits.  I guess some people will like it, and some will
hate it.  That is only natural.  At least I'm honest about my motives.

PS.  The argument that AGI should be free because it is such an
important technology can equally apply to many other technologies,
such as medicine and (later) life extension or uploading.  It can even
apply to things like food, housing, citizenship, computer hardware,
etc.  In the end I think we need to admit that the good way lies
somewhere between charity and for-profit.  And my project aims to be
charitable in its own way too.  The only difference between my way and
OpenCog's is that I want to make the accounting of contributions
transparent, and to reward contributors financially, while being
charitable in some other ways, depending on how much profit we make.
(Making the software open source is already very charitable, and we
may not be able to make that much money at all.)

YKY




Re: [agi] virtual credits again

2008-10-29 Thread YKY (Yan King Yin)
On Wed, Oct 29, 2008 at 6:34 PM, Trent Waddington wrote:

> Don't forget my argument..

I don't recall hearing an argument from you.  All your replies to me
have been rather rude one-liners.

YKY




Re: [agi] open or closed source for AGI project?

2008-10-11 Thread YKY (Yan King Yin)
On Sun, Oct 12, 2008 at 8:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

> I don't think that's a major difference conceptually, as there's a
> constant-time conversion between the two representations.

In my approach (which is not even implemented yet) the KB contains
rules that are used to construct propositional Bayesian networks.  The
rules contain variables in the sense of FOL.  It's not clear how this
is done in OCP.
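
To illustrate what I mean (a minimal sketch with made-up rule syntax
and predicate names -- not my actual implementation): each rule is
grounded over the known constants, and every ground instance
contributes a parent -> child link in the propositional network.

from itertools import product

# Sketch: rules with FOL variables grounded into a propositional
# Bayesian network.  Rule format and names are illustrative only.
constants = ["john", "mary"]
# (premise predicate, conclusion predicate, conditional probability)
rules = [("smokes", "cancer", 0.3),
         ("cancer", "coughs", 0.8)]

edges = {}   # child atom -> list of (parent atom, probability)
for (pre, post, p), c in product(rules, constants):
    parent, child = f"{pre}({c})", f"{post}({c})"
    edges.setdefault(child, []).append((parent, p))

for child, parents in sorted(edges.items()):
    print(child, "<-", parents)
# e.g. cancer(john) <- [('smokes(john)', 0.3)]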

There are other differences from OCP.  As you know, I plan to use PZB
logic, and I've written part of a Lisp prototype.  I'm not sure what
the best way to open-source it is -- integrating with OCP, or as a
separate branch, or..?

YKY




Re: [agi] open or closed source for AGI project?

2008-10-11 Thread YKY (Yan King Yin)
On Sun, Oct 12, 2008 at 12:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

> OpenCog has VariableNodes in the AtomTable, which are used to represent
> variables in the sense of FOL ...

I'm still unclear as to how OC performs inference with variables,
unification, etc.  Maybe you can explain that during a tutorial
session?

The sentential approach is more classical and helps me think more
clearly about optimization issues (i.e., inference control), which is
a big unsolved problem.

Maybe I can open-source my current code (after tying up some loose
ends) on LaunchPad and use the same license as OCP?

YKY




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 11:33 PM, Russell Wallace
[EMAIL PROTECTED] wrote:

>> I was trying to find a way so we can collaborate on one project, but
>> people don't seem to like the virtual credit idea.
>
> No, no we don't :-)

Why not?




Re: [agi] open or closed source for AGI project?

2008-10-10 Thread YKY (Yan King Yin)
> As has been said previously, there have been AI projects in the past
> which tried this credits or shares route which turned out to be very
> unsuccessful.  The problem with issuing credits is that, rightly or
> wrongly, an expectation of short-term financial reward is built up in
> the minds of some contributors.  When this expectation is not realised
> things can become unpleasant.

People who bought Lehman Brothers probably feel rather unpleasant
now.  But that's not a reason to abolish the stock market.  Caveat
emptor: let the buyer beware.

> There are additional problems with the credits idea.  Under such a
> system I suspect that as time goes on an increasing amount of effort
> will be needlessly expended in arguments over who gets how many
> credits and how such credits are fairly apportioned (I did more work
> on this than you did, you didn't count the number of hours I spent
> researching rather than code writing, etc.), detracting from the work
> which needs to be done.

What would be a better solution?  Right now, *everybody*'s work goes
unaccounted for.

> There is no incompatibility with the idea of developing software under
> a FOSS licence, and then making money out of systems or services
> peripheral to that.  Since the construction of AGIs is likely to be a
> long-term effort the open source methodology seems appropriate, and
> under the current economic circumstances paying programmers to work on
> a project whose culmination may be years ahead in the future will
> appear increasingly unattractive.

It will be incompatible with some licenses;  but then many newer
licenses are friendly to business.

A lot of work remains to be done in the AGI core, as well as the
periphery.  Lots of things to be built.

There is no reason why a task, just because it is long-term, shouldn't
be paid for.  Besides, my optimistic estimate for AGI is 5-10 years.
Not very long-term at all.

> Incidentally, once a true AGI is created the current software
> development paradigm becomes obsolete anyway.

This doesn't sound very logical.  Food will turn into excretion anyway, so...?

YKY




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 8:13 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
> A good idea and a euro will get you a cup of coffee.  Whoever said you
> need to protect ideas is just shilly-shallying you.  Ideas have no
> market value; anyone capable of taking them up already has more ideas
> of his own than time to implement them.  Don't take my word for it,
> look around you; do you see people on this list going, "I'm ready to
> start work, someone give me an idea please"?  No, you see people going,
> "here are my ideas", and other people going, "great, thanks, but I've
> already got my own".
>
> What people will pay for is to have their problems solved.  If you want
> to get paid for AI, I think the best hope is to make it an open-source
> project, and offer support, consultancy etc.  It's a model that has
> worked for other types of open source software.

But how do you explain the fact that many of today's top financially
successful companies rely on closed-source software?  A recent example
is Google's search engine, which remains closed source.  If they had
open-sourced their search engine, my guess is that there would be many
more copy-cats now all over the world.

True, ideas are in abundance, but in the same design space people tend
to converge on the same ideas.  So competition depends on those few
ideas.  Also, there are innovative ideas that solve some bottleneck
problems, which are very valuable.

YKY




Re: [agi] universal logical form for natural language

2008-10-07 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 7:55 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

> Cyc's DB is not publicly modifiable, but it's **huge** ... big enough that
> its bulk would take others a really long time to replicate.

A competent AGI should be able to absorb Cyc's knowledge, and I will
probably do so (unless it turns out to be very difficult).  If the Cyc
KB is in FOL purely, it should be relatively easy.

> Why don't you find out if you can do anything interesting w/ Cyc's existing
> **publicly available** DB, before setting about making your own.  You may
> find out, just like Cyc has, that possessing such a DB doesn't really get
> you anywhere in terms of creating AGI ... or even in terms of creating
> surpassingly useful narrow-AI systems...

I'm building a prototype AGI now, and will give it a very small test
KB so it can parse some simple sentences and do some inferences.

The next step would be to absorb Cyc's KB and then let online users expand it.

Also, I will provide some learning algorithms to be used in
conjunction with user inputs -- for example, users can give examples
of reasoning in NL, and the AGI will learn the logical rules from
those examples.

This seems to me to be the right way towards AGI...

YKY




Re: [agi] open or closed source for AGI project?

2008-10-07 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 9:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:

> But whichever route you pick, follow it with conviction.  If you flag
> your project "open source" and then start talking about "protecting
> your ideas" and trying to measure the exact value of everybody's
> contributions so everybody gets just what's coming to them and no
> more, people will avoid it like a week-dead rat.  You might have the
> best intentions in the world, but those intentions need to come across
> clearly and unambiguously in how you present your strategy.

I was trying to find a way so we can collaborate on one project, but
people don't seem to like the virtual credit idea.

Even if I go open source, the number of significant contributors may
still be 0 (besides myself)...

YKY




Re: [agi] universal logical form for natural language

2008-10-06 Thread YKY (Yan King Yin)
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED] wrote:

> One way of going about it would be to let each person create their own
> instance, which would have access to the global body of facts but
> would be somewhat separate.  This would prevent people from
> contaminating the global knowledge base.  Some sort of peer rating
> system could then allow knowledge from the best AIs to merge into the
> global KB.

After some thinking, I think this may not be needed -- the AGI itself
has the ability to judge what is right or wrong based on evidence, and
it can resolve conflicts in beliefs, including judging who is a liar
or joker.

Maybe all we need is just a simple interface for entering facts...

YKY




[agi] open or closed source for AGI project?

2008-10-06 Thread YKY (Yan King Yin)
Hi all,

I need some advice on whether to go open or closed source for my AGI
project.  This is a very difficult choice, as there are pros and cons
on each side.

The main argument against open source is that we cannot protect
innovative ideas from being copied by others.  This may be a
disincentive for someone with good ideas to contribute.  My partial
solution is to give virtual credits to contributors, but even this
does not offer secure protection.  I remember reading a book on
innovation that says it is important to provide an environment where
new ideas are protected.  Sometimes greed is a good thing that drives
innovation and progress.

But open source also has many attractions -- there is something very
satisfying about users being able to examine the source code.  Finding
bugs and recruiting people are also easier.

So the key question is whether there will be enough open-source
contributors with innovative ideas and expertise in AGI...

YKY




Re: [agi] universal logical form for natural language

2008-10-06 Thread YKY (Yan King Yin)
On Tue, Oct 7, 2008 at 11:50 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

> I still don't understand why you think a simple interface for entering facts
> is so important...  Cyc has a great UI for entering facts, and used it to
> enter millions of them already ... how far did it get them toward AGI???

Does Cyc have a publicly modifiable AND centrally maintained KB?
That's what I'm trying to make...

YKY




Re: [agi] universal logical form for natural language

2008-09-30 Thread YKY (Yan King Yin)
On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

>> We are talking about 2 things:
>> 1.  Using an ad hoc parser to translate NL to logic
>> 2.  Using an AGI to parse NL
>
> I'm not sure what you mean by "parse" in step 2.

Sorry, to put it more accurately:

#1 is using an ad hoc NLP subsystem to translate NL to logic.

#2 is building a language model entirely in the AGI's logical
language, thus reducing the language understanding and production
problems to inference problems.  This also allows life-long learning
of language in the AGI.

I think #2 is not that hard.  The theoretical basis is already there.
Currently one of the mainstream methods of translating NL to logic is
to use FOL + lambda calculus.  Lambda expressions are used to
represent partial logical entities such as a verb phrase.
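
For example (a toy sketch of the standard Montague-style move; the
lexicon entries are made up for illustration):

# NL -> logic via lambda terms: a transitive verb denotes
# \y.\x.verb(x, y); applying it to its object and then its subject
# yields a complete logical form.  Toy lexicon, illustrative only.
lexicon = {
    "John":  "john",
    "Mary":  "mary",
    "loves": lambda y: lambda x: f"loves({x},{y})",
}

def parse_svo(subj, verb, obj):
    """Compose a subject-verb-object sentence into a logical form."""
    vp = lexicon[verb](lexicon[obj])   # verb phrase: \x.loves(x, mary)
    return vp(lexicon[subj])           # apply to the subject

print(parse_svo("John", "loves", "Mary"))   # -> loves(john,mary)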

YKY




Re: [agi] universal logical form for natural language

2008-09-30 Thread YKY (Yan King Yin)
On Tue, Sep 30, 2008 at 12:50 PM, Linas Vepstas [EMAIL PROTECTED] wrote:

>> I'm planning to make the project open source, but I want to have a web
>> site that keeps a record of contributors' contributions.  So that's
>> taking some extra time.
>
> Most wikis automatically keep track of who made
> what changes, when.
>
> *All* source code versioning systems keep track of who
> made what changes, when.

Yeah, and I'm designing a voting system of virtual credits for working
collaboratively on the project...

YKY




Re: [agi] universal logical form for natural language

2008-09-29 Thread YKY (Yan King Yin)
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED] wrote:

> How much will you focus on natural language?  It sounds like you want
> that to be fairly minimal at first.  My opinion is that chatbot-type
> programs are not such a bad place to start -- if only because it is
> good publicity.

I plan to make use of Steven Reed's Texai -- he's writing a dialog
system that can translate NL to logical form.  If it turns out to be
unfeasible, I can borrow a simple NL interface from somewhere else.

> I am imagining two ways of entering knowledge: (1) people talk to the
> system, and probabilistic/fuzzy knowledge about grammar is invoked to
> extract knowledge; (2) people enter facts and rules directly.  Entering
> stuff directly would be like programming the AI, while talking to it
> would be like teaching it.

Yes, I plan to use both ways.

> One way of going about it would be to let each person create their own
> instance, which would have access to the global body of facts but
> would be somewhat separate.  This would prevent people from
> contaminating the global knowledge base.  Some sort of peer rating
> system could then allow knowledge from the best AIs to merge into the
> global KB.

My idea is exactly this too! =)  But I need some help in building
the knowledge-sharing infrastructure.

YKY




Re: [agi] universal logical form for natural language

2008-09-29 Thread YKY (Yan King Yin)
On Mon, Sep 29, 2008 at 9:38 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

> It seems to me the main limitation is that the language model has to be
> described formally in CycL, as a lexicon and rules for parsing and
> disambiguation.  There seems to be no mechanism for learning natural
> language by example.  For example, if Cyc receives a sentence it cannot
> parse, or is ambiguous, or has a word not in its vocabulary or used in a
> different way, then there is no mechanism to update the model, which is
> something humans easily do.  Given the complexity of English, I think this
> is a serious limitation with no easy solution.

I think building the language model in CycL is actually the right
move.  An AGI should build its language model in a logical form that
is the same as the logical form it reasons with.  That's the only
way the AGI can learn language robustly and perform sophisticated
language-related reasoning, including meta-reasoning.

I can do this in G_0, but in the bootstrap stage I can also use a
simple (brittle) NL interface to save some work.  This simple NL
interface would be jettisoned later, when the AGI learns NL the
ultimate way.

YKY




Re: [agi] universal logical form for natural language

2008-09-29 Thread YKY (Yan King Yin)
On Sun, Sep 28, 2008 at 5:23 PM, David Hart [EMAIL PROTECTED] wrote:

> Actually, it's been my hunch for some time that the richness and importance
> of Helen Keller's sensational environment is frequently grossly
> underestimated.  The sensations of a deaf/blind person still include
> proprioception, vestibular senses, smell, touch, pressure, temperature,
> vibration, etc., easily enough rich sensory information to create an
> internal mental representation of a continuous external reality. ;-)

In the long run, there is no reason why an AGI shouldn't be embodied.
In the short run, though, we may be able to go a long way with limited
embodiment.  Also, perhaps the embodiment or grounding can be added on
later.  Or, maybe we can define a sensory interface for AGIs.

YKY




Re: [agi] universal logical form for natural language

2008-09-29 Thread YKY (Yan King Yin)
On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

> Parsing English sentences into sets of formal-logic relationships is not
> extremely hard given current technology.
>
> But the only feasible way to do it, without making AGI breakthroughs
> first, is to accept that these formal-logic relationships will then embody
> significant ambiguity.

We are talking about 2 things:
1.  Using an ad hoc parser to translate NL to logic
2.  Using an AGI to parse NL

I think I've already formulated how to do #2, and will try to
implement it soon.  But it *still* requires a lot of training (not
surprisingly).

Yes, if we use #1 then we have to deal with ambiguities.  But #2
provides some ready-to-run components right now.

> _poss(life,your)
> _poss(treasure,my)
> _obj(Guard,treasure)
> with(Guard,life)
> _imperative(Guard)

This logical form is somewhat similar to Rus form...  does it have a name?

> I think it can be handled via embodiment, i.e. via having an AI system
> observe the usage of various senses of "with" in various embodied contexts.

I'm afraid the crux is not in embodiment.  It's in abduction  =)

YKY




Re: [agi] universal logical form for natural language

2008-09-29 Thread YKY (Yan King Yin)
On Tue, Sep 30, 2008 at 1:51 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

> My point for YKY was (as you know) not that this is an impossible problem
> but that it's a fairly deep AI problem which is not provided out-of-the-box
> in any existing NLP toolkit.  Solving disambiguation thoroughly is AGI-hard
> ... solving it usefully is not ... but solving it usefully for
> *prepositions* is cutting-edge research going beyond what existing NLP
> frameworks do...

In my approach, human users can teach the AGI's language model.  The
ad hoc NLP component is just a temporary measure...

YKY




Re: [agi] universal logical form for natural language

2008-09-28 Thread YKY (Yan King Yin)
On Sun, Sep 28, 2008 at 1:22 PM, Eric Burton [EMAIL PROTECTED] wrote:
> The purpose of YKY's invocation of Helen Keller is interestingly at
> odds with the usage that appears in the Jargon File.

In choosing Helen Keller mode, I'm not deliberately trying to make
things harder for the baby AGI; it's just a weighing of costs and
benefits.  Building an embodied AGI seems to require a lot of effort,
and I'd rather channel the resources to other aspects.

Also, I may provide a simple vision interface to G_0 that allows users
to draw simple diagrams on a panel when explaining things to it.  That
interface can evolve into a real vision module, when connected to a
web cam.  I have spent ~2 years on vision so that's also on my to-do
list, though not a top priority right now. =)

YKY




[agi] universal logical form for natural language

2008-09-27 Thread YKY (Yan King Yin)
Hi group,

I'm starting an AGI project called G_0, which is focused on commonsense
reasoning (my long-term goal is to become the world's leading expert
in common sense).  I plan to use it to collect commonsense knowledge
and to learn commonsense reasoning rules.

One thing I need is a universal logical form for NL, which means that
every (e.g. English) sentence can be translated into that logical form.

I can host a Wiki to describe the logical form, or we can use
OpenCog's.  I plan to consult all AGI groups including OpenCog,
OpenNARS, OpenCyc, and Texai.

Any opinion on this?

YKY




Re: [agi] universal logical form for natural language

2008-09-27 Thread YKY (Yan King Yin)
On Sun, Sep 28, 2008 at 5:21 AM, David Hart [EMAIL PROTECTED] wrote:
 Hi YKY,

> Can you explain what is meant by "collect commonsense knowledge"?

That means collecting facts and rules.

Example of a commonsense fact:  "apples are red"

Example of a commonsense rule:  "if X is female, X has an above-average
chance of having long hair"
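
(To make that concrete, one possible encoding -- purely illustrative,
not my actual syntax -- with a fuzzy truth value on the fact and a
conditional probability on the rule:

fact = ("red", "apple", 0.7)          # "apples are red", tv = 0.7
rule = {"if":   [("female", "X")],    # "if X is female..."
        "then": ("long_hair", "X"),
        "prob": 0.6}                  # "...above-average chance"

The point is just that both kinds of items are uncertain, not crisp.)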

> Playing the friendly devil's advocate, I'd like to point out that Cyc seems
> to have been spinning its wheels for 20 years, building a nice big database
> of 'commonsense knowledge' but accomplishing no great leaps in AI.  Cyc's
> conundrum is discussed perennially on various lists, with many possible
> explanations posited for Cyc's lackluster performance:  Perhaps its krep is
> too brittle and too reduced?  Perhaps its ungroundedness is its undoing?
> Perhaps there's no coherent cognitive architecture on which to build an
> effective learning & reasoning system?

IMO Cyc's problems are due to:
1.  the lack of a well-developed probabilistic/fuzzy logic (hence the
brittleness)
2.  the emphasis on ontology (plain facts) rather than production rules

(I say "production rules" to distinguish them from "inference rules",
which are meta-logical -- though the former term sounds a bit
outdated.)

> Before people volunteer to work on building yet another commonsense
> knowledge system, perhaps they'll want to know how you plan to avoid the Cyc
> problem?

Well, commonsense reasoning has been my area of interest ever since I
started considering AGI.  I don't have a single killer idea that can
solve the problem, but I plan to use:

1.  an approximate probabilistic fuzzy logic
2.  an architecture especially designed for shallow commonsense
reasoning (as opposed to theorem-proving or expert-system-style
inference)
3.  reasoning algorithms such as abduction, induction, and belief revision
4.  an online community to teach the AGI

Not that these ideas are unique to my AGI...  but there exist subtle
and interesting differences amongst us -- I think it is actually a
good thing to have multiple AGI projects, each with its own
personality and way of thinking.

> Even a brief explanation would be helpful, e.g. the OpenCog Prime design
> plans to address the Cyc problem by learning and reasoning over commonsense
> knowledge that is gained almost entirely by experience (interacting with
> rich environments and human teachers in virtual worlds) rather than by
> attempting to reason over absurdly reduced and brittle bits of hand-encoded
> knowledge.  OCP does not represent commonsense knowledge internally
> (natively) with a distinct crisp logical form (the actual form is a topic of
> the OCP tutorial sessions), although it can be directed to transform its
> internal commonsense knowledge representations into such a form over time
> and with much effort.  It's my hunch however that such transformations are
> of little practical value; inspecting a compact and formal krep output might
> help researchers evaluate what an OCP system has learned, but 'AGI
> intelligence tests' also work to this end and arguably have significant
> advantages over the non-interactive and detached examination of krep dumps.

*My view* is that embodiment is not a critical factor -- and yes, I
already know Ben and Pei's view =)

I think I may be able to short-circuit the learning loop by using
minimal grounding.  The Helen Keller argument =)

YKY




Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Prolog is not fast, it is painfully slow for complex inferences due to using
 backtracking as a control mechanism

 The time-complexity issue that matters for inference engines is
 inference-control ... i.e. dampening the combinatorial explosion (which
 backtracking does not do)

 Time-complexity issues within a single inference step can always be handled
 via mathematical or code optimization, whereas optimizing inference control
 is a deep, deep AI problem...

 So, actually, the main criterion for the AGI-friendliness of an inference
 scheme is whether it lends itself to flexible, adaptive control via

 -- taking long-term, cross-problem inference history into account

 -- learning appropriately from noninferential cognitive mechanisms (e.g.
 attention allocation...)

(I've been busy implementing my AGI in Lisp recently...)

I think optimization of single inference steps and using global
heuristics are both important.

Prolog uses backtracking, but in my system I use all sorts of search
strategies, not to mention abduction and induction.  Also, I'm
currently using general resolution instead of SLD resolution, which is
for Horn clauses only.  But one problem I face is that when I want to
deal with equalities I have to use paramodulation (or some similar
trick).  This makes things more complex, and as you know, I don't like
that!
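
(For readers who haven't seen it: a bare-bones propositional
resolution step looks like the sketch below.  General first-order
resolution is the same rule plus unification of complementary
literals, and paramodulation is the analogous rule for equalities.
Illustrative only.)

# One propositional resolution step.  Clauses are frozensets of
# literals; a literal is a (name, polarity) pair.
def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            out.append((c1 - {(name, pol)}) | (c2 - {(name, not pol)}))
    return out

# From {p, q} and {~q, r} we derive {p, r}:
c1 = frozenset([("p", True), ("q", True)])
c2 = frozenset([("q", False), ("r", True)])
print(resolve(c1, c2))   # [frozenset({('p', True), ('r', True)})]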

I wonder if PLN has a binary-logic subset, or is every TV
probabilistic by default?

If you have a binary logic subset, then how does that subset differ
from classical logic?

People have said many times that resolution is inefficient, but I have
never seen a theorem that says resolution is slower than other
deduction methods such as natural deduction or tableaux.  All such
talk is based on anecdotal impressions.  I don't see why other
deduction methods are that much different from resolution, since their
inference steps correspond to resolution steps very closely.  And if
you can apply heuristics in other deduction methods, you can do the
same with resolution.  All in all, I see no reason why resolution is
inferior.

So I'm wondering if there is some novel way of doing binary logic that
somehow makes inference faster than in classical logic.  And exactly
what is the price to be paid?  What aspects of classical logic are
lost?

YKY




Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 6:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
 I'm in the process of reading this paper:

 http://www.jair.org/papers/paper1410.html

 It might answer a couple of your questions. And, it looks like it has
 an interesting proposal about generating heuristics from the problem
 description. The setting is boolean rather than firs-order. It
 discusses the point about resolution being slow in practice.

First-order theorem proving is very different from propositional; the
techniques do not transfer.  I'd be delighted if you can show me a
paper about a superior algorithm for first-order =)

YKY




Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED] wrote:
 No transfer? This paper suggests otherwise:

 http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf

Well, people know that propositional SAT is fast, so
propositionalization is a tempting heuristic, but as the paper's
abstract states, it only applies to small domains.  AGI is
precisely a large-domain problem!
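
(To see the blow-up concretely: grounding one first-order clause with
k variables over a domain of n constants yields n^k propositional
instances.  A toy illustration, with a made-up clause:

from itertools import product

clause = "friends(X,Y) & smokes(X) => smokes(Y)"   # 2 variables
for n in [10, 100, 1000]:
    ground = list(product(range(n), repeat=2))
    print(f"n={n}: {len(ground)} ground instances")  # n**2 of them

So n = 1000 already gives a million clauses from a single rule, and a
commonsense KB has many more constants and higher-arity clauses.)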

YKY




Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED] wrote:
> No transfer?  This paper suggests otherwise:
>
> http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf

Sorry, I replied too quickly...

This paper does contribute to solving FOL inference problems, but it
is still inadequate for AGI because the FOL is required to be
function-free.  If you remember programming in Prolog, we often use
functors within predicates.  My guess is that commonsense reasoning
would make use of such functors as well.

YKY




Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 9:20 PM, YKY (Yan King Yin) wrote:
> Sorry, I replied too quickly...
>
> This paper does contribute to solving FOL inference problems, but it
> is still inadequate for AGI because the FOL is required to be
> function-free.  If you remember programming in Prolog, we often use
> functors within predicates.  My guess is that commonsense reasoning
> would make use of such functors as well.

Well, even when FOL-with-functions can be converted to function-free
FOL, the blow-up may be too much for a commonsense KB.

YKY




Re: [agi] uncertain logic criteria

2008-09-18 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 4:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:

 Small question... aren't Bayesian network nodes just _conditionally_
 independent: so that set A is only independent from set B when
 d-separated by some set Z? So please clarify, if possible, what kind
 of independence you assume in your model.

Sorry, I made a mistake.  You're right that X and Y can be dependent
even if there is no direct link between them in a Bayesian network.

I am currently trying to develop an approximate algorithm for Bayesian
network inference.  Exact BN inference takes care of dependencies as
specified in the BN, but I suspect that an approximate algorithm may
be faster.  I have not worked out the details of this algorithm yet...
and the talk about independence was misleading.

YKY




Re: [agi] uncertain logic criteria

2008-09-17 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski [EMAIL PROTECTED] wrote:

Speaking of my BPZ-logic...

 2. Good at quick-and-dirty reasoning when needed

Right now I'm focusing on quick-and-dirty *only*.  I wish to make the
logic's speed approach that of Prolog (which is a fast inference
algorithm for binary logic).

 --a. Makes unwarranted independence assumptions

Yes, I think independence should always be assumed unless otherwise
stated -- that is, unless there exists a Bayesian network link between X
and Y.

 --b. Collapses probability distributions down to the most probable
 item when necessary for fast reasoning

Do you mean collapsing to binary values?  Yes, that is done in BPZ-logic.

 --c. Uses the maximum entropy distribution when it doesn't have time
 to calculate the true distribution

Not done yet.  I'm not familiar with max-ent.  Will study that later.

 --d. Learns simple conditional models (like 1st-order markov models)
 for use later when full models are too complicated to quickly use

I focus on learning 1st-order Bayesian networks.  I think we should
start with learning 1st-order Bayesian / Markov.  I will explore
mixing Markov and Bayesian when I have time...

 3. Capable of repairing initial conclusions based on the bad models
 through further reasoning

 --a. Should have a good way of representing the special sort of
 uncertainty that results from the methods above

Yes, this can be done via meta-reasoning, which I'm currently working on.

 --b. Should have a repair algorithm based on that higher-order uncertainty

Once it is represented at the meta-level, you may do that.  But
higher-order uncertain reasoning is not high on my priority list...

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-14 Thread YKY (Yan King Yin)
BTW, if any AGI projects would like to incorporate my ideas, feel free
to do so, and I'd like to get involved too!

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread YKY (Yan King Yin)
A somewhat revised version of my paper is at:
http://www.geocities.com/genericai/AGI-ch4-logic-9Sep2008.pdf
(sorry it is now a book chapter and the bookmarks are lost when extracting)

On Tue, Sep 2, 2008 at 7:05 PM, Pei Wang [EMAIL PROTECTED] wrote:

   I intend to use NARS confidence in a way compatible with
 probability...

 I'm pretty sure it won't, as I argued in several publications, such as
 http://nars.wang.googlepages.com/wang.confidence.pdf and the book.

I understood your argument about defining the confidence c, and agree
with it.  But I don't see why c cannot be used together with f (as
*traditional* probability).

 In summary, I don't think it is a good idea to mix B, P, and Z. As Ben
 said, the key is semantics, that is, what is measured by your truth
 values. I prefer a unified treatment than a hybrid, because the former
 is semantically consistent, while the later isn't.

My logic actually does *not* mix B, P, and Z.  They are kept
orthogonal, and so the semantics can be very simple.  Your approach
mixes fuzziness with probability, which can result in ambiguity in some
everyday examples:  eg, "John tries to find a 0.9 pretty girl" (degree)
vs "Mary is 0.9 likely to be pretty" (probability).  The difference is
real but subtle, and I agree that you can mix them, but you must
always acknowledge that the measure is mixed.

Maybe you've mistaken what I'm trying to do, 'cause my theory should
not be semantically confusing...

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread YKY (Yan King Yin)
On Tue, Sep 2, 2008 at 12:05 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 but in a PLN approach this could be avoided by looking at

 IntensionalInheritance B A

 rather than extensional inheritance..

The question is how do you know when to apply the intensional
inheritance, instead of the extensional one.

It seems to me that using the probabilistic interpretation of
fuzziness would force you to use sum-product calculus...

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-08 Thread YKY (Yan King Yin)
On Tue, Sep 9, 2008 at 4:27 AM, Pei Wang [EMAIL PROTECTED] wrote:
 Sorry I don't have the time to type a detailed reply, but for your
 second point, see the example in
 http://www.cogsci.indiana.edu/pub/wang.fuzziness.ps , page 9, 4th
 paragraph:

 If these two types of uncertainty [randomness and fuzziness] are
 different, why bother to treat them in an uniform way?
 The basic reason is: in many practical problems, they are involved
 with each other. Smets stressed
 the importance of this issue, and provided some examples, in which
 randomness and fuzziness are
 encountered in the same sentence ([20]). It is also true for
 inferences. Let's take medical diagnosis
 as an example. When a doctor want to determine whether a patient A is
 suffering from disease D,
 (at least) two types of information need to be taken into account: (1)
 whether A has D's symptoms,
 and (2) whether D is a common illness. Here (1) is evaluated by
 comparing A's symptoms with D's
 typical symptoms, so the result is usually fuzzy, and (2) is
 determined by previous statistics. After
 the total certainty of A is suffering from D is evaluated, it should
 be combined with the certainty
 of  T is a proper treatment to D (which is usually a statistic
 statement, too) to get the doctor's
 degree of belief for T should be applied to A. In such a situation
 (which is the usual case,
 rather than an exception), even if randomness and fuzziness can be
 distinguished in the premises,
 they are mixed in the middle and  final conclusions.


Thanks, that's a good point that I haven't thought of.

For example:
    "I have a _slight_ knee pain"  (fuzzy, z = 0.6)
    knee pain -> rheumatoid arthritis  (p = 0.3)  (excuse me for making up numbers)
Then my system would convert
    knee pain (z = 0.6)  to  knee pain = true  (binary)
and conclude
    rheumatoid arthritis (p = 0.3)

So there is some loss of information, but I feel this is OK.  Many
commonsense reasoning steps are lossy.  We're not trying to build
doctors here.  A commonsense AGI can control a medical expert system
to achieve professional levels.

The point is, I can always keep P and Z orthogonal.
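
To make the lossy conversion concrete, here is a minimal sketch in
Python (the threshold, names and numbers are illustrative assumptions,
not the actual BPZ implementation):

def z_to_binary(z, threshold=0.5):
    # Collapse a fuzzy degree z in [0,1] to a binary truth value.
    return z >= threshold

knee_pain_z = 0.6              # "I have a _slight_ knee pain"
p_arthritis_given_pain = 0.3   # knee pain -> rheumatoid arthritis

if z_to_binary(knee_pain_z):
    # The fuzzy degree is discarded; only the probabilistic rule fires.
    print("rheumatoid arthritis, p =", p_arthritis_given_pain)   # p = 0.3

The degree 0.6 is simply thrown away at the threshold step, which is
exactly where the loss of information happens.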

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-02 Thread YKY (Yan King Yin)
On 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 About indefinite/imprecise probabilities, you dismiss them as 
 overcomplicated, but you don't address the reason they were introduced in the 
 first place: In essence, to allow a rationally manipulable NARS-like 
 confidence measure that works nicely w/ probabilistic inference and has a 
 probabilistic justification.


NARS confidence is not exactly derived from probability, but is
compatible with probability.  If you have a better measure of
confidence, that's of course a good thing, but I don't see why 2nd
order P is needed for doing this.


 What is the semantics of your confidence values?  Pei's confidence values 
 have a clear nonprobabilistic semantics, and PLN's have a clear semantics in 
 terms of indefinite probabilities.  If you mix NARS confidences with 
 probabilistic inferences you have something semantically confused...


Let me have some thinking on it...  right now it seems to me that NARS
confidence and probability are orthogonal...


 Also, in your example

 A = Jim had sex with 1000 women

 B = Jim had cybersex with 1000 women

 A1 = Jim had sex with woman_1

 B1 = Jim had cybersex with woman_1

 you seem to assume that in a probabilistic approach

 (Inheritance B A).strength =

 ( (Inheritance B1 A1).strength)^1000


The implication (or inheritance in your parlance)
Jim has cybersex -> Jim has sex
is NOT regarded as probabilistic in my view, but it's fuzzy.

If you view that inheritance as probabilistic, there *may* be some
semantic problems, but such problems may be gotten around by clever
tricks -- though I don't know how.


 but in a PLN approach this could be avoided by looking at

 IntensionalInheritance B A

 rather than extensional inheritance..

 Note that PLN's intensional inheritance incorporates some fuzziness
 but within a probabilistic framework... we have

 IntensionalInheritance B A =

 ExtensionalInheritance ( Prop(B) Prop(A) )

 where Prop(X) is the fuzzy set of properties of X (defined in a specific
 way within PLN)


Yes, you seem to have solved the problem in an alternative way.
Though I'm still unclear about your definition of extensional vs
intensional.  It seems analogous to the extensive/intensive distinction
in thermal physics.  We have at least 4 definitions of the pair:  from
ILP, NARS, Judea Pearl, and Ben G. =)


 So fuzzy logic per se is not the only way to handle situations like this
 in a semantically natural, commonsensical way.. there are many ways
 consistent w probability theory I'm sure, including the PLN way...


That's true.  I concur with that.  But... we need to consider computational
constraints... there may be a need to prefer simpler logics...


 All in all, my sense is that you want to graft together some standard stuff
 with relatively minor tweaks ... and I am left wondering why you think that 
 such
 a relatively straightforward integration of small variations on well-known 
 stuff is
 going to give results more exciting than those already in the literature  
 Maybe
 there is a good reason, but it didn't seem to be made clear in the paper...


Some things are new in my logic:  fuzzy logic used to be rather
difficult to apply because the inference algorithms are complex.
Also, I kind of touched up fuzzy theory to make it semantically more
precise (?)

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-02 Thread YKY (Yan King Yin)
On 9/2/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 NARS confidence is not exactly derived from probability, but is
 compatible with probability.

Sorry, I meant, the definition of NARS confidence is compatible with
probability, but NARS confidence as used in NARS, defies probability
laws.  I intend to use NARS confidence in a way compatible with
probability... not sure if it'll work out as intended...

YKY




Re: [agi] draft paper: a hybrid logic for AGI

2008-09-01 Thread YKY (Yan King Yin)
On 9/1/08, Benjamin Johnston [EMAIL PROTECTED] wrote:

Thanks for your comments =)

 --
 1. Why just P,Z and B?
 Three mechanisms seems somewhat arbitrary - I think you need to make a very
 compelling case for why there are three and only three mechanisms.

 Or, more interestingly, I wonder if you could generalize the reasoning
 framework so that every statement has a potentially unlimited number of
 measures:

This has been proposed before, and the problem, as pointed out by
[Simon Parsons 2001], is that we would need to specify an exponential
number of ways in which the different TV types interact / interconvert.

Also, if we want to create a new TV type, such as roughness (as in
rough sets), we may be able to build it on top of a base logic such as
BPZ.  In fact, I suspect that the base logic in the human brain does
not include probabilistic logic, which is why many lay people fail at
Bayesian reasoning tasks;  it also explains why human Bayesian
experts can master this art even though their brains do not have
hard-wired probabilistic logic, either.

So it seems to be a balance between
1.  value-added
2.  simplicity (and thus computational efficiency)
3.  coverage of uncertain phenomena

 --
 2. Why must the 12 combinations on page 19 be analysed separately?
 Could these 12 cases be abstracted into some more general principle? Is
 there some way of rethinking the reasoning mechanisms so that the 12 cases
 don't seem to include arbitrary choices (e.g., Convert the B variable to a
 Z variable - but this is disallowed. We may convert the B variable
 to a P(Z) variable and then invoke Case #8.).

Yes, the 12 cases can be abstracted into 1 general mechanism, where
everything is P(Z), ie P distributed over Z.  But this logic seems
to be slower than B, P, Z handled separately.  I will try to elaborate
on this in the next version.

 --
 3. How would you deal with context?
 Something like the water is hot can mean different things depending on
 whether you're talking about: making a cup of coffee, washing clothes, a
 bathtub, a competitive swimming pool, or a glass of cool water without ice
 on a hot day. Modifiers like very and extremely are similarly context
 dependent, and gender, in particular, can have vastly different meanings
 depending on context (even though we as humans are able to immediately
 recognize what context is meant in most cases): social gender, genetic
 gender, physical gender, gender identity, sexual gender, ...

I have some vague ideas of how to deal with contexts (as a multitude
of rules competing for the highest confidence).  I will try to
formulate that better.

 --
 You present a taxonomy of ignorance (Figure 1) and assume it is
 self-evident from the taxonomy that P and Z are sufficient. I certainly do
 not find it to be self-evident: I don't see how the diagram supports your
 argument.

Right, it was just a sleight-of-hand =)

I guess the only argument I can make is to show that a lot of examples
in everyday uncertainty can be reduced to BPZ.  That's rather time
consuming though.

 Even though you apologize for not being politically correct, I still think
 you should find better examples that do not have the potential to offend
 readers. Rather than saying hermaphrodites are degree 0.6 typical human(?),
 I'm sure you could find a simple and obvious example that addresses human
 emotions that is less insensitive.

I just chose the first example that comes to mind.  Let's see if I can
find better ones... but this is also very time-consuming... =(

 It doesn't look very professional when you cite Wikipedia.

Well, the whole paper is more like a tutorial, with a conversational
style.  If I submit to AGI'09 I'd revise it and make it more like an
academic journal paper. =)

YKY




Re: [agi] hello

2008-08-13 Thread YKY (Yan King Yin)
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:

 Reading this, I get the view of ai as basically neural networks, where
each individual perceptron could be any of a number of algorithms
(decision tree, random forest, svm etc).
 I also get the view that academics such as Hinton are trying to find ways
of automatically learning the network, whereas there could also be a
parallel track of engineering the network, manually creating it perceptron
by percetron, in the way Rodney Brooks advocates bottom up subsumption
architecture.

 How does opencog relate to the above viewpoint. Is there something
fundamentally flawed in the above as an approach to achieving agi.

NN *may* be inadequate for AGI, because logic-based learning seems to be, at
least for some datasets, more efficient than NN learning (that includes
variants such as SVMs).  This has been my intuition for some time, and
recently I've found a book that explores this issue in more detail.  See
Chris Thornton 2000, Truth from Trash -- How Learning Makes Sense, MIT
Press, or some of his papers on his web site.

To use Thornton's example, he demonstrated that a checkerboard pattern can
be learned using logic easily, but it will drive an NN learner crazy.
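
Here is a small sketch of that point (my reconstruction of the flavor
of Thornton's example, not his code): the checkerboard labelling has a
one-line logical definition, yet it forms an XOR-like pattern that a
single linear unit cannot represent.

def checkerboard(x, y):
    # A purely "logical" rule: parity of the integer cell coordinates.
    return (int(x) + int(y)) % 2 == 0

points = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5), (1.5, 1.5)]
print([checkerboard(x, y) for x, y in points])
# [True, False, False, True] -- trivial as a symbolic rule, but not
# linearly separable for a perceptron-style learner.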

It doesn't mean that the NN approach is hopeless, but it faces some
challenges.  Or, maybe this intuition is wrong (ie, do such heavily
logical datasets occur in real life?)

YKY





Re: [agi] hello

2008-08-13 Thread YKY (Yan King Yin)
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:
 Thanks for replying YKY
 Is the logic learning you are talking about inductive logic programming.
If so, isn't ilp basically a search through the space of logic programs (i
may be way off the mark here!), wouldn't it be too large of a search space
 to explore if you're trying to reach agi.
**
Yes, and I guess the search space would be huge no matter what kind of
learning substrate we use.  At least one redeeming trick (for symbolic AI)
is that we can limit the depth of the search of programs, and my intuition
is that commonsense reasoning is mostly shallow (ie, involving few
inference steps).

 And if you're determined to learn a symbolic representation, wouldn't
genetic programming be a better choice, since it won't get stuck in local
minima.
*
It is possible to use GA to search the ILP space;  there is research in that
area.  I may use that too.

One interesting question is to compare ILP search in the space of logic
programs vs genetic programming (ie search in program spaces such as Lisp or
combinator logic or lambda calculus).  Unfortunately I'm unfamiliar with the
latter, so I need some time to study that.

 Would neural networks be better in that case because they have the
mechanisms as in Geoff Hinton's paper that improve on random searching.

**
This is just the age-old debate of symbolic AI vs connectionism, given a new
twist in the context of machine learning.  Note that that first debate was
never really settled.  So, my bet is that we need NN-style learning at the
low levels, and symbolic-style learning at the high levels.  I tend to focus
on the symbolic side.  I'm very skeptical whether NN learning can solve
high-level symbolic problems.

 Also, if you did manage to learn a giant logic program that represented
ai, could it be easily parallelized the way a neural network can be (so that
it can run in real time).


Yes, logical inference can be parallelized.  I have a book about it, but I
haven't bothered to study that -- design first, optimize later.

YKY





Re: [agi] PLN and Bayes net comparison

2008-08-13 Thread YKY (Yan King Yin)
On 8/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 But if one doesn't need to get into implementation details, in the
simplest case one just has

 VariableScopeLink X
     ImplicationLink
         ANDLink
             InheritanceLink X male
             InheritanceLink X Unmarried
         InheritanceLink X bachelor


 where X is a Variable Node ...

 Then the Unification Rule combines this with the links

 InheritanceLink John male
 InheritanceLink John unmarried

 and the conclusion comes out from one step of the
 unification rule

 InheritanceLink John bachelor


 But I'm not sure that's what you're asking ... and note that the
 intension'extension distinction is getting swept behind the scenes here,
 into the definition of InheritanceLink...

Thanks, I guess I'll wait for the better documentation.

I'm just wondering:  how could your inference be efficient, if the
mechanism is so complicated?  You know, optimization is very difficult
even for simpler logic-based systems such as Cyc and Soar.  Soar has a
special algorithm called Rete that specifically deals with rule-fetching.

One option for us, is to merge my PZ (probabilistic-fuzzy) logic with your
PLN, on the declarative side.  Then I can help design the ILP on that side.
I'll send you my paper ASAP...

BTW, is your definition of intension-extension the same as Pei Wang's?  Or
Judea Pearl's?

YKY





[agi] PLN and Bayes net comparison

2008-08-12 Thread YKY (Yan King Yin)
Hi Ben,

Hope you don't mind providing more clarification...

In first-order logic there may be a rule such as:
male(X) ^ unmarried(X) -> bachelor(X)

We can convert this to a probabilistic rule:
P(bachelor(X) = true | male(X) = true, unmarried(X) = true ) = 1.0
but note that this rule contains the variable X.

In my architecture, if I encounter a query such as:
is John a bachelor?

I would have to construct a propositional (ie, 0th-order) Bayes net to
answer that query.  During the construction, I would instantiate the
rule with the variable substitution { X / John }.

In other words, in my architecture the KB is a collection of logical
formulae that do *not* form a network.  Bayes nets are constructed
_on-the-fly_ to answer specific queries.
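
Here is a minimal sketch of this on-the-fly grounding (hypothetical
predicate names; the real system would assemble a propositional Bayes
net rather than propagate a single number):

rule = {"body": ["male", "unmarried"], "head": "bachelor", "p": 1.0}
facts = {("male", "John"), ("unmarried", "John")}

def query(pred, entity):
    # Instantiate the rule with the substitution { X / entity } and
    # check that every ground premise is a known fact.
    if rule["head"] == pred and all((b, entity) in facts for b in rule["body"]):
        return rule["p"]
    return None   # no grounded rule supports the query

print(query("bachelor", "John"))   # 1.0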

I'm not sure if PLN follows a similar arrangement.  How can you have
everything represented as networks when some rules with variables can
be instantiated as many instances?

YKY




Re: [agi] PLN and Bayes net comparison

2008-08-12 Thread YKY (Yan King Yin)
On 8/12/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 construct 1 =

 ImplicationLink
     ANDLink
         PredicateNode isMale
         PredicateNode isUnmarried
     PredicateNode isBachelor

 It's just a relationship between functions (predicates being mathematical
functions from entities to truth values)

 Or, with different semantics, we could say

 construct 2 =

 ForAllLink
     VariableNode X_1
     ImplicationLink
         ANDLink
             EvaluationLink (PredicateNode isMale) X_1
             EvaluationLink (PredicateNode isUnmarried) X_1
         EvaluationLink (PredicateNode isBachelor) X_1

I'm interested in how the the rules are fetched from memory, and how the
variables get instantiated, etc...

How would you represent the given facts:
   John is male
John is unmarried
and then perform the inference to get
John is a bachelor?

Sorry if this sounds too simple, but I can't get enough documentation as to
how PLN works.

YKY





Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-07 Thread YKY (Yan King Yin)
Ben,  BTW,  you may try inviting Stephen Muggleton to AGI'09.  He
actually talked to me a few times even though I knew very little
about ILP at that time.  According to the Wikipedia page he is
currently working on an 'artificial scientist'.
http://en.wikipedia.org/wiki/Stephen_Muggleton

YKY




Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-05 Thread YKY (Yan King Yin)
On 8/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Yes, but in PLN/ OpenCogPrime backward chaining *can* create hypothetical
logical relationships and then seek to estimate their truth values

 See this page

 http://opencog.org/wiki/OpenCogPrime:IntegrativeInference

 and the five pages linked to from it (at the top)
It requires effort to read your stuff, and I'm not fully committed to working
on OCP yet ;)   But let's talk generally...

Can you create hypotheses that contain variables?  If yes, what you're doing
is essentially ILP.  If not, then your version is a kind of propositional
learning, like ID3, and is inadequate for AGI.

 The purpose of the scoring function is precisely to attempt to manage this
combinatorial explosion.

Let's try to touch base here:  We need to search through a tree with a very
high branching factor, because there are so many ways to generate hypotheses
in FOL, with so many predicates to choose from, and the possibility of
introducing variables as desired.  So there are a huge number of nodes in the
search tree even if we limit the depth;  otherwise it's simply infinite.

The scoring function gives a score to each node (my understanding is that the
score is based on how many examples that tree branch can *explain*).  So
the score, per se, does not reduce the size of the tree.

The only cool idea that can get us out of this quandary is, as you
suggested, by using prior experience as an inductive bias during the
search.  (Yes, maybe you can incorporate this bias into the scoring
function.)

Even so, we need a refinement operator that can help us generate the
nodes in the search tree.  This operator has been defined for ILP under FOL,
but if your logic is messy, it may be very difficult to figure out how to
search the tree systematically.  That's why I've so strongly advocated for a
neat KR!
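
For what it's worth, here is a toy refinement operator in that sense --
specializing a clause by adding one body literal -- with a made-up
vocabulary (a sketch, not any specific ILP system):

LITERALS = ["male(X)", "unmarried(X)", "adult(X)"]

def refine(body):
    # Yield all one-step specializations of a clause body.
    for lit in LITERALS:
        if lit not in body:
            yield body + [lit]

for child in refine(["male(X)"]):
    print("bachelor(X) :- " + ", ".join(child))
# bachelor(X) :- male(X), unmarried(X)
# bachelor(X) :- male(X), adult(X)

With a messy KR, even defining such an operator cleanly -- let alone
searching with it systematically -- becomes hard.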

YKY





Re: [agi] a fuzzy reasoning problem

2008-08-05 Thread YKY (Yan King Yin)
On 7/31/08, Mark Waser [EMAIL PROTECTED] wrote:

 Categorization depends upon context.  This was pretty much decided by the
late 1980s (look up Fuzzy Concepts).

This is an important point so I don't want to miss it.  But I can't think of
a very good example of context-dependence of concepts.

Some books have these examples:

1.  Chess is a sport that is a game (the book claims that people make this
judgement).  But chess is not a sport.

2.  Tree houses are in the category of dwellings that are not buildings.
But people also think tree houses are buildings.  (Again, this example seems
somewhat awkward to me).

3.  All chairs are furniture.  A seat in a car is a chair but people would
not call a car seat furniture.  So, it seems to be a violation of
transitivity.

Can anyone give better examples of context-dependence?

YKY





Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-05 Thread YKY (Yan King Yin)
On 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:
 As I understand it, FOL is only Turing complete when
 predicates/relations/functions beyond the ones in the data are
 allowed. Would PLN naturally invent predicates, or would it need to be
 told to specifically? Is this what concept creation does? More
 concretely: if I gave PLN a series of data, and asked it to guess what
 the next item in the series would be, what sort of process would it
 employ?

Prolog (and logic programming) is Turing complete, but FOL is not a
programming language so I'm not sure.

Predicate invention is an aspect of ILP that has received research
attention, and it seems important for AGI.

I guess concept creation involves inventing a predicate, and then
finding its definition via logical rules.

The problem of  ILP is:  given a set of examples, define a concept
(with logical rules) that covers all positive examples and excludes
all negative examples.  (Or approximately so.)

After that, you can use those concepts / rules to make predictions.

To make predictions on a series, one may use ILP to induce rules that
relate S_n+1 to S_n, or some such.  This may be different from the
predictive learning algorithm you have in mind.  I'm not so familiar
with other machine learning paradigms...

Also, note that the KR I'm using is actually called first-order
Bayesian networks.  I use FOL in discussions because it's easier to
understand.  If I say first-order Bayes net more people would be
scratching their heads.

YKY




Re: [agi] a fuzzy reasoning problem

2008-08-05 Thread YKY (Yan King Yin)
On 8/5/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Jeez, there is NO concept that is not dependent on context. There is NO
concept that is not infinitely fuzzy and open-ended in itself, period -
which is the principal reason why language is and has to be grounded
(although that needs demonstration).

I see...

My current approach is to use fuzzy rules to model these concepts.  In some
cases it seems to work but in other cases it seems problematic...

For example I can give a definition of the concept chair:

chair(X) :-
    X has leg #1,
    X has leg #2,
    X has leg #3,
    X has leg #4,
    X has a horizontal seat area,
    X has a vertical back area,
    leg #1 is connected to seat at position #1,
    etc,
    etc.

But what if a chair has one leg missing?  Using fuzzy logic (fuzzy AND), the
missing leg will result in a fuzzy value close to 0, which is not quite
right.

Probabilistic logic is also inappropriate.  A chair missing a leg is,
*every* time, still somewhat a chair -- there is no probability involved
here.
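
A tiny sketch of the mismatch (feature names and degrees are made up):

features = {
    "leg1": 1.0, "leg2": 1.0, "leg3": 1.0,
    "leg4": 0.0,   # the missing leg
    "seat": 1.0, "back": 1.0,
}

print(min(features.values()))   # 0.0 -- fuzzy AND says "not a chair at all"

# A softer combiner behaves more like "somewhat a chair":
print(round(sum(features.values()) / len(features), 2))   # 0.83

The averaging line is only one possible fix; the point above is just
that min-style fuzzy AND gives the wrong answer here.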

YKY





Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-05 Thread YKY (Yan King Yin)
On 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:

  Prolog (and logic programming) is Turing complete, but FOL is not a
  programming language so I'm not sure.

 You are right, I should have said FOL is turing complete within the
 right inference system [such as Prolog], but only when
 predicates/relations/functions beyond the ones in the data are
 allowed.

Well... is Turing completeness really that important?  I mean, most
non-trivial KR schemes should be Turing complete.

Also, one can splice new predicates into old rules, thus eliminating
them.  So strictly speaking, new predicates do not add new content.
But they may make machine learning easier heuristically (just
speculation).  Oh wait... the exception is that new predicates are
necessary for defining recursive predicates.  So yes, predicate
invention is necessary if the resulting logic program needs to contain
recursive predicates.

As for functions... ILP is complex enough with function-free FOL
already.  Researchers usually use a flattening technique to
eliminate functions.  So I don't know much about function creation.

 So, you are not trying to create your own new probabilistic logic, you
 are just trying to develop 1st-order bayesian networks further?

Yeah, I'm trying to distribute probabilities over fuzziness, with
first-order Bayes nets.  This is already quite complex.  And even this
seems unable to model some subtleties of commonsense concepts, so I
need to think about it more.

YKY




Re: [agi] Groundless (AND fuzzy) reasoning - in one

2008-08-05 Thread YKY (Yan King Yin)
On 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
 There is one common feature to all chairs: They are for the purpose of
 sitting on. I think it is important that this is *not* a visual
 characteristic.

It is possible to recognize chairs that cannot be sat on -- for
example, a broken chair, a miniature chair, a toy chair, a paper
chair, a chair with a long sharp spike on the seat, etc. =)

YKY




Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-05 Thread YKY (Yan King Yin)
On 8/6/08, Jim Bromer [EMAIL PROTECTED] wrote:

 You made some remarks, (I did not keep a record of them), that sounds
 similar to some of the problems of conceptual complexity (or
 complicatedness) that I am interested in.  Can you describe something
 of what you are working on in a little more detail in a way that
 should make it easy for me to understand?  To start with, what does
 distribute probabilities over fuzziness, mean exactly.  Are you
 trying to use first order Bayes nets to examine different distribution
 patterns?  Does first order Bayes nets refer to something similar to
 the inductive logical probability that you titled this thread after?

I'm writing a paper about my probabilistic-fuzzy logic that should be
fairly easy to understand.  But I got stuck on the fuzzy concept
problem as you can see.

To distribute probabilities over fuzziness means:  each concept has
a fuzzy value, Z, in [0,1].  For example, the chairness of a certain
chair may be 0.7.

I can add probabilities on top of this:  the mean of Z would be at
0.7, with a bell-curve kind of distribution.  So I use 2 numbers, the
mean and the variance.  That allows me to approximate an interval Z
value such as [0.6,0.8].

Probabilistic ILP can be performed on this KR structure (analogous to
ILP on FOL).
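
A minimal sketch of such a truth value (the interval rule below --
mean plus or minus one standard deviation, clipped to [0,1] -- is an
illustrative assumption, not necessarily the exact scheme):

from math import sqrt

class PZValue:
    def __init__(self, mean, variance):
        self.mean = mean           # expected fuzzy degree Z, in [0,1]
        self.variance = variance   # spread of the distribution over Z

    def interval(self, k=1.0):
        # Approximate an interval Z value as mean +/- k standard deviations.
        sd = sqrt(self.variance)
        return (max(0.0, self.mean - k * sd), min(1.0, self.mean + k * sd))

chairness = PZValue(mean=0.7, variance=0.01)   # sd = 0.1
print(chairness.interval())   # approximately (0.6, 0.8)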

YKY




Re: [agi] a fuzzy reasoning problem

2008-07-29 Thread YKY (Yan King Yin)
On 7/29/08, Benjamin Johnston [EMAIL PROTECTED] wrote:

 I see the failure in this argument at step 2. Cybersex is a kind of erotic
 interaction. Erotic interactions are often called sex in general
 conversation, even though there are many kinds of erotic interactions that
 don't result in the transmission of STDs, that can't lead to children, that
 don't involve the exchange of bodily fluids, and so on.

Saying that
cybersex is a kind of sex
is similar to saying
phone sex is a kind of sex
oral sex is a kind of sex
anal sex is a kind of sex
group sex is a kind of sex
or
penguin is a kind of bird

It seems pretty uncontroversial...

It's true that cybersex is a kind of sex, albeit that it has certain
exceptional characteristics.  Just like a penguin is a bird with
special properties.

So I think the trouble is in step 4.

Cheers =)
YKY




[agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
Here is an example of a problematic inference:

1.  Mary has cybersex with many different partners
2.  Cybersex is a kind of sex
3.  Therefore, Mary has many sex partners
4.  Having many sex partners -> high chance of getting STDs
5.  Therefore, Mary has a high chance of STDs

What's wrong with this argument?  It seems that a general rule is
involved in step 4, and that rule can be refined with some
qualifications (ie, it does not apply to all kinds of sex).  But the
question is, how can an AGI detect that an exception to a general rule
has occurred?

Or, do we need to explicitly state the exceptions to every rule?

Thanks for any comments!
YKY




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
On 7/28/08, Pei Wang [EMAIL PROTECTED] wrote:

 Every rule is general to a degree, which means it ignores
 exceptions. It is simply impossible to list all exceptions for any
 given rule. This issue has been discussed by many people in the
 non-monotonic logic community.

 The solution is not to exclude exceptions, but to give the more confident
 conclusion higher priority whenever a conflict happens.


Thanks Pei,  I think your solution is the most sensible so far.  But
it depends on searching the KB to find the proof with the highest
cumulative confidence -- in general this is infeasible, so sometimes we
may need to use less-confident short-cuts.  But that's still
acceptable.

YKY




Re: [agi] a fuzzy reasoning problem.. P.S.

2008-07-28 Thread YKY (Yan King Yin)
On 7/28/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Mary says Clinton had sex with her.
 Clinton says he wouldn't call that sex.

LOL...

But your examples are still symbolic in nature.  I don't see why they
can't be reasoned via logic.

In the above example the concept sex may be a fuzzy concept.  So
certain forms of sex may be construed as 0.75 sex or something like
that.

YKY




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Your inference trajectory assumes that cybersex and STD are 
 probabilistically independent within sex but this is not the case.

We only know that:
   P(sex | cybersex) = high
   P(STD | sex) = high

If we're also given that
   P(STD | cybersex) = 0
then the question is moot -- it is already answered.

It is a problem because we're not given the 3rd piece of information...
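
The failure in miniature (numbers made up, and the real term-logic
deduction rule is more involved than a bare product):

p_sex_given_cyber = 0.9   # P(sex | cybersex) = high
p_std_given_sex   = 0.7   # P(STD | sex) = high

# Chaining under an independence assumption yields roughly:
print(round(p_sex_given_cyber * p_std_given_sex, 2))   # 0.63 -- "high"

# ...whereas the missing third piece, P(STD | cybersex) = 0, would
# override the chain entirely if it were known.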

 PLN would make this error using the independence-assumption-based term logic 
 deduction rule; but in practice this rule is supposed to be overridden in 
 cases of known dependencies.


Why doesn't PLN use Pei-Wang-style confidence?

YKY




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 PLN uses confidence values within its truth values, with a different 
 underlying semantics and math than NARS; but that doesn't help much with the 
 above problem...

 There is a confidence-penalty used in PLN whenever an independence assumption 
 is invoked, but it's not that severe of a penalty -- and nor should it be.  
 When additional evidence is not available, making an independence assumption 
 is appropriate, even though sometimes it will turn out to be wrong.

Even if you assume independence, you'll have 2 distinct paths leading
to contradicting conclusions.  So you need some way to pick a winner.

I think Pei Wang's definition of confidence is good and can solve this
example.  I'll check out your book when it's out =)

YKY


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] new version of NARS

2008-07-28 Thread YKY (Yan King Yin)
On 7/28/08, Pei Wang [EMAIL PROTECTED] wrote:

 A new version of NARS (Open-NARS 1.1.0)...

I'm writing a paper on a probabilistic-fuzzy logic that is suitable
for AGI.  It uses some of your ideas.  I will put it on the net when
it's finished...

YKY




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:

 There's nothing wrong with the logical argument.  What's wrong is that you
 are presuming a purely declarative logic approach can work...which it can in
 extremely simple situations, where you can specify all necessary facts.

 My belief about this is that the proper solution is to have a model of the
 world, and how interactions happen in it separate from the logical
 statements.  The logical statements are then seen as focusing techniques.
 [ ... ]

The key word here is model.  If you can reason with mental models,
then of course you can resolve a lot of paradoxes in logic.  This
boils down to:  how can you represent mental models?  And they seem to
boil down further to logical statements themselves.  In other words,
we can use logic to represent rich mental models.

YKY




Re: [agi] a fuzzy reasoning problem.. P.S.

2008-07-28 Thread YKY (Yan King Yin)
On 7/29/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Why isn't science done via logic? Why don't physicists, chemists,
 biologists, psychologists and sociologists just use logic to find out about
 the world?  Do you see why?And bear in mind that scientists are only formal
 representatives of every human being - IOW we all reason like scientists as
 individuals, however crudely, if we want to find out the truth about events
 in the world -  what happened with Mary  Bill, or why the car broke down.

 Try suggesting that any scientist just use logic - or follow the reasoning
 principles of your AGI. It would be laughable.

Many philosophers do use formal logic.  Scientists work at a level
beyond common sense, so they don't need to tease out the underlying
structure of common sense.  But philosophers do.  And they use logic.

 The reason is: all the symbols you use refer to real world objects, and the
 only definitive way to find out their truth is by looking at their real
 objects not just the symbols  - the evidence  - as science does.

You said ... looking at their real objects not just the symbols...

The knowledge representation *represents* the external world.  You can
*never* have a KR that corresponds perfectly with the external world.
Symbols, in a broad sense, are necessary to AGI.

 There are then various secondhand ways - getting other people's
 opinions/reports, looking at scientific data etc etc - but the only way to
 assess the reliability of those is by comparing their success with respect
 to other real world objects.  You can't as you guys seem to - (correct me) -
 quite arbitrarily -* programmer ex machina * - assign degrees of
 confidence/certainty to information  - 0.75 sex.

This is a bit complicated.  I'll explain it in my up-coming paper =)

 What is needed here - for any true General Intelligence -  is a whole new
 branch of metacognition to supplement logic that will set out the main
 principles by which we actually reason about the world most of the time.
 Logic  is a v. limited form of reasoning and metacognition. It alone cannot
 and never wll refer to reality. What Russell said of maths applies equally
 to logic (and he was even better than you guys at both) :

I think you're just taking the concept of logic to an extreme.  In
my usage of logic, it is just a computational formalism.  I don't
even mention a correspondence between logic and big-T Truth.

YKY




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:

 This is true, but the logic statements of the model are rather different
 than simple assertions, much more like complex statements specifying
 proportional relationships and causal links.  I envision the causal links
 as being at statements about the physical layer.  And everything covered
 with amount of belief .  The model would also need to include mechanisms
 believed to be in operation. (E.g., fire is caused by the phlogiston
 escaping from the wood).  And mechanisms would need to be in place for
 correcting and updating the model.  Etc.

FINALLY, this provides an angle where you guys can understand what I
mean by logic =)

I use logic as a computational structure to build mental models, eg,
by stating relations among entities.  Some of the logic statements may
appear to be similar to everyday, conversational statements.  But the
actual KR can be *much* richer than conversational statements.

YKY




Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread YKY (Yan King Yin)
On 7/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
 YKY: The key word here is model.  If you can reason with mental models,
  then of course you can resolve a lot of paradoxes in logic.  This
  boils down to:  how can you represent mental models?  And they seem to
  boil down further to logical statements themselves.  In other words,
  we can use logic to represent rich mental models.
 

 Pei,

You're responding to my post (YKY), not Pei Wang.

 Can you identify a single metalogical dispute - about how to resolve
 paradoxes in logic, or,say, which form of logic to use for a given type of
 problem  - that has been resolved by formally LOGICAL means? Can you give
 one actual example of what you have just asserted above - one such
 paradox-resolving mental model that really was logical?

 My contention would be that metalogical reasoning - depends on a totally
 different kind of reasoning to that of logic itself. And you cannot
 *logically* derive any new kind of logic - nonmonotonic, fuzzy etc - from any
 previous kind. Nor can you derive any new branch of mathematics
 *mathematically* from any previous kind. The foundations of logic, maths and
 rational systems generally do not lie in themselves - which, if true, is
 rather important for General Intelligence.

 But by all means disprove me.

Guys like Bertrand Russell are concerned about the relation of
mathematical logic to Truth.

AGI builders who use logic, use it as a computational structure to
*represent* the world.  It is understood that a representation is an
*approximation* of the real thing.

Your problem stems from the fact that you think there's a
correspondence of AGI logic statements with Truths and thus there
emerges the problem of meta-logic -- how do we know the Truths about
Truths.

The correct way to understand logic-based AGI is that the AGI logic
statements do not correspond to Truths, but they correspond to
hypotheses about the world in an *approximate* manner.  There is no
escape from this.  Welcome to the postmodern, I guess =)

YKY




Re: [agi] need some help with loopy Bayes net

2008-07-06 Thread YKY (Yan King Yin)
On 7/5/08, Pei Wang [EMAIL PROTECTED] wrote:
 Though there is a loop, YKY's problem is not caused by circular
 inference, but by multiple Inheritances, that is, different
 inference paths give different conclusions. This is indeed a problem
 in Bayes net, and there is no general solution in that theory, except
 in special cases.

 This problem is solved in NARS mainly by the confidence measurement,
 though inference trails are also relevant.

 See my Reference Classes and Multiple Inheritances at
 http://www.cogsci.indiana.edu/farg/peiwang/papers.html#reference_classes

I like the general direction of your approach (especially using w+ and
w-), but there are some problems...

Take an example:
   S1:  AGIers are usually nerds
   S2:  Nerds are usually socially awkward
   -
   S:  AGIers are probably socially awkward

Suppose the frequencies of S, S1 and S2 are f, f1 and f2, resp.

If you draw the Venn diagram, you'd find that one piece is missing if
we want to deduce f, namely the portion of AGIers who are not nerds
but who are socially awkward.

In your paper you seem to suggest a heuristic rule:
   f = f1 f2 / (f1 + f2 - f1 f2)

I'm not saying the rule is bad, just wondering what kind of
assumptions you're making?
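
For concreteness, plugging made-up frequencies into the quoted rule:

def deduce(f1, f2):
    # The heuristic rule quoted above: f = f1*f2 / (f1 + f2 - f1*f2)
    return (f1 * f2) / (f1 + f2 - f1 * f2)

f1, f2 = 0.8, 0.8   # S1 and S2 both "usually" true
print(round(deduce(f1, f2), 3))   # 0.667 -- above the naive chain 0.8*0.8 = 0.64

The question stands: what assumption about the missing piece of the
Venn diagram makes this the right way to combine f1 and f2?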

YKY




[agi] need some help with loopy Bayes net

2008-07-04 Thread YKY (Yan King Yin)
I'm considering nonmonotonic reasoning using Bayes nets, and got stuck.

There is an example on p483 of J Pearl's 1988 book PRIIS (Probabilistic
Reasoning in Intelligent Systems):

Given:
birds can fly
penguins are birds
penguins cannot fly

The desiderata is to conclude that penguins are birds, but penguins
cannot fly.

Pearl translates the KB to:
   P(f | b) = high
   P(f | p) = low
   P(b | p) = high
where high and low means arbitrarily close to 1 and 0, respectively.

If you draw this on paper you'll see a triangular loop.

Then Pearl continues to deduce:

Conditioning P(f | p) on both b and ~b:
    P(f | p) = P(f | p,b) P(b | p) + P(f | p,~b) [1 - P(b | p)]
             >= P(f | p,b) P(b | p)

Thus
    P(f | p,b) <= P(f | p) / P(b | p), which is close to 0.

Thus Pearl concludes that given penguin and bird, fly is not true.
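
Checking the bound with made-up "arbitrarily close" numbers:

p_f_given_p = 0.05   # P(f | p): penguins rarely fly
p_b_given_p = 0.95   # P(b | p): penguins are (almost certainly) birds

print(round(p_f_given_p / p_b_given_p, 3))   # 0.053, so P(f | p,b) <= 0.053

So the bound does force P(f | p,b) close to 0, as Pearl says.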

But I found something wrong here.  It seems that the Bayes net is
loopy and we can conclude that fly given penguin and bird can be
either 0 or 1.  (The loop is somewhat symmetric).

Ben, do you have a similar problem dealing with nonmonotonicity using
probabilistic networks?

YKY




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-24 Thread YKY (Yan King Yin)
On 6/23/08, William Pearson [EMAIL PROTECTED] wrote:

 The base beliefs shared between the group would be something like

  - The entities will not have goals/motivations inherent to their
 form. That is robots aren't likely to band together to fight humans,
 or try to take over the world for their own means.  These would have
 to be programmed into them, as evolution has programmed group loyalty
 and selfishness into humans.
 - The entities will not be capable of fully wrap around recursive
 self-improvement. They will improve in fits and starts in a wider
 economy/ecology like most developments in the world *
 - The goals and motivations of the entities that we will likely see in
 the real world will be shaped over the long term by the forces in the
 world, e.g. evolutionary, economic and physics.

 Basically an organisation trying to prepare for a world where AIs
 aren't sufficiently advanced technology or magic genies, but still
 dangerous and a potentially destabilising world change. Could a
 coherent message be articulated by the subset of the people that agree
 with these points. Or are we all still too fractured?


What you propose sounds reasonable, but I'm more interested in how to
make AGI developers collaborate, which is more urgent to myself.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Also, YKY, I can't help but note that your currently approach seems
 extremely similar to Texai (which seems quite similar to Cyc to me),
 more so than to OpenCog Prime (my proposal for a Novamente-like system
 built on OpenCog, not yet fully documented but I'm actively working on
 the docs now).

 I wonder why you don't join Stephen Reed on the texai project?  Is it
 because you don't like the open-source nature of his project?

You have built an AGI enterprise (or are at least on the way to one).  Often
the *people* matter more than the technology.  I *need* to collaborate
with the community in order to win.  And vice versa.  Texai is closer
to my theory but you have a bigger community.  I don't have the
resources to rebuild the infrastructure that you have, eg the virtual
reality embodiment etc.

Opensource is such a thorny issue.  I don't have a clear idea yet...

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
Hi Ben,

Note that I did not pick FOL as my starting point because I wanted to
go against you, or be a troublemaker.  I chose it because that's what
the textbooks I read were using.  There is nothing personal here.
It's just like Chinese being my first language because I was born in
China.  I don't speak bad English just to sound different.

I think the differences in our approaches are equally superficial.  I
don't think there is a compelling reason why your formalism is
superior (or inferior, for that matter).

You have domain-specific heuristics;  I'm planning to have
domain-specific heuristics too.

The question really boils down to whether we should collaborate or
not.  And if we want meaningful collaboration, everyone must exert a
little effort to make it happen.  It cannot be one-way.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 1) representing uncertainties in a way that leads to tractable, meaningful
 logical manipulations.  Indefinite probabilities achieve this.  I'm not saying
 they're the only way to achieve this, but I'll argue that single-number,
 Walley-interval, fuzzy, or full-pdf approaches are not adequate for various
 reasons.

First of all, the *tractability* of your algorithm depends on
heuristics that you design, which are separable from the underlying
probabilistic logic calculus.  In your mind, these 2 things may be
mixed up.

Indefinite probabilities DO NOT imply faster inference.
Domain-specific heuristics do that.

Secondly, I have no problem at all, with using your indefinite
probability approach.

It's a laudable achievement what you've accomplished.

Thirdly, probabilistic logics -- of *any* flavor -- should
[approximately] subsume binary logic if they are sound.  So there is
no reason why your logic is so different that it cannot be expressed
in FOL.

Fourthly, the approach that I'm more familiar with is interval
probability.  I acknowledge that you have gone further in this
direction, and that's a good thing.

 2) using inference rules that lead to relatively high-confidence uncertainty
 propagation.  For instance term logic deduction is better for uncertain
 inference than modus ponens deduction, as detailed analysis reveals

I believe term logic is translatable to FOL -- Fred Sommers mentioned
that in his book.

 3) propagating uncertainties meaningfully through abstract logical
 formulae involving nested quantifiers (we do this in a special way in PLN
 using third-order probabilities; I have not seen any other conceptually
 satisfactory solution)

Again, that's well done.

But are you saying that the same cannot be achieved using FOL?

 4) most critically perhaps, using uncertain truth values within inference
 control to help pare down the combinatorial explosion

Uncertain truth values DO NOT imply faster inference.  In fact, they
slow down inference wrt binary logic.

If your inference algorithm is faster than resolution, and it's sound
(so it subsumes binary logic), then you have found a faster FOL
inference algorithm.  But that's not the case;  what you're doing is
applying domain-specific heuristics.

 How these questions are answered matters a LOT, and my colleagues
 and I spent years working on this stuff.  It's not a matter of converting
 between equivalent formalisms.

I think one can do
indefinite probability + FOL + domain-specific heuristics
just as you can do
indefinite probability + term logic + domain-specific heuristics
but it may cost an amount of effort that you're unwilling to pay.

This is a very sad situation...
YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

Calm down =)

I'll use domain-specific heuristics just as you do.  There's nothing
wrong with textbooks.

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

Yes, I acknowledge that you have novel ideas.  But do you really think
I'm so dumb that I ONLY use textbook ideas?  I try to integrate
existing methods.  My style of innovation is kind of subtle.

You have done something new, but not so new as to be in a totally
different dimension.

YKY




[agi] modus ponens

2008-06-03 Thread YKY (Yan King Yin)
Modus ponens can be defined in a few ways.

If you take the binary logic definition:
A -> B  means  ~A v B
you can translate this into probabilities but the result is a mess.  I
have analysed this in detail but it's complicated.  In short, this
definition is incompatible with probability calculus.

Instead I simply use
   A -> B  meaning  P(B|A) = p
where p is the probability.  You can change p into an indefinite
probability or interval.
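
To make the contrast concrete, here's a small Python sketch (my own
illustration, with toy numbers):

# Material conditional:     P(~A v B) = 1 - P(A) + P(A and B)
# Conditional probability:  P(B | A)  = P(A and B) / P(A)
p_a, p_ab = 0.5, 0.1
print(1 - p_a + p_ab)  # 0.6 -- the material reading
print(p_ab / p_a)      # 0.2 -- the conditional reading; they diverge badly

# Under the conditional reading, modus ponens propagates an interval:
# P(B) = P(B|A) P(A) + P(B|~A) (1 - P(A)), with P(B|~A) unknown in [0,1]
def modus_ponens_interval(a, p):
    return p * a, p * a + (1.0 - a)

print(modus_ponens_interval(0.9, 0.95))  # (0.855, 0.955)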

Is your modus ponens different from this?

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
Ben,

If we don't work out the correspondence (even approximately) between
FOL and term logic, this conversation would not be very fruitful.  I
don't even know what you're doing with PLN.  I suggest we try to work
it out here step by step.  If your approach really makes sense to me,
you will gain another helper =)   Also, this will be good for your
project's documentation.

I have some examples:

Eng:  Some philosophers are wise
TL:  +Philosopher+Wise
FOL:  exists X. philosopher(X) & wise(X)

Eng:  Romeo loves Juliet
TL:  +-Romeo* + (Loves +-Juliet*)
FOL:  loves(romeo, juliet)

Eng:  Women often have long hair
TL:  ?
FOL:  woman(X) -> long_hair(X)

I know your term logic is slightly different from Fred Sommers'.  Can
you fill in the TL parts and also attach indefinite probabilities?

On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

Re assigning probabilities to terms...

"Term" in term logic is completely different from "term" in FOL.  I
guess terms in term logic roughly correspond to predicates or
propositions in FOL.  Terms in FOL seem to have no counterpart in term
logic.

Anyway there should be no confusion here.  Propositions are the ONLY
things that can have truth values.  This applies to term logic as well
(I just refreshed my memory of TL).  When truth values go from { 0, 1
} to [ 0, 1 ], we get single-value probabilistic logic.  All this has
a very solid and rigorous foundation, based on so-called model theory.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Propositions are not the only things that can have truth values...

"Terms" in term logic can have truth values.  But such "terms"
correspond to propositions in FOL.  There is absolutely no confusion
here.

 I don't have time to carry out a detailed mathematical discussion of
 this right now...

 We're about to (this week) finalize the PLN book draft ... I'll send
 you a pre-publication PDF early next week and then you can read it and
 we can argue this stuff after that ;-)

Thanks a lot =)

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:


  I believe that the crisp (i.e. certain or very near certain) KR for these
 domains will facilitate the use of FOL inference (e.g. subsumption) when I
 need it to supplement the current Texai spreading activation techniques for
 word sense disambiguation and relevance reasoning.

 I expect that OpenCog will focus on domains that require probabilistic
 reasoning, e.g. pattern recognition, which I am postponing until Texai is
 far enough along that expert mentors can teach it the skills for
 probabilistic reasoning.



Your approach is sensible, indeed similar to mine -- I'm also experimenting
with crisp logic only.  But there are 2 problems:

1.  Probabilistic inference cannot be grafted onto crisp logic easily.
The changes may be so great that much of the original work will be rendered
useless.

2.  You think we can do program synthesis with crisp logic only?  This has
profound implications if true...

YKY





Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Do you have any insights on how this learning will be done?

That research area is known as ILP (inductive logic programming).
It's very powerful in the sense that almost anything (eg, any Prolog
program) can be learned that way.  But the problem is that the
combinatorial explosion is so great that you must use heuristics and
biases.  So far no one has applied it to large-scale commonsense
learning.  Some Cyc people have experimented with it recently.
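
To show the flavor (and where the explosion comes from), here is a toy
generate-and-test sketch in Python -- my own illustration, nothing like
a real ILP system; the predicates and examples are made up:

# Learn a clause  flies(X) :- Body  from examples.
facts = {("bird", "tweety"), ("bird", "polly"), ("penguin", "polly")}
positives = {"tweety"}   # things that fly
negatives = {"polly"}    # things that don't

def holds(literal, x):
    # a literal is either "pred" or "not pred"
    if literal.startswith("not "):
        return (literal[4:], x) not in facts
    return (literal, x) in facts

# Candidate bodies.  A real system must search a space that blows up
# combinatorially with more predicates, variables and longer clauses --
# hence the need for heuristics and biases.
candidates = [("bird",), ("penguin",), ("bird", "not penguin")]

for body in candidates:
    covers_pos = all(all(holds(l, x) for l in body) for x in positives)
    covers_neg = any(all(holds(l, x) for l in body) for x in negatives)
    if covers_pos and not covers_neg:
        print("flies(X) :- " + ", ".join(l + "(X)" for l in body))
# prints:  flies(X) :- bird(X), not penguin(X)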

  Cyc put a lot of effort into a natural language interface and failed.  What 
 approach will you use that they have not tried?  FOL requires a set of 
 transforms, e.g.

 All men are mortal -> forall X, man(X) -> mortal(X) (hard)
 Socrates is a man -> man(Socrates) (hard)
 -> mortal(Socrates) (easy)
 -> Socrates is mortal (hard).

 We have known for a long time how to solve the easy parts.  The hard parts 
 are AI-complete.  You have to solve AI before you can learn the knowledge 
 base.  Then after you build it, you won't need it.  What is the point?


We don't need 100% perfect NLP ability to learn the KB.  An NL
interface that can accept a simple subset of English will do.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:


  All of the work to date on program generation, macro processing,
 application configuration via parameters, compilation, assembly, and program
 optimization has used crisp knowledge representation (i.e. non-probabilistic
 data structures).  Dynamic, feedback based optimizing compilers, such as the
 Java HotSpot VM, do keep track of program path statistics in order to decide
 when to inline methods for example.  But on the whole, the traditional
 program development life cycle is free of probabilistic inference.


How about these scenarios:

1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few'
times, write it out directly.  -- this requires fuzziness

2.  The gain of using algorithm X on this problem is likely to be small.
-- requires probability
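
Scenario 1 is easy to sketch in Python (my own illustration; the
membership function and the threshold are made up):

def many(n, lo=3, hi=10):
    # piecewise-linear fuzzy membership of n in 'many'
    if n <= lo:
        return 0.0
    if n >= hi:
        return 1.0
    return (n - lo) / (hi - lo)

def emit(task, n):
    # defuzzify with a simple threshold to pick the code shape
    if many(n) > 0.5:
        return "for i in range(%d): %s()" % (n, task)
    return "; ".join(["%s()" % task] * n)

print(emit("step", 2))   # step(); step()
print(emit("step", 50))  # for i in range(50): step()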


 I have a hypothesis that program design (to satisfy requirements), and in
 general engineering design, can be performed using crisp knowledge
 representation - with the provision that I will use cognitively-plausible
 spreading activation instead of, or to cache, time-consuming deductive
 backchaining.  My current work will explore this hypothesis with regard to
 composing simple programs that compose skills from more primitive skills.
 I am adapting Gerhard Wickler's Capability Description Language
 (http://www.aiai.ed.ac.uk/oplan/cdl/index.html) to match capabilities
 (e.g. program composition capabilities) with tasks (e.g. clear a
 StringBuilder object).  CDL conveniently uses a crisp FOL knowledge
 representation.  Here
 (http://texai.svn.sourceforge.net/viewvc/texai/BehaviorLanguage/data/method-definitions.bl?view=markup)
 is a Texai behavior language file that contains capability descriptions
 for primitive Java compositions.  Each of these primitive capabilities is
 implemented by a Java object that can be persisted in the Texai KB as RDF
 statements.



Maybe you mean that spreading activation is used to locate candidate
facts/rules, over which actual deductions are then attempted?  That
sounds very promising.  One question is how to learn the associations
between nodes.

YKY





Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
On 6/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 eats(x, mouse)

That's a perfectly legitimate proposition.  So it is perfectly OK to write:

 P( eats(x,mouse) )

Note here that I assume your "mouse" refers to a particular instance
of a mouse, as in:

eats(X, mouse_1234)

What's confusing is:

 for instance the term

 cat

 has probability

  P(cat) = P(x is a cat | x is in my experience base)

In FOL, the term "term" means either a constant, a variable, or a
function applied to a tuple of other terms.  In other words, terms in
FOL are objects, not propositions.

Examples of terms in FOL:

stray_cat_1234
mary_queen_of_scots
X
mother(X)
mother(mary_queen_of_scots)
etc...

If you want to say

P ( X is a cat | X is in my experience base )

the corresponding FOL proposition should be:

cat(X)

instead of

cat.

I think your notation "cat" translates to cat(X) in FOL.

Your experience base may contain an instances such as:

cat( stray_cat_1234 )
female( mary_queen_of_scots )
eats( cat_4567, mouse_890 )
etc...

 You can map terms and free-variable expressions into propositions if
 you want to, though...

It's a bit confusing to map OpenCog terms to FOL propositions.  IMO
terms should not have probabilities attached to them.  Anyway let me
just leave that decision to you.  No more comments.

 However these propositional representations are a bit awkward and are
 not the way to represent things for the PLN rules to be simply
 applied... it is nicer by far to leave the experiential semantics
 implicit...

I'm interested to see how this is done.

1.  The contents of your experience base can be translated to FOL.
2.  Reasoning algorithms in FOL such as resolution are known to be
quite complex and slow.
3.  You claim that your reasoning algorithm is faster.
4.  That means, you've found a heuristic to reason quickly in FOL
(assuming your results can be translated back to FOL in polynomial
time).

More likely, though, your algorithm is incomplete wrt FOL, ie, there
may be some things that FOL can infer but PLN can't.  Either that, or
your algorithm may actually be slower than FOL.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
Well, it's still difficult for me to get a handle on how your logic
works; I hope your docs will provide some info on the correspondence
between FOL and PLN.

I think it's fine that you use the term "atom" in your own way.  The
important thing is that whatever objects you attach probabilities to,
that class of objects should correspond to *propositions* in FOL.
From there it would be easier for me to understand your ideas.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 YKY, how are you going to solve the natural language interface problem?  You 
 seem to be going down the same path as CYC.  What is different about your 
 system?

One more point:

Yes, my system is similar to Cyc in that it's logic-based.  But of
course, it will be augmented with probabilities and fuzziness, in some
ways yet to be figured out.

I guess your idea is that the language model should be the basis of
the AGI, whereas my idea is that AGI should be based on logical
representation.  The difference may not be as great as you think.

You may think that natural language is fluid and therefore more
suitable for AGI as compared to logic.  Let me point out that logic,
equipped with learning, can be equally fluid.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
Ben,

I should not say that FOL is "the standard" of KR, merely that it's
more popular.  I think researchers ought to be free to explore
whatever they want.

Can we simply treat PLN as a black box, so you don't have to explain
its internals, and just tell us what the input and output formats are?

The ideal is to have everyone work on the same KR, but if that's
unattainable, the next best thing is to enable different modules to
interoperate as easily as possible...

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-01 Thread YKY (Yan King Yin)
Ben, Thanks for the answers.

One more question about the term "atom" as used in OpenCog.

In logic, an "atom" is a predicate applied to some arguments, for example:
   female(X)
   female(mary)
   female(mother(john))
   etc.

Truth values only apply to propositions, though a proposition may
consist of a single atom, as above.  Still, there is a distinction:
probabilities should be attached to propositions, not to atoms (in
logic).

Do OpenCog atoms roughly correspond to logical atoms?
And what is the counterpart of (logic) propositions in OpenCog?

I suggest not using non-standard terminology 'cause it's very confusing...

YKY




[agi] Uncertainty

2008-06-01 Thread YKY (Yan King Yin)
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:
  Can you give an example of something expressed in PLN that
  is very hard or impossible to express in FOL?

 "Mary is probably female"

 Not impossible, as Ben says, just awkward.  The problem is that nearly every 
 statement has uncertain truth value, as well as uncertainty about the degree 
 of uncertainty.


I have briefly surveyed the research on uncertain reasoning, and found
out that no one has a solution to the entire problem.  Ben and Pei
Wang may be working towards their solutions but a satisfactory one may
be difficult to find.

Yes, we need uncertainties about uncertainties, ie, 2nd-order
probabilities.  This can be represented as either probability of
probability (Ben's approach), or interval probability.  It's hard to
say which approach is superior.  I guess it doesn't matter that much
at this stage.  Both problems have been solved by other researchers,
but the algorithms are rather complex.
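
The interval case is at least easy to state in the simplest setting.  A
minimal Python sketch (my own illustration; it assumes the two events
are independent, which is exactly the assumption the serious
interval-probability calculi work to avoid -- hence their complexity):

def and_interval(a, b):
    # conjunction of independent events: the bounds simply multiply
    (la, ua), (lb, ub) = a, b
    return la * lb, ua * ub

def or_interval(a, b):
    # disjunction of independent events, by inclusion-exclusion
    (la, ua), (lb, ub) = a, b
    return la + lb - la * lb, ua + ub - ua * ub

print(and_interval((0.7, 0.9), (0.5, 0.8)))  # ~(0.35, 0.72)
print(or_interval((0.7, 0.9), (0.5, 0.8)))   # ~(0.85, 0.98)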

The question of how we can justify the values of 2nd-order uncertainty
can be solved via machine learning:  that is, we prove that the
learning algorithm will make the values converge to *consistency*.

BTW, there is also a "qualitative" approach which tries to dispense
with exact values, but this approach is not needed if we use 2nd-order
values.

Another type of uncertainty is fuzziness and the same situation occurs
as in probability -- we need 2nd-order fuzziness.  Again, the interval
fuzzy logic problem has been solved, with complex algorithms.

Step 2 is to combine the above solutions.

Step 3 is to find heuristics to approximate step 2, 'cause the exact
algorithm would definitely be too inefficient.

I suppose one can try to tackle steps 2 and 3 at the same time.

I don't really understand NARS or Ben's solution so I have to give
them the benefit of the doubt ;)

YKY




Re: [agi] Uninterpreted RDF terms

2008-05-19 Thread YKY (Yan King Yin)
On 5/18/08, Stephen Reed [EMAIL PROTECTED] wrote:

 For the others on this list following my progress, the example is from a
set of essential capability descriptions that I'll use to bootstrap the
skill acquisition facility of the Texai dialog system.  The
subsumption-based capability matcher is done.  I'm writing Java code that
implements each of these capabilities.  That should be completed in a few
more days, and then I'll fit that into the already completed dialog system.
At that point I should be able to begin exploring what essential utterances
will be needed to acquire skills by being taught, and generate Java programs
to perform them.

This is a good step towards program synthesis, but I guess real
programming requires reasoning processes more sophisticated than subsumption
of pre- and post-conditions.

My system (I'm working on the prototype of it) uses exclusively declarative
knowledge, so it would be more suitable for simple commonsense reasoning
rather than program synthesis.  The latter will require building up
program-related concepts from the ground up.  That will be a long term goal,
and a challenging one.

If you use the Behavioral Language, one problem is that the
procedural knowledge is separated from the declarative knowledge -- the 2
parts may have no connection at all.

The problem with my approach is that reasoning about programs may be very
expensive as it requires many commonsense steps.

YKY



Re: [agi] standard way to represent NL in logic?

2008-05-08 Thread YKY (Yan King Yin)
On 5/7/08, Mike Tintner [EMAIL PROTECTED] wrote:
 YKY : Logic can deal with almost everything, depending on how much effort
 you put in it =)

 Les sanglots longs des violons de l'automne
 Blessent mon coeur d'une langueur monotone.

 You don't just read those words, (and most words), you hear them. How's
 logic going to hear them?


Google translates that into English as:
"The long sobbing violins of autumn hurt my heart with a monotonous languor."

Believe me, an AGI is potentially capable of appreciating the sounds
of the verse and other such nuances.  I won't go into the details, but
the input sentence would be represented as a raw sensory event, and it
is up to abductive interpretation to derive its meanings.  That means,
the AGI would understand it superficially as "The long sobbing
violins..." etc., augmented with other logic formulae that convey
the other nuances.

You're talking about some very subtle effects, and that's not my focus
right now.  Right now I'm focusing on simple and practical AGI.  Your
stuff is potentially solvable by logic-based AGI, but I won't be
spending time on it now.

YKY



[agi] standard way to represent NL in logic?

2008-05-07 Thread YKY (Yan King Yin)
Is there any standard (even informal) way of representing NL sentences in logic?

Especially complex sentences like "John eats spaghetti with a fork" or
"The dog that chased the cat jumped over the fence", etc.

I have my own way of translating those sentences, but having a
standard would be much better.
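
For instance, one convention I've seen -- neo-Davidsonian event
semantics, just one possibility and not a standard -- renders the first
sentence as:

   exists E. eating(E) & agent(E, john) & patient(E, spaghetti_1)
             & instrument(E, fork_1)

where E is an event variable.  The "with a fork" modifier becomes just
another conjunct, so modifiers can stack without changing the verb's
arity.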

Maybe we need to create such a standard, using a wiki-like place where
people can contribute their NL-to-logic translations.

YKY


