I have refined my P(Z) logic a bit. Now the truth values are all
unified into one type, a probability distribution over Z, which has a
pretty nice interpretation. The new material is in sections 4.4.2 and
4.4.3.
http://www.geocities.com/genericai/P-Z-logic-excerpt-12-Jan-2009.pdf
I'm wondering if
Do you have any experimental results supporting your proposed probabilistic
fuzzy logic implementation? How would you devise such an experiment (for
example, a prediction task) to test alternative interpretations of logical
operators like AND, OR, NOT, IF-THEN, etc.? Maybe you could manually
(Also, instead of a disclaimer about political correctness, couldn't you
just find examples that don't reveal your obsession with sex?)
OK, I've eliminated one instance.
http://www.geocities.com/genericai/P-Z-logic-excerpt-12-Jan-2009.pdf
There are still two mentions of sex; I'll eliminate
On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov robot...@gmail.com wrote:
I'm more interested in understanding the relationship between
an inference system and the environment (the rules of the game) that it
allows one to reason about,
Next thing I'll work on is the planning module. That's where the AGI
DARPA buys G. Tononi for $4.9 million! For what amounts to little more
than vague hopes that any of us here could have dreamed up. Here I am, up to
my armpits in an actual working proposition with a real science basis...
scrounging for pennies. Hmmm... maybe if I sidle up and adopt an aging
You can start a PhD without having an MS first, but you'll still need to
take all the coursework corresponding to the MS
What kind of courses are those MS ones? I may or may not have
that background knowledge, through self-teaching...
And I think this makes sense! The PhD is supposed
Thanks for all the info...
I'll try both UK and US... (OK and Ireland too!)
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
Hi group,
I'm considering getting a PhD somewhere, and I've accumulated some
material for a thesis in my 50%-finished AGI book. I think getting a
PhD will put my work in a more rigorous form and get it published.
Also it may help me get funding afterwards, either in academia or in
the business
I got my PhD there in 1989 in math, not AI
Let me see... you were about 22 in 1989? I was still an undergrad at
that age...
On the contrary, getting a PhD is an astoundingly poor strategy for raising
$$ for a startup. If you have a talent for biz sufficient to raise $$ for a
startup, you can always get some prof to join your team to lend you academic
credibility.
It is also useful in terms of lending you more
Do you have an MS degree?
I don't have an MS.
In Europe, it's sometimes the case that after you get an MS, you can do a
PhD with no additional coursework, only thesis work.
That sounds good, but in Europe I may need to spend some time learning
a third language... =(
In the US, considerably
If...you want a non-research career, a Ph.D. is definitely not for you.
I want to be either an entrepreneur or a researcher... it's hard to
decide. What does AGI need most? Further research, or a sound
business framework? It seems that both are needed...
How about funding from academia -- would that be significant? I mean,
can I expect to get research grants right after I get a PhD?
Depends how much time your thesis supervisor has gotten you writing
grant applications during your third year ;)
Generally speaking, if the $$ amount of research grants is bigger
than, say, the return from investing my tuition fees in some business
projects, then it seems that the PhD is worth it (in terms of
On Wed, Nov 5, 2008 at 10:18 PM, Mike Tintner [EMAIL PROTECTED] wrote:
YKY,
As I was saying, before I so rudely interrupted myself - re the narrow AI vs
AGI problem difference:
*the syllogistic problems of logic - is Aristotle mortal? etc - which you
mainly use as examples - are narrow AI
On Thu, Nov 6, 2008 at 12:55 AM, Harry Chesley [EMAIL PROTECTED] wrote:
Personally, I'm not making an AGI that has emotions...
So you take the view that, despite our minimal understanding of the basis of
emotions, they will only arise if designed in, never spontaneously as an
emergent
On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Personally, I'm not making an AGI that has emotions, and I doubt if
emotions are generally desirable in AGIs, except when the goal is to
make human companions (and I wonder why people need them anyway, given
that there're
On Wed, Nov 5, 2008 at 6:05 AM, Harry Chesley [EMAIL PROTECTED] wrote:
The question of when it's ethical to do AGI experiments has bothered me
for a while. It's something that every AGI creator has to deal with
sooner or later if you believe you're actually going to create real
intelligence
Hi Ben and others,
After some more thinking, I've decided to try the virtual credit approach after all.
Last time Ben's argument was that the virtual credit method confuses
for-profit and charity emotions in people. At that time it sounded
convincing, but after some thinking I realized that it is
On Wed, Oct 29, 2008 at 6:34 PM, Trent Waddington
Don't forget my argument..
I don't recall hearing an argument from you. All your replies to me
are rather rude one liners.
YKY
On Sun, Oct 12, 2008 at 8:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I don't think that's a major difference conceptually, as there's a
constant-time
conversion between the two representations.
In my approach (which is not even implemented yet) the KB contains
rules that are used to
On Sun, Oct 12, 2008 at 12:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
OpenCog has VariableNodes in the AtomTable, which are used to represent
variables in the sense of FOL ...
I'm still unclear as to how OC performs inference with variables,
unification, etc. Maybe you can explain that
On Tue, Oct 7, 2008 at 11:33 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
I was trying to find a way so we can collaborate on one project, but
people don't seem to like the virtual credit idea.
No, no we don't :-)
Why not?
As has been said previously, there have been AI projects in the past
which tried this credits or shares route which turned out to be very
unsuccessful. The problem with issuing credits is that, rightly or
wrongly, an expectation of short term financial reward is built up in
the minds of some
On Tue, Oct 7, 2008 at 8:13 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
A good idea and a euro will get you a cup of coffee. Whoever said you
need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up, already has more ideas
of his own
On Tue, Oct 7, 2008 at 7:55 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Cyc's DB is not publicly modifiable, but it's **huge** ... big enough that
its bulk would take others a really long time to replicate
A competent AGI should be able to absorb Cyc's knowledge, and I will
probably do so
On Tue, Oct 7, 2008 at 9:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
But whichever route you pick, follow it with conviction. If you flag
your project open source and then start talking about protecting
your ideas and trying to measure the exact value of everybody's
contributions so
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED]
One way of going about it would be to let each person create their own
instance, which would have access to the global body of facts but
would be somewhat separate. This would prevent people from
contaminating the global
Hi all,
I need some advice as to open or closed source for my AGI project.
This is a very difficult choice as there are pros and cons on each
side.
The main reason why opensource is bad is that we cannot protect
innovative ideas from being copied by others. This may be a
disincentive for
On Tue, Oct 7, 2008 at 11:50 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I still don't understand why you think a simple interface for entering facts
is so important... Cyc has a great UI for entering facts, and used it to
enter millions of them already ... how far did it get them toward AGI???
On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
We are talking about 2 things:
1. Using an ad hoc parser to translate NL to logic
2. Using an AGI to parse NL
I'm not sure what you mean by parse in step 2
Sorry, to put it more accurately:
#1 is using an ad hoc NLP
On Tue, Sep 30, 2008 at 12:50 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
I'm planning to make the project opensource, but I want to have a web
site that keeps a record of contributors' contributions. So that's
taking some extra time.
Most wikis automatically keep track of who made
what
On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski [EMAIL PROTECTED] wrote:
How much will you focus on natural language? It sounds like you want
that to be fairly minimal at first. My opinion is that chatbot-type
programs are not such a bad place to start-- if only because it is
good publicity.
I
On Mon, Sep 29, 2008 at 9:38 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
It seems to me the main limitation is that the language model has to be
described formally in Cycl, as a lexicon and rules for parsing and
disambiguation. There seems to be no mechanism for learning natural language
by
On Sun, Sep 28, 2008 at 5:23 PM, David Hart [EMAIL PROTECTED] wrote:
Actually, It's been my hunch for some time that the richness and importance
of Helen Keller's sensational environment is frequently grossly
underestimated. The sensations of a deaf/blind person still include
proprioception,
On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Parsing English sentences into sets of formal-logic relationships is not
extremely hard given current technology.
But the only feasible way to do it, without making AGI breakthroughs
first, is to accept that these
On Tue, Sep 30, 2008 at 1:51 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
My point for YKY was (as you know) not that this is an impossible problem
but that it's a fairly deep AI problem which is not provided out-of-the-box
in any existing NLP toolkit. Solving disambiguation thoroughly is
On Sun, Sep 28, 2008 at 1:22 PM, Eric Burton [EMAIL PROTECTED] wrote:
The purpose of YKY's invocation of Helen Keller is interestingly at
odds with the usage that appears in the Jargon File.
In choosing Helen-Keller mode, I'm not deliberately trying to make
things harder for the baby AGI, it's
Hi group,
I'm starting an AGI project called G_0 which is focused on commonsense
reasoning (my long-term goal is to become the world's leading expert
in common sense). I plan to use it to collect commonsense knowledge
and to learn commonsense reasoning rules.
One thing I need is a universal
On Sun, Sep 28, 2008 at 5:21 AM, David Hart [EMAIL PROTECTED] wrote:
Hi YKY,
Can you explain what is meant by collect commonsense knowledge?
That means collecting facts and rules.
Example of a commonsense fact: apples are red
Example of a commonsense rule: if X is female X has an
On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Prolog is not fast; it is painfully slow for complex inferences due to using
backtracking as a control mechanism
The time-complexity issue that matters for inference engines is
inference-control ... i.e. dampening the
On Tue, Sep 23, 2008 at 6:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
I'm in the process of reading this paper:
http://www.jair.org/papers/paper1410.html
It might answer a couple of your questions. And, it looks like it has
an interesting proposal about generating heuristics from the
On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED] wrote:
No transfer? This paper suggests otherwise:
http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf
Well, people know that propositional SAT is fast, so
propositionalization is a tempting heuristic, but as the paper's
On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED]
No transfer? This paper suggests otherwise:
http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf
Sorry, I replied too quickly...
This paper does contribute to solving FOL inference problems, but it
is still inadequate
On Tue, Sep 23, 2008 at 9:20 PM, YKY (Yan King Yin)
Sorry, I replied too quickly...
This paper does contribute to solving FOL inference problems, but it
is still inadequate for AGI because the FOL is required to be
function-free. If you remember programming in Prolog, we often use
functors
On Thu, Sep 18, 2008 at 4:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Small question... aren't Bayesian network nodes just _conditionally_
independent: so that set A is only independent from set B when
d-separated by some set Z? So please clarify, if possible, what kind
of independence you
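Kingma's point can be checked numerically on the smallest chain network. A toy sketch in Python (the conditional probability tables are made up for illustration): in a chain A -> Z -> B, A and B are marginally dependent, but conditioning on the d-separating set {Z} makes A irrelevant to B.

```python
from itertools import product

# Chain network A -> Z -> B with hypothetical conditional probability tables.
p_a = {True: 0.3, False: 0.7}             # prior P(A)
p_z_given_a = {True: 0.9, False: 0.2}     # P(Z=True | A)
p_b_given_z = {True: 0.8, False: 0.1}     # P(B=True | Z)

def joint(a, z, b):
    """Joint probability of one assignment, factored along the chain."""
    pz = p_z_given_a[a] if z else 1 - p_z_given_a[a]
    pb = p_b_given_z[z] if b else 1 - p_b_given_z[z]
    return p_a[a] * pz * pb

def p(pred):
    """Probability of the event defined by pred(a, z, b)."""
    return sum(joint(a, z, b)
               for a, z, b in product([True, False], repeat=3)
               if pred(a, z, b))

# Once Z is fixed, A carries no further information about B.
p_b_given_az = p(lambda a, z, b: a and z and b) / p(lambda a, z, b: a and z)
p_b_given_z_only = p(lambda a, z, b: z and b) / p(lambda a, z, b: z)
print(p_b_given_az, p_b_given_z_only)  # equal (both 0.8)
```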
On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski [EMAIL PROTECTED] wrote:
Speaking of my BPZ-logic...
2. Good at quick-and-dirty reasoning when needed
Right now I'm focusing on quick-and-dirty *only*. I wish to make the
logic's speed approach that of Prolog (which is a fast inference
algorithm
BTW, if any AGI projects would like to incorporate my ideas, feel free
to do so, and I'd like to get involved too!
YKY
A somewhat revised version of my paper is at:
http://www.geocities.com/genericai/AGI-ch4-logic-9Sep2008.pdf
(sorry, it's now a book chapter and the bookmarks were lost during extraction)
On Tue, Sep 2, 2008 at 7:05 PM, Pei Wang [EMAIL PROTECTED] wrote:
I intend to use NARS confidence in a way
On Tue, Sep 2, 2008 at 12:05 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
but in a PLN approach this could be avoided by looking at
IntensionalInheritance B A
rather than extensional inheritance..
The question is how do you know when to apply the intensional
inheritance, instead of the
On Tue, Sep 9, 2008 at 4:27 AM, Pei Wang [EMAIL PROTECTED] wrote:
Sorry I don't have the time to type a detailed reply, but for your
second point, see the example in
http://www.cogsci.indiana.edu/pub/wang.fuzziness.ps , page 9, 4th
paragraph:
If these two types of uncertainty [randomness and
On 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
About indefinite/imprecise probabilities, you dismiss them as
overcomplicated, but you don't address the reason they were introduced in the
first place: In essence, to allow a rationally manipulable NARS-like
confidence measure that works
On 9/2/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
NARS confidence is not exactly derived from probability, but is
compatible with probability.
Sorry, I meant: the definition of NARS confidence is compatible with
probability, but NARS confidence, as used in NARS, defies the laws of
probability
On 9/1/08, Benjamin Johnston [EMAIL PROTECTED] wrote:
Thanks for your comments =)
--
1. Why just P,Z and B?
Three mechanisms seems somewhat arbitrary - I think you need to make a very
compelling case for why there are three and only three mechanisms.
Or, more
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:
Reading this, I get the view of ai as basically neural networks, where
each individual perceptron could be any of a number of algorithms
(decision tree, random forest, svm etc).
I also get the view that academics such as Hinton are trying
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:
Thanks for replying YKY
Is the logic learning you are talking about inductive logic programming?
If so, isn't ILP basically a search through the space of logic programs (I
may be way off the mark here!)? Wouldn't it be too large of a search
On 8/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
But if one doesn't need to get into implementation details, in the
simplest case one just has
VariableScopeLink X
    ImplicationLink
        ANDLink
            InheritanceLink X male
            InheritanceLink X Unmarried
        InheritanceLink X bachelor
Hi Ben,
Hope you don't mind providing more clarification...
In first-order logic there may be a rule such as:
male(X) ^ unmarried(X) -> bachelor(X)
We can convert this to a probabilistic rule:
P(bachelor(X) = true | male(X) = true, unmarried(X) = true ) = 1.0
but note that this rule
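As a sanity check on this translation, here is a toy sketch in Python (the joint-distribution weights are hypothetical): any joint distribution that puts zero mass on "male and unmarried but not bachelor" gives the conditional probability 1.0.

```python
# Toy joint distribution over worlds (male, unmarried, bachelor).
# The weights are invented; the only constraint encoded is that no world
# is male and unmarried without also being bachelor.
joint = {
    (True,  True,  True):  0.20,  # male, unmarried, bachelor
    (True,  False, False): 0.30,  # married male
    (False, True,  False): 0.25,  # unmarried non-male
    (False, False, False): 0.25,  # married non-male
}

def p(pred):
    """Probability of the event defined by pred(world)."""
    return sum(w for world, w in joint.items() if pred(world))

# P(bachelor | male, unmarried)
cond = p(lambda w: w[0] and w[1] and w[2]) / p(lambda w: w[0] and w[1])
print(cond)  # 1.0
```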
On 8/12/08, Ben Goertzel [EMAIL PROTECTED] wrote:
construct 1 =
    ImplicationLink
        ANDLink
            PredicateNode isMale
            PredicateNode isUnmarried
        PredicateNode isBachelor
It's just a relationship between functions (predicates being mathematical
functions from entities to truth
Ben, BTW, you may try inviting Stephen Muggleton to AGI'09. He
actually talked to me a few times despite that I knew very little
about ILP at that time. According to the Wikipedia page he is
currently working on an 'artificial scientist'.
http://en.wikipedia.org/wiki/Stephen_Muggleton
YKY
On 8/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Yes, but in PLN/ OpenCogPrime backward chaining *can* create hypothetical
logical relationships and then seek to estimate their truth values
See this page
http://opencog.org/wiki/OpenCogPrime:IntegrativeInference
and the five pages linked to
On 7/31/08, Mark Waser [EMAIL PROTECTED] wrote:
Categorization depends upon context. This was pretty much decided by the
late 1980s (look up Fuzzy Concepts).
This is an important point so I don't want to miss it. But I can't think of
a very good example of context-dependence of concepts.
On 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:
As I understand it, FOL is only Turing complete when
predicates/relations/functions beyond the ones in the data are
allowed. Would PLN naturally invent predicates, or would it need to be
told to specifically? Is this what concept creation does?
On 8/5/08, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, there is NO concept that is not dependent on context. There is NO
concept that is not infinitely fuzzy and open-ended in itself, period -
which is the principal reason why language is and has to be grounded
(although that needs
On 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:
Prolog (and logic programming) is Turing complete, but FOL is not a
programming language so I'm not sure.
You are right, I should have said FOL is Turing-complete within the
right inference system [such as Prolog], but only when
On 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
There is one common feature to all chairs: They are for the purpose of
sitting on. I think it is important that this is *not* a visual
characteristic.
It is possible to recognize chairs that cannot be sat on -- for
example, a broken chair, a
On 8/6/08, Jim Bromer [EMAIL PROTECTED] wrote:
You made some remarks, (I did not keep a record of them), that sounds
similar to some of the problems of conceptual complexity (or
complicatedness) that I am interested in. Can you describe something
of what you are working on in a little more
On 7/29/08, Benjamin Johnston [EMAIL PROTECTED] wrote:
I see the failure in this argument at step 2. Cybersex is a kind of erotic
interaction. Erotic interactions are often called sex in general
conversation, even though there are many kinds of erotic interactions that
don't result in the
Here is an example of a problematic inference:
1. Mary has cybersex with many different partners
2. Cybersex is a kind of sex
3. Therefore, Mary has many sex partners
4. Having many sex partners -> high chance of getting STDs
5. Therefore, Mary has a high chance of STDs
What's wrong with
On 7/28/08, Pei Wang [EMAIL PROTECTED] wrote:
Every rule is general to a degree, which means it ignores
exceptions. It is simply impossible to list all exceptions for any
given rule. This issue has been discussed by many people in the
non-monotonic logic community.
The solution is not to
On 7/28/08, Mike Tintner [EMAIL PROTECTED] wrote:
Mary says Clinton had sex with her.
Clinton says he wouldn't call that sex.
LOL...
But your examples are still symbolic in nature. I don't see why they
can't be reasoned about via logic.
In the above example the concept sex may be a fuzzy concept.
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Your inference trajectory assumes that cybersex and STD are
probabilistically independent within sex but this is not the case.
We only know that:
P(sex | cybersex) = high
P(STD | sex) = high
If we're also given that
P(STD | cybersex)
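Ben's point can be made concrete with a toy joint distribution (all numbers invented for illustration). It satisfies P(sex | cybersex) = 1 and P(STD | sex) = 0.5, yet P(STD | cybersex) = 0, so the chained conclusion in steps 3-5 does not follow without an independence assumption.

```python
# Worlds are (cybersex, sex, STD); weights are hypothetical.
joint = {
    (True,  True,  False): 0.3,  # cybersex counts as sex but carries no STD risk
    (False, True,  True):  0.4,  # physical sex, STD
    (False, True,  False): 0.1,  # physical sex, no STD
    (False, False, False): 0.2,  # neither
}

def p(pred):
    """Probability of the event defined by pred(world)."""
    return sum(w for world, w in joint.items() if pred(world))

p_sex_given_cyber = p(lambda w: w[0] and w[1]) / p(lambda w: w[0])  # 1.0
p_std_given_sex   = p(lambda w: w[1] and w[2]) / p(lambda w: w[1])  # ~0.5
p_std_given_cyber = p(lambda w: w[0] and w[2]) / p(lambda w: w[0])  # 0.0
print(p_sex_given_cyber, p_std_given_sex, p_std_given_cyber)
```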
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
PLN uses confidence values within its truth values, with a different
underlying semantics and math than NARS; but that doesn't help much with the
above problem...
There is a confidence-penalty used in PLN whenever an independence assumption
On 7/28/08, Pei Wang [EMAIL PROTECTED] wrote:
A new version of NARS (Open-NARS 1.1.0)...
I'm writing a paper on a probabilistic-fuzzy logic that is suitable
for AGI. It uses some of your ideas. I will put it on the net when
it's finished...
YKY
On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:
There's nothing wrong with the logical argument. What's wrong is that you
are presuming a purely declarative logic approach can work...which it can in
extremely simple situations, where you can specify all necessary facts.
My belief about
On 7/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
Why isn't science done via logic? Why don't physicists, chemists,
biologists, psychologists and sociologists just use logic to find out about
the world? Do you see why? And bear in mind that scientists are only formal
representatives of every
On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:
This is true, but the logic statements of the model are rather different
than simple assertions, much more like complex statements specifying
proportional relationships and causal links. I envision the causal links
as being at statements
On 7/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
YKY: The key word here is model. If you can reason with mental models,
then of course you can resolve a lot of paradoxes in logic. This
boils down to: how can you represent mental models? And they seem to
boil down further to logical
On 7/5/08, Pei Wang [EMAIL PROTECTED] wrote:
Though there is a loop, YKY's problem is not caused by circular
inference, but by multiple inheritance: different
inference paths give different conclusions. This is indeed a problem
in Bayes nets, and there is no general solution in that
I'm considering nonmonotonic reasoning using Bayes net, and got stuck.
There is an example on p. 483 of J. Pearl's 1988 book Probabilistic Reasoning in Intelligent Systems (PRIIS):
Given:
birds can fly
penguins are birds
penguins cannot fly
The desideratum is to conclude that penguins are birds, but that
penguins cannot fly.
Pearl translates the KB
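Pearl's move, sketched numerically (the probabilities below are illustrative, not from the book): read "birds can fly" as a high but non-certain conditional probability, and let the more specific penguin information override it.

```python
# Worlds are (penguin, bird, fly); weights are invented for illustration.
joint = {
    (True,  True,  False): 0.10,  # penguins are birds and cannot fly
    (False, True,  True):  0.60,  # typical birds fly
    (False, True,  False): 0.05,  # a few other flightless birds
    (False, False, False): 0.25,  # non-birds
}

def p(pred):
    """Probability of the event defined by pred(world)."""
    return sum(w for world, w in joint.items() if pred(world))

p_fly_given_bird  = p(lambda w: w[1] and w[2]) / p(lambda w: w[1])  # high (~0.8)
p_bird_given_peng = p(lambda w: w[0] and w[1]) / p(lambda w: w[0])  # 1.0
p_fly_given_peng  = p(lambda w: w[0] and w[2]) / p(lambda w: w[0])  # 0.0
print(p_fly_given_bird, p_bird_given_peng, p_fly_given_peng)
```

So "birds can fly" survives as a defeasible, probabilistic rule, while the desired conclusions about penguins hold exactly.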
On 6/23/08, William Pearson [EMAIL PROTECTED] wrote:
The base beliefs shared between the group would be something like
- The entities will not have goals/motivations inherent to their
form. That is robots aren't likely to band together to fight humans,
or try to take over the world for
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully
Hi Ben,
Note that I did not pick FOL as my starting point because I wanted to
go against you, or be a troublemaker. I chose it because that's what
the textbooks I read were using. There is nothing personal here.
It's just like Chinese being my first language because I was born in
China. I
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
1) representing uncertainties in a way that leads to tractable, meaningful
logical manipulations. Indefinite probabilities achieve this. I'm not saying
they're the only way to achieve this, but I'll argue that single-number,
Walley-interval,
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work. Can't you see that many others have tried to use
FOL and ILP already, and they've run into
Modus ponens can be defined in a few ways.
If you take the binary logic definition:
A -> B means ~A v B
you can translate this into probabilities but the result is a mess. I
have analysed this in detail but it's complicated. In short, this
definition is incompatible with probability
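One way to see the mess with two numbers (hypothetical): since P(~A v B) = 1 - P(A & ~B) = 1 - P(A)(1 - P(B|A)), the material implication can be nearly certain whenever A is rare, even though the conditional probability P(B|A) is low.

```python
# Hypothetical numbers: A is rarely true, and when it is, B usually fails.
p_a = 0.01
p_b_given_a = 0.1

# P(~A v B) = 1 - P(A & ~B) = 1 - P(A) * (1 - P(B|A))
p_material = 1 - p_a * (1 - p_b_given_a)
print(p_material)  # ~0.991: the "implication" is almost certain, yet P(B|A) = 0.1
```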
Ben,
If we don't work out the correspondence (even approximately) between
FOL and term logic, this conversation would not be very fruitful. I
don't even know what you're doing with PLN. I suggest we try to work
it out here step by step. If your approach really makes sense to me,
you will gain
On 6/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Propositions are not the only things that can have truth values...
Terms in term logic can have truth values. But such terms
correspond to propositions in FOL. There is absolutely no confusion
here.
I don't have time to carry out a detailed
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:
I believe that the crisp (i.e. certain or very near certain) KR for these
domains will facilitate the use of FOL inference (e.g. subsumption) when I
need it to supplement the current Texai spreading activation techniques for
word sense
On 6/3/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Do you have any insights on how this learning will be done?
That research area is known as ILP (inductive logic programming).
It's very powerful in the sense that almost anything (eg, any Prolog
program) can be learned that way. But the problem
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
All of the work to date on program generation, macro processing,
application configuration via parameters, compilation, assembly, and program
optimization has used crisp knowledge representation (i.e. non-probabilistic
data structures).
On 6/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
eats(x, mouse)
That's a perfectly legitimate proposition. So it is perfectly OK to write:
P( eats(x,mouse) )
Note here that I assume your mouse refers to a particular instance
of a mouse, as in:
eats(X, mouse_1234)
What's confusing is:
Well, it's still difficult for me to get a handle on how your logic
works, I hope you will provide some info in your docs, re the
correspondence between FOL and PLN.
I think it's fine that you use the term atom in your own way. The
important thing is, whatever the objects that you attach
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:
YKY, how are you going to solve the natural language interface problem? You
seem to be going down the same path as CYC. What is different about your
system?
One more point:
Yes, my system is similar to Cyc in that it's logic-based. But of
Ben,
I should not say that FOL is 'the standard' of KR, but that it's
merely more popular. I think researchers ought to be free to explore
whatever they want.
Can we simply treat PLN as a black box, so you don't have to explain
its internals, and just tell us what are the input and output format?
Ben, Thanks for the answers.
One more question about the term atom used in OpenCog.
In logic an atom is a predicate applied to some arguments, for example:
female(X)
female(mary)
female(mother(john))
etc.
Truth values only apply to propositions, but they may consist of
only single
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Can you give an example of something expressed in PLN that
is very hard or impossible to express in FOL?
Mary is probably female
Not impossible, as Ben says, just awkward. The problem is that nearly every
statement has uncertain truth
On 5/18/08, Stephen Reed [EMAIL PROTECTED] wrote:
For the others on this list following my progress, the example is from a
set of essential capability descriptions that I'll use to bootstrap the
skill acquisition facility of the Texai dialog system. The
subsumption-based capability matcher
On 5/7/08, Mike Tintner [EMAIL PROTECTED] wrote:
YKY : Logic can deal with almost everything, depending on how much effort
you put in it =)
Les sanglots longs des violons de l'automne
Blessent mon cœur d'une langueur monotone.
[The long sobs of autumn's violins wound my heart with a monotonous languor.]
You don't just read those words, (and most words), you hear
Is there any standard (even informal) way of representing NL sentences in logic?
Especially complex sentences like 'John eats spaghetti with a fork' or
'The dog that chased the cat jumped over the fence', etc.
I have my own way of translating those sentences, but having a
standard would be much