On 2/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Feeding all the ambiguous interpretations of a load of sentences into
a probabilistic logic network, and letting them get resolved by
reference to each other, is a sort of search for the most likely
solution of a huge system of simultaneous
On 2/25/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
There is no good overview of SMT so far as I know, just some technical
papers... but SAT solvers are not that deep and are well reviewed in
this book...
http://www.sls-book.net/
But that's *propositional* satisfiability, the results may
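For anyone who hasn't seen one, the core DPLL procedure behind propositional SAT solvers fits in a few lines. This is a toy sketch (naive data structures, no clause learning or watched literals), not code from the book:

```python
# Minimal DPLL-style propositional SAT solver sketch.
# A clause is a list of nonzero ints; a negative int is a negated variable.

def dpll(clauses, assignment=None):
    """Return a satisfying assignment (dict var -> bool) or None."""
    if assignment is None:
        assignment = {}
    # Simplify clauses under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0)
               for l in clause if abs(l) in assignment):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                 # all clauses satisfied
    # Unit propagation: a single-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(clauses, {**assignment, abs(l): l > 0})
    # Branch on the first unassigned variable.
    v = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
model = dpll(cnf)        # a satisfying assignment for all three clauses
```

Real solvers add clause learning, heuristics, and restarts on top of this skeleton, which is where the engineering depth lies.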
On 2/15/08, Pei Wang [EMAIL PROTECTED] wrote:
To me, the following two questions are independent of each other:
*. What type of reasoning is needed for AI? The major answers are:
(A): deduction only, (B) multiple types, including deduction,
induction, abduction, analogy, etc.
*. What type
On 2/26/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Obviously, extracting knowledge from the Web using a simplistic SAT
approach is infeasible
However, I don't think it follows from this that extracting rich
knowledge from the Web is infeasible
It would require a complex system involving at
On 2/27/08, Ben Goertzel [EMAIL PROTECTED] wrote:
YKY
I thought you were talking about the extraction of information that
is explicitly stated in online text.
Of course, inference is a separate process (though it may also play a
role in direct information extraction).
I don't think the
My latest thinking tends to agree with Matt that language and common sense
are best learnt together. (Learning language before common sense
is impossible / senseless.)
I think Ben's text mining approach has one big flaw: it can only reason
about existing knowledge, but cannot generate new ideas
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
I'm going to try and elucidate my approach to building an intelligent
system, in a roundabout fashion. This is the problem I am trying to
solve.
Imagine you are designing a computer system to solve an unknown
problem, and you have these
On 2/28/08, William Pearson [EMAIL PROTECTED] wrote:
Note I want something different than computational universality. E.g.
Von Neumann architectures are generally programmable; Harvard
architectures aren't, as they can't be reprogrammed at run time.
It seems that you want to build the AGI from
I'm increasingly convinced that the human brain is not a statistical
learner, but a logical learner. There are many examples of humans
learning concepts/rules from one or two examples, rather than thousands of
examples. So I think that at a high level, AGI should be logic-based.
But it would be
On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote:
I think Ben's text mining approach has one big flaw: it can
only reason about existing knowledge, but cannot generate new ideas using
words / concepts
There is a substantial amount of literature that claims that *humans*
can't generate new
On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote:
Good example, but how about: language is open-ended, period, and capable
of infinite rather than myriad interpretations - and that open-endedness
is the whole point of it?
Simple example, much like yours: "handle". You can attach words for
objects
On 3/5/08, david cash [EMAIL PROTECTED] wrote:
In my opinion, instead of having to cherry-pick desirable and
undesirable traits in an unconscious AGI entity that we, of course, wish to
have consciousness and cognitive abilities like reasoning, deductive and
inductive logic comprehension skills,
On 3/4/08, Mark Waser [EMAIL PROTECTED] wrote:
But the question is whether the internal knowledge representation of
the AGI needs to allow ambiguities, or should we use an ambiguity-free
representation. It seems that the latter choice is better.
An excellent point. But what if the
On 3/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Rather, I think the right goal is to create an AGI that, in each
context, can be as ambiguous as it wants/needs to be in its
representation of a given piece of information.
Ambiguity allows compactness, and can be very valuable in this regard.
For those using database systems for AGI, I'm wondering if the data
retrieval rate would be a problem.
Typically we need to retrieve many nodes from the DB to do inference.
The nodes may be scattered around the DB. So it may require *many*
disk accesses. My impression is that most DBMS are
On 4/17/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:
No, you are not correct about this. All good database engines use a
combination of clever adaptive cache replacement algorithms (read: keeps
stuff you are most likely to access next in RAM) and cost-based optimization
(read: optimizes
To use an example,
If a lot of people search for "Harry Potter", then a conventional
database system would make future retrieval of the Harry Potter node
faster.
But the requirement of the inference system is such that, if Harry
Potter is fetched, then we would want *other* things that are
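To make the wanted access pattern concrete, here is a toy Python sketch of a cache that, on a miss, pulls in a node's graph neighbours as well. `backing_store` and `edges` are hypothetical stand-ins for the DB and the KB's link structure, not any real engine's API:

```python
from collections import OrderedDict

class PrefetchingCache:
    """Toy node cache: on a miss, load the node *and* its graph
    neighbours into an LRU cache, anticipating inference locality."""

    def __init__(self, backing_store, edges, capacity=4):
        self.store = backing_store          # node_id -> payload (the "disk")
        self.edges = edges                  # node_id -> list of neighbour ids
        self.capacity = capacity
        self.cache = OrderedDict()          # LRU order: oldest first
        self.disk_reads = 0

    def _install(self, key):
        if key not in self.cache:
            self.disk_reads += 1            # simulate one disk access
            self.cache[key] = self.store[key]
        self.cache.move_to_end(key)         # mark as most recently used
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.cache:
            # Miss: prefetch the node's neighbours in the same pass.
            for neighbour in self.edges.get(key, []):
                self._install(neighbour)
        self._install(key)
        return self.cache[key]

store = {"harry_potter": "node data", "wizard": "...", "hogwarts": "..."}
edges = {"harry_potter": ["wizard", "hogwarts"]}
cache = PrefetchingCache(store, edges)
cache.get("harry_potter")   # one miss: node + 2 neighbours read from "disk"
cache.get("wizard")         # already prefetched: no further disk read
```

This is essentially what clustering / index-organized storage gives you for free when related nodes are laid out together on disk.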
On 4/17/08, Mark Waser [EMAIL PROTECTED] wrote:
You *REALLY* need to get up to speed on current database systems before you
make more ignorant statements.
First off, *most* databases RARELY go to the disk for reads. Memory is
cheap and the vast majority of complex databases are generally
Hi Stephen,
Thanks for sharing this! VERY few people have experience with this stuff...
On 4/17/08, Stephen Reed [EMAIL PROTECTED] wrote:
4. I began writing my own storage engine, for a fast, space-efficient,
partitioned and sharded knowledge base, soon realizing that this was far too
big
On 4/17/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:
Again, most good database engines can do this, as it is a standard access
pattern for databases, and most databases can solve this problem multiple
ways. As an example, clustering and index-organization features in
databases address your
On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote:
Yes. RAM is *HUGE*. Intelligence is *NOT*.
Really? I will believe that if I see more evidence... right now I'm skeptical.
Also, I'm designing a learning algorithm that stores *hypotheses* in
the KB along with accepted rules. This will
On 4/18/08, Stephen Reed [EMAIL PROTECTED] wrote:
I agree with your side of the debate about whole KB not fitting into RAM. As
a solution, I propose to partition the whole KB into the tiniest possible
cached chunks, suitable for a single agent running on a host computer with
RAM resources
On 4/18/08, J. Andrew Rogers [EMAIL PROTECTED] wrote:
On Apr 17, 2008, at 3:32 PM, YKY (Yan King Yin) wrote:
Disk access rate is ~10 times faster than Ethernet access rate. IMO,
if RAM is not enough the next thing to turn to should be the harddisk.
Eh? Ethernet latency is sub-millisecond
On 4/18/08, Mark Waser [EMAIL PROTECTED] wrote:
Um. Neither side is arguing that the whole KB fit into RAM. I'm arguing
that the necessary *core* for intelligence plus enough cached chunks (as
you phrase it) to support the current thought processes WILL fit into RAM.
It's obviously ludicrous
On 4/18/08, Matt Mahoney [EMAIL PROTECTED] wrote:
What is your estimate of the quantity of all the world's knowledge? (Or the
amount needed to achieve AGI or some specific goal?)
Matt,
The world's knowledge is irrelevant to the goal of AGI. What we
need is to build a commonsense AGI and then
On 4/19/08, Richard Loosemore [EMAIL PROTECTED] wrote:
PREMISES:
(1) AGI is one of the most complicated problems in the history of
science, and therefore requires substantial funding for it to happen.
Potentially, though, massively distributed, collaborative open-source
On 4/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Though it is unlikely to do so, because collaborative open-source
projects are best suited to situations in which the fundamental ideas behind
the design have been solved.
I believe I've solved the fundamental issues behind the
On 4/19/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
we lack such a consensus. So the theorists are not working together.
Let me correct that: theorists do not need to work together; theories
can be applied anywhere. It's the *designers* who are not working
together.
YKY
On 4/19/08, Pei Wang [EMAIL PROTECTED] wrote:
Not all theoretical problems can or need to be solved by practical
testing. Also, in this field, no infrastructure is really
theoretically neutral --- OpenCog is clearly not suitable to test
all kinds of AGI theories, though I like the project, and
On 4/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:
I don't claim that the Novamente/OpenCog design is the **only** way ... but I
do
note that the different parts are carefully designed to interoperate together
in subtle ways, so replacing any one component w/ some standard system
won't work.
There is no doubt that learning new languages at an older age is much
more difficult than at a younger one. I wonder if there are some hard
computational constraints that we must observe in order for the
learning algorithm to be tractable. Perhaps sensory / linguistic
learning should be most intense
On Thu, Apr 24, 2008 at 2:20 AM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:
On Apr 22, 2008, at 11:55 PM, YKY (Yan King Yin) wrote:
There is no doubt that learning new languages at an older age is much
more difficult than at a younger one.
I seem to recall that recent research does not support
On Thu, Apr 24, 2008 at 6:22 AM, Mark Waser [EMAIL PROTECTED] wrote:
I think a person thinks in his/her first language, and when talking in
a second language there is some extra processing going on (though it
may not be exactly a translation process), which slows things down,
giving the
(I'm kind of busy with personal matters... so will be brief)
I want to know where can we have an AGI project that allows
collaboration, and is also commercial?
I think many of the other AI communities are strongly academic.
This list is slightly different in that respect.
YKY
@Stephen Reed and others:
I'm writing a prototype of my AGI in Lisp, with special emphasis on
the inductive learning algorithm. I'm looking for collaborators.
It seems that Texai is the closest to my AGI theory, so it would be
easier for us to jam. I wonder if Texai has already developed
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
Interesting that you should ask about Texai and reasoning / learning
algorithms. As you know, my initial approach to learning is learning by
being taught. Therefore I do not have much yet to offer with regard to
machine learning, learning
On 5/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
As perhaps you know, I want to organize Texai as a vast multitude of
agents situated in a hierarchical control system, grouped as possibly
redundant, load-sharing, agents within an agency sharing a specific
mission. I have given some thought to
I'm wondering if it's possible to plug in my learning algorithm to
OpenCog / Novamente?
The main incompatibilities stem from:
1. predicate logic vs term logic
2. graphical KB vs sentential KB
If there is a way to somehow bridge these gaps, it may be possible
YKY
On 5/6/08, Stephen Reed [EMAIL PROTECTED] wrote:
I believe the opposite of what you say. I hope that my following
explanation will help converge our thinking. Let me first emphasize
that I plan a vast multitude of specialized agencies, in which each
agency has a particular
Is there any standard (even informal) way of representing NL sentences in logic?
Especially complex sentences like "John eats spaghetti with a fork" or
"The dog that chased the cat jumped over the fence", etc.
I have my own way of translating those sentences, but having a
standard would be much
On 5/7/08, Matt Mahoney [EMAIL PROTECTED] wrote:
No. But it hasn't stopped people from trying.
The meaning of sentences and even paragraphs depends on context that is
not captured in logic. Consider the following examples, where a different
word is emphasized in each case:
- I didn't
On 5/7/08, Stephen Reed [EMAIL PROTECTED] wrote:
To my knowledge there is a standard style but there is of course no standard
ontology. Roughly the standard style is First Order Predicate Calculus
(FOPC) and within the linguistics community this is called logical form. For
reference see
On 5/7/08, Stephen Reed [EMAIL PROTECTED] wrote:
I have not heard about Rus form. Could you provide a link or reference?
This is one of the papers:
http://citeseer.ist.psu.edu/cache/papers/cs/22812/http:zSzzSzwww.seas.smu.eduzSz~vasilezSzictai2001.pdf/rus01high.pdf
you can find some examples
On 5/7/08, Mike Tintner [EMAIL PROTECTED] wrote:
YKY : Logic can deal with almost everything, depending on how much effort
you put into it =)
Les sanglots longs des violons de l'automne
blessent mon cœur d'une langueur monotone.
[The long sobs of autumn's violins wound my heart with a monotonous languor.]
You don't just read those words, (and most words), you hear
On 5/18/08, Stephen Reed [EMAIL PROTECTED] wrote:
For the others on this list following my progress, the example is from a
set of essential capability descriptions that I'll use to bootstrap the
skill acquisition facility of the Texai dialog system. The
subsumption-based capability matcher
Ben, Thanks for the answers.
One more question about the term atom used in OpenCog.
In logic an atom is a predicate applied to some arguments, for example:
female(X)
female(mary)
female(mother(john))
etc.
Truth values only apply to propositions, but they may consist of
only single
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Can you give an example of something expressed in PLN that
is very hard or impossible to express in FOL?
Mary is probably female
Not impossible, as Ben says, just awkward. The problem is that nearly every
statement has uncertain truth
On 6/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
eats(x, mouse)
That's a perfectly legitimate proposition. So it is perfectly OK to write:
P( eats(x,mouse) )
Note here that I assume your "mouse" refers to a particular instance
of a mouse, as in:
eats(X, mouse_1234)
What's confusing is:
Well, it's still difficult for me to get a handle on how your logic
works, I hope you will provide some info in your docs, re the
correspondence between FOL and PLN.
I think it's fine that you use the term atom in your own way. The
important thing is, whatever the objects that you attach
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:
YKY, how are you going to solve the natural language interface problem? You
seem to be going down the same path as CYC. What is different about your
system?
One more point:
Yes, my system is similar to Cyc in that it's logic-based. But of
Ben,
I should not say that FOL is the standard of KR, but that it's
merely more popular. I think researchers ought to be free to explore
whatever they want.
Can we simply treat PLN as a black box, so you don't have to explain
its internals, and just tell us what are the input and output format?
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully
Hi Ben,
Note that I did not pick FOL as my starting point because I wanted to
go against you, or be a troublemaker. I chose it because that's what
the textbooks I read were using. There is nothing personal here.
It's just like Chinese being my first language because I was born in
China. I
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
1) representing uncertainties in a way that leads to tractable, meaningful
logical manipulations. Indefinite probabilities achieve this. I'm not saying
they're the only way to achieve this, but I'll argue that single-number,
Walley-interval,
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work. Can't you see that many others have tried to use
FOL and ILP already, and they've run into
Modus ponens can be defined in a few ways.
If you take the binary logic definition:
A → B means ¬A ∨ B
you can translate this into probabilities but the result is a mess. I
have analysed this in detail but it's complicated. In short, this
definition is incompatible with probability
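The incompatibility is easy to exhibit numerically: under a made-up joint distribution, P(¬A ∨ B) can be high while P(B|A) is low. A toy sketch:

```python
# Toy sketch: material implication P(~A v B) vs. conditional P(B|A),
# over an invented joint distribution on (A, B).
joint = {
    (True, True): 0.01,
    (True, False): 0.09,   # A mostly fails to bring about B
    (False, True): 0.10,
    (False, False): 0.80,
}

# P(~A v B): probability that the material implication "A -> B" holds.
p_material = sum(p for (a, b), p in joint.items() if (not a) or b)

# P(B | A): what a probabilistic modus ponens actually needs.
p_A = sum(p for (a, _), p in joint.items() if a)
p_conditional = joint[(True, True)] / p_A

# p_material is about 0.91 (the "implication" looks well supported),
# yet p_conditional is only about 0.10: B given A is unlikely.
```

So reading A → B as P(¬A ∨ B) lets the rule score high even when the conditional it is supposed to license is weak, which is one way of seeing the "mess".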
Ben,
If we don't work out the correspondence (even approximately) between
FOL and term logic, this conversation would not be very fruitful. I
don't even know what you're doing with PLN. I suggest we try to work
it out here step by step. If your approach really makes sense to me,
you will gain
On 6/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Propositions are not the only things that can have truth values...
Terms in term logic can have truth values. But such terms
correspond to propositions in FOL. There is absolutely no confusion
here.
I don't have time to carry out a detailed
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:
I believe that the crisp (i.e. certain or very near certain) KR for these
domains will facilitate the use of FOL inference (e.g. subsumption) when I
need it to supplement the current Texai spreading activation techniques for
word sense
On 6/3/08, Matt Mahoney [EMAIL PROTECTED] wrote:
Do you have any insights on how this learning will be done?
That research area is known as ILP (inductive logic programming).
It's very powerful in the sense that almost anything (e.g., any Prolog
program) can be learned that way. But the problem
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
All of the work to date on program generation, macro processing,
application configuration via parameters, compilation, assembly, and program
optimization has used crisp knowledge representation (i.e. non-probabilistic
data structures).
On 6/23/08, William Pearson [EMAIL PROTECTED] wrote:
The base beliefs shared between the group would be something like
- The entities will not have goals/motivations inherent to their
form. That is, robots aren't likely to band together to fight humans,
or try to take over the world for
I'm considering nonmonotonic reasoning using Bayes net, and got stuck.
There is an example on p. 483 of J. Pearl's 1988 book Probabilistic
Reasoning in Intelligent Systems:
Given:
birds can fly
penguins are birds
penguins cannot fly
The desideratum is to conclude that penguins are birds, but penguins
cannot fly.
Pearl translates the KB
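One simple way to get the desired behaviour (independent of Pearl's exact construction, which the quote above cuts off) is to let the most specific applicable class's default win. A toy sketch with invented numbers:

```python
# Sketch of specificity-based default reasoning (not Pearl's exact
# construction): the most specific class with a fly-default decides.
class_parent = {"penguin": "bird", "bird": "animal"}   # is-a links
p_fly = {"bird": 0.9, "penguin": 0.0}                  # class defaults

def prob_flies(cls):
    """Walk up the is-a chain; the first class with a fly-default wins."""
    while cls is not None:
        if cls in p_fly:
            return p_fly[cls]
        cls = class_parent.get(cls)
    return None  # no default applies

print(prob_flies("penguin"))  # 0.0 (penguin's own default overrides bird's)
print(prob_flies("bird"))     # 0.9
```

Penguins remain birds via the is-a link, yet inherit none of the bird flying default, which is the nonmonotonic behaviour wanted.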
On 7/5/08, Pei Wang [EMAIL PROTECTED] wrote:
Though there is a loop, YKY's problem is not caused by circular
inference, but by multiple inheritance, that is, different
inference paths give different conclusions. This is indeed a problem
in Bayes net, and there is no general solution in that
Here is an example of a problematic inference:
1. Mary has cybersex with many different partners
2. Cybersex is a kind of sex
3. Therefore, Mary has many sex partners
4. Having many sex partners → high chance of getting STDs
5. Therefore, Mary has a high chance of STDs
What's wrong with
On 7/28/08, Pei Wang [EMAIL PROTECTED] wrote:
Every rule is general to a degree, which means it ignores
exceptions. It is simply impossible to list all exceptions for any
given rule. This issue has been discussed by many people in the
non-monotonic logic community.
The solution is not to
On 7/28/08, Mike Tintner [EMAIL PROTECTED] wrote:
Mary says Clinton had sex with her.
Clinton says he wouldn't call that sex.
LOL...
But your examples are still symbolic in nature. I don't see why they
can't be reasoned via logic.
In the above example the concept "sex" may be a fuzzy concept.
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Your inference trajectory assumes that cybersex and STD are
probabilistically independent within sex but this is not the case.
We only know that:
P(sex | cybersex) = high
P(STD | sex) = high
If we're also given that
P(STD | cybersex)
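The point can be made concrete with an invented joint distribution in which cybersex always counts as sex, yet never leads to STDs:

```python
# Toy sketch: high P(sex|cyber) and high-ish P(STD|sex) do not force
# a high P(STD|cyber). Joint over (cyber, sex, std); all numbers invented.
joint = {}
def add(cyber, sex, std, prob): joint[(cyber, sex, std)] = prob

# Population: 50% cybersex users; cybersex counts as sex (so sex=True),
# but STD only ever follows from the non-cyber cases.
add(True,  True,  False, 0.50)   # cyber -> sex, but no STD
add(False, True,  True,  0.40)   # physical sex -> STD
add(False, True,  False, 0.05)
add(False, False, False, 0.05)

def p(pred):
    return sum(q for k, q in joint.items() if pred(*k))

def cond(pred, given):
    return p(lambda *k: pred(*k) and given(*k)) / p(given)

p_sex_given_cyber = cond(lambda c, s, d: s, lambda c, s, d: c)   # 1.0
p_std_given_sex   = cond(lambda c, s, d: d, lambda c, s, d: s)   # ~0.42
p_std_given_cyber = cond(lambda c, s, d: d, lambda c, s, d: c)   # 0.0
```

Steps 1-2 and 4 of the original argument hold in this distribution, yet the conclusion at step 5 fails completely: the independence assumption did all the work.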
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
PLN uses confidence values within its truth values, with a different
underlying semantics and math than NARS; but that doesn't help much with the
above problem...
There is a confidence-penalty used in PLN whenever an independence assumption
On 7/28/08, Pei Wang [EMAIL PROTECTED] wrote:
A new version of NARS (Open-NARS 1.1.0)...
I'm writing a paper on a probabilistic-fuzzy logic that is suitable
for AGI. It uses some of your ideas. I will put it on the net when
it's finished...
YKY
---
On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:
There's nothing wrong with the logical argument. What's wrong is that you
are presuming a purely declarative logic approach can work...which it can in
extremely simple situations, where you can specify all necessary facts.
My belief about
On 7/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
Why isn't science done via logic? Why don't physicists, chemists,
biologists, psychologists and sociologists just use logic to find out about
the world? Do you see why? And bear in mind that scientists are only formal
representatives of every
On 7/29/08, Charles Hixson [EMAIL PROTECTED] wrote:
This is true, but the logic statements of the model are rather different
than simple assertions, much more like complex statements specifying
proportional relationships and causal links. I envision the causal links
as being at statements
On 7/29/08, Mike Tintner [EMAIL PROTECTED] wrote:
YKY: The key word here is model. If you can reason with mental models,
then of course you can resolve a lot of paradoxes in logic. This
boils down to: how can you represent mental models? And they seem to
boil down further to logical
On 7/29/08, Benjamin Johnston [EMAIL PROTECTED] wrote:
I see the failure in this argument at step 2. Cybersex is a kind of erotic
interaction. Erotic interactions are often called sex in general
conversation, even though there are many kinds of erotic interactions that
don't result in the
On 8/5/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Yes, but in PLN/ OpenCogPrime backward chaining *can* create hypothetical
logical relationships and then seek to estimate their truth values
See this page
http://opencog.org/wiki/OpenCogPrime:IntegrativeInference
and the five pages linked to
On 7/31/08, Mark Waser [EMAIL PROTECTED] wrote:
Categorization depends upon context. This was pretty much decided by the
late 1980s (look up Fuzzy Concepts).
This is an important point so I don't want to miss it. But I can't think of
a very good example of context-dependence of concepts.
On 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:
As I understand it, FOL is only Turing complete when
predicates/relations/functions beyond the ones in the data are
allowed. Would PLN naturally invent predicates, or would it need to be
told to specifically? Is this what concept creation does?
On 8/5/08, Mike Tintner [EMAIL PROTECTED] wrote:
Jeez, there is NO concept that is not dependent on context. There is NO
concept that is not infinitely fuzzy and open-ended in itself, period -
which is the principal reason why language is and has to be grounded
(although that needs
On 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:
Prolog (and logic programming) is Turing complete, but FOL is not a
programming language so I'm not sure.
You are right, I should have said FOL is Turing complete within the
right inference system [such as Prolog], but only when
On 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
There is one common feature to all chairs: They are for the purpose of
sitting on. I think it is important that this is *not* a visual
characteristic.
It is possible to recognize chairs that cannot be sat on -- for
example, a broken chair, a
On 8/6/08, Jim Bromer [EMAIL PROTECTED] wrote:
You made some remarks (I did not keep a record of them) that sound
similar to some of the problems of conceptual complexity (or
complicatedness) that I am interested in. Can you describe something
of what you are working on in a little more
Ben, BTW, you may try inviting Stephen Muggleton to AGI'09. He
actually talked to me a few times even though I knew very little
about ILP at that time. According to the wikipedia page he is
currently working on an 'artificial scientist'.
http://en.wikipedia.org/wiki/Stephen_Muggleton
YKY
Hi Ben,
Hope you don't mind providing more clarification...
In first-order logic there may be a rule such as:
male(X) ∧ unmarried(X) → bachelor(X)
We can convert this to a probabilistic rule:
P(bachelor(X) = true | male(X) = true, unmarried(X) = true ) = 1.0
but note that this rule
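A toy sketch of how such a rule behaves once probabilized (the fact base and helper names are invented for illustration). Note in particular that a failed premise leaves bachelor(x) *undetermined* rather than false, which is one way the conditional reading differs from material implication:

```python
# Toy sketch: a crisp FOL rule recast as a conditional-probability rule.
# Rule strength 1.0 recovers the crisp case; <1.0 gives a soft rule.
facts = {("male", "john"): True, ("unmarried", "john"): True,
         ("male", "pete"): True, ("unmarried", "pete"): False}

def rule_bachelor(x, strength=1.0):
    """P(bachelor(x)) under the rule male(x) ^ unmarried(x) -> bachelor(x)."""
    if facts.get(("male", x)) and facts.get(("unmarried", x)):
        return strength      # both premises hold: the rule fires
    return None              # rule says nothing about x

print(rule_bachelor("john"))        # 1.0
print(rule_bachelor("pete"))        # None (premise fails; not "false")
print(rule_bachelor("john", 0.95))  # soft version of the same rule
```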
On 8/12/08, Ben Goertzel [EMAIL PROTECTED] wrote:
construct 1 =
ImplicationLink
    ANDLink
        PredicateNode isMale
        PredicateNode isUnmarried
    PredicateNode isBachelor
It's just a relationship between functions (predicates being mathematical
functions from entities to truth
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:
Reading this, I get the view of ai as basically neural networks, where
each individual perceptron could be any of a number of algorithms
(decision tree, random forest, svm etc).
I also get the view that academics such as Hinton are trying
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:
Thanks for replying YKY
Is the logic learning you are talking about inductive logic programming?
If so, isn't ILP basically a search through the space of logic programs (I
may be way off the mark here!), wouldn't it be too large of a search
On 8/13/08, Ben Goertzel [EMAIL PROTECTED] wrote:
But if one doesn't need to get into implementation details, in the
simplest case one just has
VariableScopeLink X
    ImplicationLink
        ANDLink
            InheritanceLink X male
            InheritanceLink X Unmarried
        InheritanceLink X bachelor
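For readers unfamiliar with the notation, the nested structure can be mirrored with a toy atom datastructure. This is a sketch of the data layout only, with invented helpers and a one-number truth value; it is not the real OpenCog API:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Atom:
    """Toy atom: either a named node or a typed link over other atoms."""
    type: str
    name: str = ""                       # set for nodes
    outgoing: Tuple["Atom", ...] = ()    # set for links
    tv: float = 1.0                      # simplistic one-number truth value

def node(t, name): return Atom(t, name=name)
def link(t, *out, tv=1.0): return Atom(t, outgoing=tuple(out), tv=tv)

X = node("VariableNode", "$X")
impl = link("VariableScopeLink", X,
            link("ImplicationLink",
                 link("ANDLink",
                      link("InheritanceLink", X, node("ConceptNode", "male")),
                      link("InheritanceLink", X, node("ConceptNode", "unmarried"))),
                 link("InheritanceLink", X, node("ConceptNode", "bachelor")),
                 tv=0.99))
```

The key structural point is that links take other atoms (including links) in their outgoing set, so the whole rule is a single hypergraph fragment rather than a sentence.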
On 9/1/08, Benjamin Johnston [EMAIL PROTECTED] wrote:
Thanks for your comments =)
--
1. Why just P, Z, and B?
Three mechanisms seems somewhat arbitrary - I think you need to make a very
compelling case for why there are three and only three mechanisms.
Or, more
On 9/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
About indefinite/imprecise probabilities, you dismiss them as
overcomplicated, but you don't address the reason they were introduced in the
first place: In essence, to allow a rationally manipulable NARS-like
confidence measure that works
On 9/2/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
NARS confidence is not exactly derived from probability, but is
compatible with probability.
Sorry, I meant, the definition of NARS confidence is compatible with
probability, but NARS confidence, as used in NARS, defies probability
laws.
A somewhat revised version of my paper is at:
http://www.geocities.com/genericai/AGI-ch4-logic-9Sep2008.pdf
(sorry it is now a book chapter and the bookmarks are lost when extracting)
On Tue, Sep 2, 2008 at 7:05 PM, Pei Wang [EMAIL PROTECTED] wrote:
I intend to use NARS confidence in a way
On Tue, Sep 2, 2008 at 12:05 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
but in a PLN approach this could be avoided by looking at
IntensionalInheritance B A
rather than extensional inheritance..
The question is how do you know when to apply the intensional
inheritance, instead of the
On Tue, Sep 9, 2008 at 4:27 AM, Pei Wang [EMAIL PROTECTED] wrote:
Sorry I don't have the time to type a detailed reply, but for your
second point, see the example in
http://www.cogsci.indiana.edu/pub/wang.fuzziness.ps , page 9, 4th
paragraph:
If these two types of uncertainty [randomness and
BTW, if any AGI projects would like to incorporate my ideas, feel free
to do so, and I'd like to get involved too!
YKY
---
On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski [EMAIL PROTECTED] wrote:
Speaking of my BPZ-logic...
2. Good at quick-and-dirty reasoning when needed
Right now I'm focusing on quick-and-dirty *only*. I wish to make the
logic's speed approach that of Prolog (which is a fast inference
algorithm
On Thu, Sep 18, 2008 at 4:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:
Small question... aren't Bayesian network nodes just _conditionally_
independent: so that set A is only independent from set B when
d-separated by some set Z? So please clarify, if possible, what kind
of independence you
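Kingma's point is easy to verify numerically on the chain A → Z → B: A and B are dependent marginally but independent once conditioned on Z (d-separation by {Z}). A small sketch with invented conditional probability tables:

```python
import itertools

# Chain A -> Z -> B with invented CPTs.
pA = {True: 0.3, False: 0.7}
pZ_given_A = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
pB_given_Z = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}

def joint(a, z, b):
    return pA[a] * pZ_given_A[a][z] * pB_given_Z[z][b]

def p(pred):
    return sum(joint(a, z, b) for a, z, b in
               itertools.product([True, False], repeat=3) if pred(a, z, b))

# Marginally: P(B|A) != P(B|~A), so A and B are dependent.
pB_a  = p(lambda a, z, b: a and b) / p(lambda a, z, b: a)            # 0.73
pB_na = p(lambda a, z, b: (not a) and b) / p(lambda a, z, b: not a)  # 0.24

# Given Z=True: P(B|A,Z) == P(B|~A,Z): conditionally independent.
pB_az  = p(lambda a, z, b: a and z and b) / p(lambda a, z, b: a and z)
pB_naz = p(lambda a, z, b: (not a) and z and b) / p(lambda a, z, b: (not a) and z)
```

So any independence assumption an inference scheme makes has to be stated relative to a conditioning set, not asserted outright.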
On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
Prolog is not fast; it is painfully slow for complex inferences due to using
backtracking as a control mechanism
The time-complexity issue that matters for inference engines is
inference-control ... i.e. dampening the
On Tue, Sep 23, 2008 at 6:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
I'm in the process of reading this paper:
http://www.jair.org/papers/paper1410.html
It might answer a couple of your questions. And, it looks like it has
an interesting proposal about generating heuristics from the
On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED] wrote:
No transfer? This paper suggests otherwise:
http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf
Well, people know that propositional SAT is fast, so
propositionalization is a tempting heuristic, but as the paper's