Hi,
Currently Ben's Novamente is among the most mature and promising AGIs
out there, which I think is no small accomplishment. But still, it is
not yet clear that NM will be the *ultimate* winner, if we take
into consideration the entry of the big guys (e.g. Microsoft, Google,
DARPA, etc.).
4) So, the question is not whether DARPA, M$ or Google will enter the
AI race -- they are there. The question is whether they will adopt a
workable approach and put money behind it. History shows that large
organizations often fail to do so, even when workable approaches exist,
allowing
Hi,
The question is whether your work can be duplicated after your initial
success, and how hard that is.
It certainly could be duplicated, but once we demonstrate enough
success that everyone wants to copy us,
then we will be able to raise all the $$ we want and hire all the great
Hi all,
This doesn't really showcase Novamente's learning ability very much --
it's basically a smoke test of the integration of Novamente probabilistic
learning with the AGISim sim world -- an integration which we've had
sorta working
for a while but has had a lot of kinks needing
Bob Mottram wrote:
It's difficult to judge how impressive or otherwise such demos are,
since it would be easy to produce an animation of this kind with
trivial programming. What are we really seeing here? How much does
the baby AGI know about fetching before it plays the game, and how
What's the size of the space NM is searching for this plan?
Well, that really depends on how you count things.
One way to calculate it would be to look at the number of trees with 15
nodes, with say 20 possibilities at each node.
Because in practice the plans it comes up with, with
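To make the "trees with 15 nodes, 20 possibilities at each node" estimate concrete, here is a rough back-of-envelope sketch. It assumes binary tree shapes counted by Catalan numbers and independent label choices per node; the figures (15 nodes, 20 primitives) are just the ones mentioned above, so this is an illustrative bound, not Novamente's actual accounting.

```python
from math import comb

def catalan(n):
    # n-th Catalan number: counts binary tree shapes with n nodes
    return comb(2 * n, n) // (n + 1)

# Rough bound on the plan space: tree shapes with 15 nodes,
# each node labeled with one of ~20 primitives (shapes x labelings).
shapes = catalan(15)          # 9,694,845 distinct shapes
space = shapes * 20 ** 15     # ~3.2e26 candidate labeled trees
print(f"{shapes} shapes, ~{space:.2e} labeled trees")
```

Even this crude count shows why the plans actually found must come from a heavily pruned search rather than enumeration.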
Well, Novamente LLC submitted a proposal to this that was rejected.
My impression was that most of the recipients of the BICA funding did
not have credible AGI approaches, and many were in fact relatively new
to the AGI problem.
The recipients were by and large smart scientists with
J. Storrs Hall, PhD. wrote:
On Tuesday 13 March 2007 22:34, Ben Goertzel wrote:
J. Storrs Hall, PhD. wrote:
On Tuesday 13 March 2007 20:33, Ben Goertzel wrote:
I am confused about whether you are proposing a brain model or an AGI
design.
I'm working with a brain model
Chuck,
Regarding AGI tests, this is something I've thought about a bit because
some people from the nonprofit world have told me they felt it would be
relatively easy to raise money for some kind of AGI Prize, similar to the
Hutter Prize or the Methuselah Mouse Prize.
However, I thought about
It is a trite point, but I can't help repeating that, given how very
little we know about the
brain's deeper workings, these estimates of the brain's computational
and memory capability
are all kinda semi-useless...
I think that brain-inspired AGI may become very interesting in 5-20
years
Eugen Leitl wrote:
On Wed, Mar 14, 2007 at 09:12:55AM -0400, Ben Goertzel wrote:
It is a trite point, but I can't help repeating that, given how very
little we know about the
brain's deeper workings, these estimates of the brain's computational
Not to belabor the point
In my thinking I've dropped the neural inspiration and everything is in terms
of pure math. Each module (probably better to drop that term, since it's
ambiguous and confusing; let's use IAM, interpolating associative memory,
instead) is simply a relation, a set of points in N-space, with an
(which are
dead simple by comparison). But there's a new field of reactive programming
that imports just enough control from programming languages to the always
active paradigm that it looks tractable. And it cuts the size of programs in
half.
Josh
On Wednesday 14 March 2007 18:07, Ben Goertzel
But the bottom line problem for using FOPC (or whatever) to represent the
world is not that it's computationally incapable of it -- it's Turing
complete, after all -- but that it's seductively easy to write propositions
with symbols that are English words and fool yourself into thinking
Numeric vectors are strictly more powerful as a representation than
predicates.
This is not really true...
A set of vectors is a relation, which is a predicate; I can do
any logical operation on them (given, e.g. a term constructor that is simply
a hash function along an axis). But they
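The "set of vectors is a relation, which is a predicate" point can be illustrated concretely. A minimal sketch, assuming a small finite discrete universe for illustration (the hash-function term constructor is omitted): membership is truth, and the Boolean connectives become set operations.

```python
from itertools import product

# A "relation" is a set of points (tuples) in N-space; membership = truth.
universe = set(product(range(4), repeat=2))      # finite 2-D grid universe

P = {(x, y) for x, y in universe if x < y}       # predicate P(x, y): x < y
Q = {(x, y) for x, y in universe if x + y == 3}  # predicate Q(x, y): x + y == 3

p_and_q = P & Q          # conjunction = intersection
p_or_q = P | Q           # disjunction = union
not_p = universe - P     # negation = complement within the universe

assert (0, 3) in p_and_q    # 0 < 3 and 0 + 3 == 3
assert (2, 1) in not_p      # not (2 < 1)
```

Over a finite universe the two views coincide exactly; the interesting questions arise with interpolation and continuous spaces.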
Hi,
Yes, copycat simulated a metric in an ad hoc way, because it lacked a
robust way of measuring and utilizing uncertainty...
I am unsure (heh heh) what uncertainty has to do with it. CC got a fixed,
completely known problem. It could only construct valid interpretations, so
there
I'm working on the assumption that a basic, simple, universal, FAST capability
to do analogical quadrature in each module (read: the chunk of brain that
owns each concept) working all at once and all together is what can make this
possible.
I basically agree with this, actually.
In
J. Storrs Hall, PhD. wrote:
On Tuesday 13 March 2007 20:33, Ben Goertzel wrote:
I am confused about whether you are proposing a brain model or an AGI
design.
I'm working with a brain model for inspiration, but I speculate that once we
understand what it's doing we can squeeze a few
These terms need to be used carefully...
Evolutionary algorithms, as a learning technique, are sometimes a good
solution ... though in NM we don't use any classical evolutionary
algorithms, relying instead on a customized version of MOSES (see
www.metacog.org for a description of the general
YKY (Yan King Yin) wrote:
Hi John,
Re your idea that there should be an intermediate-level representation:
1. Obviously, we do not currently know how the brain stores that
representation. Things get insanely complex as neuroscientists
go higher up the visual pathways from the primary
YKY (Yan King Yin) wrote:
On 3/11/07, Ben Goertzel [EMAIL PROTECTED]
wrote:
All this is perfectly useful stuff, but IMO is not in itself sufficient
for an AGI design. The basic problem is that there are many tasks
important for intelligence, for which
3. Give an example of a task where logical inference is inefficient? ;)
I mentioned your question to my wife and she responded: Well, how about
mathematical theorem-proving? ;-)
Quite apropos...
In fact we may refine the retort as: Well, how about proving nontrivial
theorems in
YKY (Yan King Yin) wrote:
On 3/12/07, Ben Goertzel [EMAIL PROTECTED]
wrote:
Natural concepts in the mind are ones for which inductively learned
feature-combination-based classifiers and logical classifiers give
roughly the same answers...
1. The feature-combination
Hi all,
If you have 2.5 minutes or so to spare, my 13-year-old son Zebulon has
made another Singularity-focused mini-movie:
http://www.zebradillo.com/AnimPages/The%20Shtinkularity.html
This one is not as deep as RoboTurtle II, his 14-minute
Singularity-meets-Elvis epic from a year ago or so
Mark Waser wrote:
In the Novamente design this is dealt with via a currently
unimplemented aspect of the design called the internal simulation
world. This is a very non-human-brain-like approach
Why do you believe that this is a very non-human-brain-like
approach? Mirror neurons and many
YKY (Yan King Yin) wrote:
I agree with Ben and Pei etc on this issue. Narrow AI is VERY
different from general AI. It is not at all easy to integrate several
narrow AI applications into a single, functioning system. I have never
heard of something like this being done, even for two computer
Hi Bob,
Is there a document somewhere describing what is unique about your approach?
Novamente doesn't involve real robotics right now but the design does
involve occupancy grids and probabilistic simulated robotics, so your
ideas are of some practical interest to me...
Ben
Bob Mottram
for reasoning under uncertainty, are actually
more critical to its
general intelligence, as they have subtler and more thoroughgoing
synergies with other tools
that help give rise to important emergent structures/dynamics.
-- Ben Goertzel
-
This list is sponsored by AGIRI: http://www.agiri.org/email
functions
accurately reflect the characteristics of the sensors being used (in
this case stereo cameras).
On 06/03/07, *Ben Goertzel* [EMAIL PROTECTED] wrote:
Hi Bob,
Is there a document somewhere describing what is unique about your
approach
-replacement updates... Qualitatively this
seems like the sort of thing the brain must be doing, and the kind of
thing any AI system must do to cope with a rapidly changing environment...
Ben
On 06/03/07, *Ben Goertzel* [EMAIL PROTECTED] wrote:
Thanks
for some purposes, but
is more foundationally represented as a dynamic
configuration of nodes and links that habitually become important
together in certain contexts...]
There are plenty other examples too, but that will suffice for now...
-- Ben
- Original Message - From: Ben Goertzel
Mark Waser wrote:
Just polynomially expensive, I believe
Depends upon whether you're fully connected or not but yeah, yeah . . . .
Another, simpler example is indexing items via time and space: you
need to be able to submit a spatial and/or temporal region as a query
and find items relevant
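A minimal sketch of the kind of index being described, assuming a toy grid-bucket scheme rather than anything Novamente actually uses: items are keyed by spatial cell and time slot, and a region query scans only the overlapping buckets.

```python
from collections import defaultdict

class SpaceTimeIndex:
    """Toy bucket index: items keyed by (grid cell, time slot).
    Region queries scan only the buckets the region overlaps."""
    def __init__(self, cell=10.0, slot=60.0):
        self.cell, self.slot = cell, slot
        self.buckets = defaultdict(list)

    def insert(self, item, x, y, t):
        key = (int(x // self.cell), int(y // self.cell), int(t // self.slot))
        self.buckets[key].append(item)

    def query(self, x0, x1, y0, y1, t0, t1):
        hits = []
        for cx in range(int(x0 // self.cell), int(x1 // self.cell) + 1):
            for cy in range(int(y0 // self.cell), int(y1 // self.cell) + 1):
                for ct in range(int(t0 // self.slot), int(t1 // self.slot) + 1):
                    hits.extend(self.buckets.get((cx, cy, ct), []))
        return hits

idx = SpaceTimeIndex()
idx.insert("ball seen", x=12.0, y=3.0, t=95.0)
idx.insert("bell heard", x=80.0, y=3.0, t=95.0)
print(idx.query(10, 20, 0, 10, 60, 120))
```

Note this returns candidates at bucket granularity; a production index would refine against exact coordinates and handle the "rapidly evolving" case with incremental bucket updates.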
What about the AGIs that people are building or working towards, such
as those from Novamente, AdaptiveAI, Hall, etc.? Do/Will your systems
have sleep periods for internal maintenance and improvement? If so,
what types of activities do they perform during sleep?
Novamente doesn't need to
and maintaining indexes
Yep...
or are you just calculating index values for a single use and then
discarding them?
Mark
- Original Message - From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, February 21, 2007 7:51 PM
Subject: **SPAM** Re: [agi
that are rapidly evolving and need
to support complex queries rapidly...
-- Ben
-- Ben
- Original Message - From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, February 21, 2007 8:13 PM
Subject: **SPAM** Re: [agi] Development Environments for AI (a few
non
modification.
OK ... but if an RDB is not the right data container to use, then what
advantage is there in running your C# code (including e.g. your graph
DB) in a SQL engine???
- Original Message - From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday
just think that you've missed out on some opportunities to focus even
more on what matters instead of re-inventing some wheels.
- Original Message - From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 6:34 PM
Subject: **SPAM** Re: **SPAM** Re: [agi
Mark Waser wrote:
So from the below, can I take it that you're a big proponent of
the .NET framework since database access is built into the
framework *AND* the framework is embedded in the database?
The crazy like a fox way to build an AGI may very well be to
write it in the .NET
Richard Loosemore wrote:
Ben Goertzel wrote:
It's pretty clear that humans don't run FOPC as a native code, but
that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns
is essentially equivalent to basic probabilistic term logic.
Lower
Also, why would 32 - 64 bit be a problem, provided you planned for
it in advance?
Name all the large, long-term projects that you know of that *haven't*
gotten bitten by something like this. Now, name all of the large,
long-term projects that you know of that HAVE gotten bitten repeatedly
J. Storrs Hall, PhD. wrote:
On Sunday 18 February 2007 19:22, Ricardo Barreira wrote:
You can spend all the time you want sharpening your axes; it'll do you
no good if you don't know what you'll use them for...
True enough. However, as I've also mentioned in this venue before, I want
It's pretty clear that humans don't
run FOPC as a native code, but that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic
term logic.
Lower-level common-sense inferencing of the
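A minimal sketch of the claimed correspondence, with all names illustrative: a Hebbian co-activation tally between two "columns" A and B yields the frequency estimate n(A and B)/n(A) of P(B|A), which plays the role of the strength of a term-logic inheritance link A -> B.

```python
from collections import Counter

# Illustrative only: co-activation episodes for two "columns" A and B.
# The Hebbian tally n(A and B) / n(A) estimates P(B | A), i.e. the
# strength of a probabilistic term-logic link A -> B.
episodes = [
    {"A", "B"}, {"A", "B"}, {"A"}, {"B"}, {"A", "B"}, {"A"},
]

counts = Counter()
for active in episodes:
    if "A" in active:
        counts["A"] += 1           # presynaptic activity
        if "B" in active:
            counts["A&B"] += 1     # co-activation strengthens the link

strength = counts["A&B"] / counts["A"]   # 3 of 5 A-episodes included B
print(f"link A -> B strength: {strength:.2f}")
```

The count-of-counts (how many A-episodes there were) also supplies a natural confidence value, which is where the two-component truth values discussed elsewhere in this thread come in.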
Aki Iskandar wrote:
Hello -
I'm new on this email list. I'm very interested in AI / AGI - but do
not have any formal background at all. I do have a degree in Finance,
and have been a professional consultant / developer for the last 9
years (including having worked at Microsoft for almost
BTW: I really loved Haskell when I used it in the 90's, and if there
were a rip-roaring fast SMP Haskell implementation with an effective
customizable garbage collector, Novamente would probably be written in
Haskell.
But, there is not, and so Novamente is written in C++ ... but
In Abraham Lincoln's case I think it makes sense, since he already
knows how he'll use the axe. I doubt that most people who are worrying
about which language they'll use actually have a good idea of how to
actually design an AGI...
You can spend all the time you want sharpening your axes,
gts wrote:
LEADING TO THE ONLY THING REALLY INTERESTING ABOUT THIS DISCUSSION:
What interests me is that the Principle of Indifference is taken for
granted by so many people as a logical truth when in reality it is
fraught with logical difficulties.
I think it's been a pretty long time
-- Assume there will be persistent objects in the 3D space
This is not innate. Babies don't recognize that when an object is hidden from
view it still exists.
I'm extremely familiar with the literature on object permanence; and the
truth seems to be that babies
**do** have
Eric Baum wrote:
Ben And, I'm not thinking to use such a list as the basis for
Ben creating an AGI, but simply as a tool for assisting in thinking
Ben about an already-existing AGI design that was created based on
Ben other principles. My suspicion is that all the known and
Ben powerful human
Peter Voss wrote:
... various comments ...
It's more fundamental than that: the design of your 'senses' - what feature
extraction, sampling and encoding you provide - lays a primary foundation
for induction.
Peter
That is definitely true, and is PART of what I meant by saying that the
But I should clarify-- I don't mean the final routines are explicitly
coded in exactly. The genomic code runs, interacts with data in
the sensory stream, and produces the mental structures reflecting
the routines. That's how it evolves, because as the genome is being
mutated, what survives
Indeed, that is a cleaner and simpler argument than the various more
concrete PI paradoxes... (wine/water, etc.)
It seems to show convincingly that the PI cannot be consistently applied
across the board, though it can be heuristically applied to certain cases
and not others, as judged contextually
Hi,
In a recent offlist email dialogue with an AI researcher, he made the
following suggestion regarding the inductive bias that DNA supplies
to the human brain to aid it in learning:
*
What is encoded in the DNA may include a starting ontology (as proposed,
with exasperating vagueness, by
Matt Mahoney wrote:
I don't think there is a simple answer to this problem. We observe very
complex behavior in much simpler organisms that lack long term memory or the
ability to learn. For example, bees are born knowing how to fly, build hives,
gather food, and communicate its location.
Eliezer,
Ben, is the indefinite probability approach compatible with local
propagation in graphical models?
Hmmm... I haven't thought about this before, but on first blush, I don't
see any reason why you couldn't locally propagate indefinite
probabilities through a Bayes net...
We
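As a sanity check on the local-propagation question, here is a minimal interval-arithmetic sketch, not Novamente's actual indefinite-probability algebra (which carries extra parameters such as b and k): propagating [lo, hi] probability bounds through one node of a Bayes net. Since P(B) is linear in each input, the bounds are attained at endpoint combinations.

```python
def propagate_chain(p_a, p_b_given_a, p_b_given_not_a):
    """Propagate interval probabilities (lo, hi) through B's CPT:
    P(B) = P(B|A)P(A) + P(B|~A)(1 - P(A)).
    The expression is linear in each argument, so checking the
    endpoint combinations yields exact bounds."""
    candidates = []
    for pa in p_a:
        for pba in p_b_given_a:
            for pbna in p_b_given_not_a:
                candidates.append(pba * pa + pbna * (1 - pa))
    return (min(candidates), max(candidates))

lo, hi = propagate_chain((0.6, 0.8), (0.9, 1.0), (0.1, 0.2))
print(lo, hi)   # P(B) lies in roughly [0.58, 0.84]
```

Repeating this node by node is exactly local propagation; the open question the thread raises is whether the intervals stay usefully tight over long inference chains.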
On Feb 9, 2007, at 3:26 AM, Eugen Leitl wrote:
On Thu, Feb 08, 2007 at 11:03:38PM -0500, Ben Goertzel wrote:
But, Novamente is certainly architected to take advantage of their
1000-qubit version for various tasks, when it comes out... ;-)
Which part of it is massively parallel
On Feb 9, 2007, at 11:01 AM, gts wrote:
On Wed, 07 Feb 2007 18:37:52 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
This is simply a re-post of my prior post, with corrected
terminology, but unchanged substance:
Thanks! Very helpful.
Now that you have a better understanding of dutch
Thus, to be coherent, we need to ensure that our beliefs fit
together (logically).
Yes, but for a cognitive system to make its beliefs about a large
number of complex and interdependent statements fit
together (logically) requires infeasibly much computing power.
The brain doesn't do
On Feb 9, 2007, at 4:31 PM, Pei Wang wrote:
This is a better example of conjunction fallacy than the Linda
example (again, I don't think the latter is a fallacy), but still,
there are issues in the mainstream explanation:
Pei: If the Linda example** is presented in the context of
I don't think a betting situation will be different for this case. To
really avoid intensional considerations, experiment instructions
should be so explicit that the human subjects know exactly what they
are asked to do.
I believe that, in the betting versions of the Linda-fallacy experiment,
On Feb 8, 2007, at 8:59 AM, gts wrote:
On Wed, 07 Feb 2007 20:40:27 -0500, Charles D Hixson
[EMAIL PROTECTED] wrote:
I suspect you of mis-analyzing the goals and rewards of casino
gamblers. I'm not sure whether or not this speaks to the points
that you are attempting to raise, but it
On Feb 8, 2007, at 9:16 AM, gts wrote:
On Wed, 07 Feb 2007 16:51:18 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
In fact I have been thinking about how one might attempt a dutch
book against Novamente involving your multiple component values,
but I do not yet fully understand b. My
On Feb 8, 2007, at 9:37 AM, gts wrote:
On Thu, 08 Feb 2007 09:26:28 -0500, Pei Wang [EMAIL PROTECTED]
wrote:
In simple cases like the above one, an AGI should achieve coherence
with little difficulty. What an AGI cannot do is guarantee
coherence in all situations, which is impossible
Changing the subject slightly: the optimal use of probabilities is
NOT always the best foundation for action.
I say this because of a news report I heard a few months back (on
NPR: sorry, I can't remember the reference), about a math student
who was very bright, and whose professor
On Feb 8, 2007, at 11:39 AM, gts wrote:
re: the right order of definition
De Finetti's (and Ramsey's) main contribution was in showing that
the formal axioms of probability can be derived entirely from
considerations about people betting on their subjective beliefs
under the relatively
Actually, conjunction fallacy is probably going to be one of the
most difficult of all biases to eliminate; it may even be provably
impossible for entities using any complexity-based variant of
Occam's Razor, such as Kolmogorov complexity. If you ask for P(A)
at time T and then P(AB) at
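The normative constraint the conjunction fallacy violates is just monotonicity: the event set for A-and-B is a subset of the event set for A, so P(AB) <= P(A). A tiny enumeration over a toy sample space (three coin flips; the particular events are arbitrary illustrations) checks this.

```python
from itertools import product

# Enumerate a toy sample space (three fair coin flips) and check that
# P(A and B) can never exceed P(A): the conjunction picks out a subset.
space = list(product("HT", repeat=3))

A = [w for w in space if w[0] == "H"]              # first flip heads
B = [w for w in space if w.count("H") >= 2]        # at least two heads

p_a = len(A) / len(space)                           # 4/8 = 0.5
p_ab = len([w for w in A if w in B]) / len(space)   # 3/8
assert p_ab <= p_a
print(p_a, p_ab)
```

The hard part, as the post notes, is holding onto this guarantee when P(A) and P(AB) are estimated at different times under resource bounds, rather than read off a complete enumeration.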
that stupid,
they won't be able to achieve complete probabilistic coherence on
complex domains.
-- Ben
On Feb 7, 2007, at 10:44 AM, gts wrote:
On Tue, 06 Feb 2007 20:02:11 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
Consistency in the sense of de Finetti or Cox is out of reach
On Feb 7, 2007, at 3:48 PM, gts wrote:
Ben,
Of course the world is an enormously complex relation of
interdependencies between many causes and effects. I do not dispute
that fact.
I question however whether this should really be an important
consideration in developing AGI.
One's
Pei, gts and others:
I will now try to rephrase my ideas about indefinite probabilities
and betting, since my prior
exposition was not well-understood.
What I am suggesting is pretty different from Walley's ideas about
betting and imprecise probabilities, and so
far as I can tell is also
On Feb 7, 2007, at 4:35 PM, gts wrote:
On Wed, 07 Feb 2007 16:07:13 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
only under an independence assumption.
True, I did not make the independence assumption explicit.
Note that dutch books cannot be made against an AGI that does not
claim
is not a Dutch Book.
Pei
On 2/7/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Pei, gts and others:
I will now try to rephrase my ideas about indefinite probabilities
and betting, since my prior
exposition was not well-understood.
What I am suggesting is pretty different from Walley's ideas about
betting
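For readers following the Dutch book thread, the classic construction being referenced can be sketched in a few lines (illustrative numbers only): if an agent posts betting prices for A and not-A that sum to more than 1, a bookie who sells it both bets profits no matter how A turns out.

```python
def bookie_profit(price_a, price_not_a, stake=1.0):
    # The agent buys a unit bet on A at price_a and a unit bet on not-A
    # at price_not_a; each bet pays `stake` if it wins. Exactly one of
    # the two wins, so the bookie collects both prices and pays out once.
    paid = (price_a + price_not_a) * stake
    return {a_true: paid - stake for a_true in (True, False)}

# Incoherent prices: P(A) = 0.6 and P(not A) = 0.6 sum to 1.2, so the
# bookie nets ~0.2 per unit stake whichever way A turns out.
profits = bookie_profit(0.6, 0.6)
print(profits)
```

Coherent prices (summing to exactly 1) make both entries zero, which is the de Finetti motivation for the probability axioms that the thread keeps circling back to.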
This is simply a re-post of my prior post, with corrected
terminology, but unchanged substance:
Suppose we have a category C of discrete events, e.g. a set of tosses
of a certain coin
which has heads on one side and tails on the other.
Next, suppose we have a predicate S, which
/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Ok, sorry if I used the term wrong. The actual game is clearly
defined though even if I
attached the wrong label to it. I will resubmit the post with
corrected terminology...
ben
On Feb 7, 2007, at 6:21 PM, Pei Wang wrote:
Ben,
Before going
measurements should and can be
defined in this way.
Though I'll need more time to comment on the details, I don't feel
good about the overall picture.
Pei
On 2/7/07, Ben Goertzel [EMAIL PROTECTED] wrote:
As I understand it, his idea was that if you set your operational
subjective probability
, of course...
Your view of humans as accurate probabilistic reasoners is definitely
not borne out by the Heuristics and Biases literature in cognitive
psychology.
Ben
On Feb 7, 2007, at 8:40 PM, Charles D Hixson wrote:
gts wrote:
On Wed, 07 Feb 2007 10:57:04 -0500, Ben Goertzel
[EMAIL
a difference --- to me, the problem
in an improper interpretation cannot be compensated for by formal/technical
treatments.
If you propose indefinite probability as a pure mathematical notion,
I'll have much less problem with it.
Pei
On 2/7/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi Pei,
probability in general (though binary second-order
statement is OK).
Pei
On 2/5/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Pei (and others),
I thought of a way to define two-component truth values in terms of
betting strategies (vaguely in the spirit of de Finetti).
I was originally trying
Consistency in the sense of de Finetti or Cox is out of reach for a
modest-resources AGI, in principle...
Sorry to be the one to break the news. But, don't blame the
messenger. It's a rough universe out there ;-)
Ben G
On Feb 6, 2007, at 4:10 PM, gts wrote:
I understand the resources
So, sorry, but I am looking at the same data, and as far as I am
concerned I see almost no evidence that probability theory plays a
significant role in cognition at the concept level.
What that means, to go back to the original question, is that the
possibility I raised is still
Ben, this is also why I was wondering why your hypothesis is framed
in terms of both Cox and De Finetti. Unless I misunderstand Cox,
their interpretations are in some ways diametrically opposed. De
Finetti was a radical subjectivist while Cox is (epistemically) an
ardent
The interpretation of probability is a different matter --- we have
been talking about consistency, which is largely independent of which
interpretation you subscribe to.
Correct
In my opinion, in the AGI context, each of the three traditional
interpretations is partly applicable but partly
: when you can construct an internal model of part
of the world, so that everything is consistent within the model, then
you can reason via probability...
ben
On Feb 4, 2007, at 10:49 AM, Pei Wang wrote:
On 2/4/07, Ben Goertzel [EMAIL PROTECTED] wrote:
As you know I feel differently. I
The definition of 'probabilistic consistency' that I was using
comes from
ET Jaynes' book _Probability Theory - The Logic of Science_, page
114.
These are Jaynes' three 'consistency desiderata' for a
probabilistic robot:
1. If a conclusion can be reasoned out in more than one way, then
every possible way must lead to the same result.
Hi,
1) Would anyone currently putting energy into the foundations of
probability discussion be willing to say that this hypothetical
human mechanism could *still* be meaningfully described in terms of
a tractable probabilistic formalism (by, e.g., transforming or
approximating
On Feb 4, 2007, at 2:23 PM, gts wrote:
On Sun, 04 Feb 2007 13:15:27 -0500, Pei Wang [EMAIL PROTECTED]
wrote:
none of the existing AGI projects is designed [according to the
tenets of objective/logical bayesianism]
Hmm. My impression is that to whatever extent AGI projects use
bayesian
It **could** be that the only way a system can give rise to
probabilistically sensible patterns of action-selection, given
limited computational resources, is to do stuff internally that is
based on nonlinear dynamics rather than probability theory.
But, I doubt it...
The human brain may
Pei (and others),
I thought of a way to define two-component truth values in terms of
betting strategies (vaguely in the spirit of de Finetti).
I was originally trying to think of a way to define two-component
truth values a la Cox, but instead a betting strategy approach is
what I came
For a different view on probabilistic and logical consistency, we can
always turn to Dostoevsky, who posited that the essence of being
human is that we can make ourselves believe 2+2=5 if we really want
to ;-)
I.e., he saw our potential for **willful inconsistency**, considered
in the
Eliezer,
I don't think a mind that evaluates probabilities *is*
automatically the best way to make use of limited computing
resources. That is: if you have limited computing resources, and
you want to write a computer program that makes the best use of
those resources to solve a
On Feb 3, 2007, at 9:02 AM, Russell Wallace wrote:
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote:
My approach was to formulate a notion of general intelligence as
achieving a complex goal, and then ask something like: for which
resource levels R and goals G is approximating probability
That suggests you mean A. Well then, it seems to me that terms are
being used in this discussion so that probability theory is
_defined_ as giving the right answers in all cases. So the
original question boils down to is it always best to give
approximately the right answers?; the answer
Hi Russell,
OK, I'll try to specify my ideas in this regard more clearly. Bear
in mind though that there are many ways to formalize an intuition,
and the style of formalization I'm suggesting here may or may not be
the right one. With this sort of thing, you only know if the
.
ben
On Feb 3, 2007, at 12:18 PM, gts wrote:
On Fri, 02 Feb 2007 22:01:34 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
In Novamente, we use entities called indefinite probabilities,
which are described in a paper to appear in the AGIRI Workshop
Proceedings later this year...
Roughly
an observer could infer from the
system's behaviors, rather than a definition that assumes the system
is explicitly doing formal logic internally.
-- Ben
On Feb 3, 2007, at 8:47 PM, Pei Wang wrote:
On 2/3/07, Ben Goertzel [EMAIL PROTECTED] wrote:
My desire in this context is to show
Okay... let's say when an agent exhibits inconsistent implicit
preferences for acting in a particular situation, it is being
suboptimal, and the degree of suboptimality depends on the degree
of inconsistency.
Given:
S = a situation
I = importance (to G) of acting correctly in that
I must agree with Pei on this. Think of a reasonably large AI,
say, eight light hours across. Any belief frame guaranteed to be
globally consistent must be at least eight hours out of date. So
if you only act on globally consistent knowledge, your reaction
time is never less than your
Again, to take consistency as an ultimate goal (which is never fully
achievable) and as a precondition (even an approximate one) are two
very different positions. I hope you are not suggesting the latter ---
at least your posting makes me feel that way.
Hi,
In the Novamente system,
pertain to single-number representations (but does not
state that a single number is a sufficient quantification of a mind's
uncertainty about a statement)
ben
On Feb 2, 2007, at 1:52 PM, gts wrote:
On Thu, 01 Feb 2007 14:00:06 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
Discussing
algorithms for propagating these indefinite
probabilities through logical inferences.
-- Ben
On Feb 2, 2007, at 9:37 PM, gts wrote:
On Fri, 02 Feb 2007 15:57:24 -0500, Ben Goertzel [EMAIL PROTECTED]
wrote:
Interpretation-wise, Cox followed Keynes pretty closely. Keynes
had his own eccentric
This seems to also be dealt with at the end of Cox's book...
Interesting. I'm tempted to read Cox's book so that you and I can
discuss his ideas in more detail here on your list. (I worry that
my enthusiasm for this subject is only annoying people on that
other discussion list.) Is that
Hi,
Pei Wang's uncertain logic is **not** probabilistic, though it uses
frequency calculations
IMO Pei's logic has some strong points, especially that it unifies
fuzzy and probabilistic truth values into one pair of values. I
think in Pei's logic the frequency f is indeed a