If you're that focused on fitness functions, why not define it as the core of
intelligence? Make an AI focused on evolving fitness functions for itself...
without anything else, it might be an empty definition, but if you insert some
other, trivial fitness criterion, such as maze navigation with
The article seems to assume that just because a neural event can be detected
that accounts for a feeling people have, it must have been hard-wired by
evolution. Why can't morality be a learned behavior?
On 5/28/07, Mark Waser [EMAIL PROTECTED] wrote:
http://www.msnbc.msn.com/id/18899688/
I'm an undergrad who's been lurking here for about a year. It seems to me
that many people on this list take Solomonoff Induction to be the ideal
learning technique (for unrestricted computational resources). I'm wondering
what justification there is for the restriction to Turing-machine models of
Thanks for the replies,
On Fri, Feb 29, 2008 at 4:44 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
I am not so sure that humans use uncomputable models in any useful sense,
when doing calculus. Rather, it seems that in practice we use
computable subsets
of an in-principle-uncomputable theory...
On Sat, Mar 1, 2008 at 5:23 PM, daniel radetsky [EMAIL PROTECTED] wrote:
[...]
My thinking is that a more-universal theoretical prior would be a prior
over logically definable models, some of which will be incomputable.
I'm not exactly sure what you're talking about, but I assume that
I like the attractor approach, I really do! But I think the version you give
needs a fundamental clarification.
How about "Don't interfere with the goals of others unless not doing so
basically prevents you from fulfilling your goals" (explicitly not including
low-probability freak events for you
I'd be interested in looking at a paper. However, I'll be honest: your
claim of AGI sounds over-inflated, mainly because it sounds like your
algorithm is text-specific and wouldn't help with things like vision,
robot control, etc. Nonetheless, a good 'chatbot' is still something
of interest (I
On Wed, Apr 23, 2008 at 5:43 PM, Mike Tintner [EMAIL PROTECTED] wrote:
[..]
And these different instantiations *have* to be fairly precise, if we are
to understand a text, or effect an instruction, successfully. The next
sentence in the text may demand that we know the rough angle of reaching
Sorry to intrude, but I think the formula "complexity is the border
between order and chaos" resolves this dispute nicely...
Choice 1: The operators end up being clean and modular in their design,
which means that if we were able to examine them from the outside, we would
be able to understand
I previously posted here claiming that the human mind (and therefore
an ideal AGI) entertains uncomputable models, counter to the
AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
Anyway, I hope I'm not being too annoying if I try to argue the point
once again. This paper also
I'm not sure that I'm responding to your intended meaning, but: all
computers are in reality finite-state machines, including the brain
(granted we don't think the real-number calculations on the cellular
level are fundamental to intelligence). However, the finite state
machines we call PCs are so
Mike A.:
Well, if you're convinced that infinity and the uncomputable are
imaginary things, then you've got a self-consistent view that I can't
directly argue against. But are you really willing to say that
seemingly understandable notions such as the problem of deciding
whether a given Turing
is... inhuman.
On Tue, Jun 17, 2008 at 1:29 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Tue, Jun 17, 2008 at 9:10 PM, Abram Demski [EMAIL PROTECTED] wrote:
Mike A.:
Well, if you're convinced that infinity and the uncomputable are
imaginary things, then you've got a self-consistent view that I
.
A. D.
On Tue, Jun 17, 2008 at 2:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Tue, Jun 17, 2008 at 10:14 PM, Abram Demski [EMAIL PROTECTED] wrote:
No nonsense, just finite sense. What is this with verification that a
machine doesn't halt? One can't do it, so what is the problem
On Wed, Jun 18, 2008 at 9:54 AM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
[...]
In any case, this whole conversation bothers me. It seems like we're
focussing on the wrong problems; like using the Theory of Relativity to
decide on an appropriate speed limit for cars in school zones. If it
Yes, it's ordinary human language -- whether written or spoken; English or
Spanish or Chinese or whatever . . . . .
I was tempted to include that in my statement, but decided against for
brevity... the thing is, we have the language, but we don't know what
to do with it. Solving the problem of
Well, what exactly are the constraints you wish to place on capture?
Clearly humans can express the ideas, so in some sense they are trivially
(say, in text and graphics) captured. :-)
- samantha
Personally, the constraint that I want to satisfy is that the rules of
manipulation should reflect
this in "is induction unformalizable?" [2] on the everything mailing list.
Abram Demski also made similar points in recent posts on this mailing list.
[1] http://www.mail-archive.com/agi@v2.listbox.com/msg00864.html
[2]
http://groups.google.com/group/everything-list/browse_frm/thread
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now that the fundamental split
between logical/messy methods should occur at the line between perfect
and approximate methods. This is one type of messiness, but only one. I
think you are
of convergence properties... but my intuition says that from
clear meaning, everything else follows.
On Sun, Jun 22, 2008 at 9:45 AM, Jim Bromer [EMAIL PROTECTED] wrote:
Abram Demski said:
To be honest, I am not completely satisfied with my conclusion on the
post you refer to. I'm not so sure now
- Original Message
From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, June 22, 2008 4:38:02 PM
Subject: Re: [agi] Approximations of Knowledge
Well, since you found my blog, you probably are grouping me somewhat
with the probability buffs. I have stated that I
Thanks for the comments. My replies:
It does happen to be the case that I
believe that logic-based methods are mistaken, but I could be wrong about
that, and it could turn out that the best way to build an AGI is with a
completely logic-based AGI, along with just one small mechanism that
And Abram said,
A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to
to be analogous?
On Tue, Jun 24, 2008 at 9:02 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Abram Demski wrote:
I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
The problem is starting to become AI-complete, so
On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
I find the absence of such models troubling. One problem is that there are no
provably hard problems. Problems like tic-tac-toe and chess are known to be
easy, in the sense that they can be fully analyzed with sufficient
Ah, so you do not accept AIXI either.
Put this way, your complex system dilemma applies only to pure AGI,
and not to any narrow AI attempts, no matter how ambitious. But I
suppose other, totally different reasons (such as P != NP, if so) can
block those.
Is this the best way to understand your
1, 2008 at 2:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
2008/6/16 Abram Demski [EMAIL PROTECTED]:
I previously posted here claiming that the human mind (and therefore
an ideal AGI) entertains uncomputable models, counter to the
AIXI/Solomonoff model. There was little enthusiasm about
master's thesis was on the subject so if you are interested in
getting an electronic copy just let me know. It is in French though.
On Wed, Jul 2, 2008 at 11:15 AM, Abram Demski [EMAIL PROTECTED] wrote:
So yes, I think there are perfectly fine, rather simple
definitions for computing machines
PROTECTED] wrote:
On Wed, Jul 2, 2008 at 1:30 PM, Abram Demski [EMAIL PROTECTED] wrote:
Hector Zenil said:
and that is one of the many issues of hypercomputation: each time one
comes up with a standard model of hypercomputation, there is always
another, non-equivalent model of hypercomputation
How do you assign credit to programs that are good at generating good
children? Particularly, could a program specialize in this, so that it
doesn't do anything useful directly but always through making highly
useful children?
On Wed, Jul 2, 2008 at 1:09 PM, William Pearson [EMAIL PROTECTED]
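A toy sketch in Python (my own illustration, not anything from Pearson's actual design) of what such credit assignment could look like: a fixed fraction of every reward a program earns is kicked back to the program that generated it, so a pure "breeder" can accumulate credit without ever doing anything directly useful.

from collections import defaultdict

credit = defaultdict(float)   # program name -> accumulated credit
parent_of = {}                # child program -> the program that generated it
KICKBACK = 0.5                # hypothetical fraction of reward passed to the generator

def reward(program, amount):
    """Pay a program for useful work; its generator (if any) gets a share."""
    credit[program] += amount * (1 - KICKBACK)
    generator = parent_of.get(program)
    if generator is not None:
        credit[generator] += amount * KICKBACK

parent_of["child1"] = "breeder"
parent_of["child2"] = "breeder"
reward("child1", 10.0)
reward("child2", 6.0)
print(dict(credit))   # the breeder earns credit purely through its children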
In general I agree with Richard Loosemore's reply.
Also, I think that it is not surprising that the approaches referred
to (gen/comp hierarchies, Hinton's hierarchies, hierarchical-temporal
memory, and many similar approaches) become too large if we try to use
them for more than the first few
At one point in the recent past, I had relegated the concept of
clustering to the narrow AI domain. But at around the same time, I
was attempting to wrap my head around the problem of hidden variables.
Hidden variables allow an AI to reason about entities beyond its
sensory data, but they
, Abram Demski [EMAIL PROTECTED] wrote:
...
So the
question is: is clustering in general powerful enough for AGI? Is it
fundamental to how minds can and should work?
You seem to be referring to k-means clustering, which assumes a special form
of mixture model, which is a class of generative models
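A quick sanity check of that claim (my own sketch on assumed toy data): k-means behaves like EM on a mixture of spherical, equal-variance Gaussians, so on well-separated data the two give essentially the same labels.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-4, 1, (150, 2)), rng.normal(4, 1, (150, 2))])

km = KMeans(n_clusters=2, n_init=10).fit_predict(X)
gm = GaussianMixture(n_components=2, covariance_type="spherical").fit_predict(X)

# Agreement up to an arbitrary relabeling of the two clusters
agreement = np.mean(km == gm)
print(max(agreement, 1 - agreement))   # close to 1.0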
number of clusters it may give up and focus on
the important inputs.
On Mon, Jul 7, 2008 at 3:39 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Abram,
On 7/6/08, Abram Demski [EMAIL PROTECTED] wrote:
The SPI paper does make that constraint, but it also allows for
multiple clusterings; so
It is true that Mark Waser did not provide much justification, but I
think he is right. The if-then rules involved in forward/backward
chaining do not need to be causal, or temporal. A mutual implication
is still treated differently by forward chaining and backward
chaining, so it does not cause
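A minimal sketch in Python (mine, just to make the point concrete): the same non-causal, non-temporal if-then rules driven forward (facts to consequences) and backward (goal to supporting premises). Note the mutual implication A<->B is handled by both, just differently.

rules = [({"A"}, "B"), ({"B"}, "A"), ({"B", "C"}, "D")]  # includes the mutual implication A<->B

def forward_chain(facts):
    """Derive everything reachable from the known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, seen=None):
    """Decide whether a goal is supported, working from the goal back to premises."""
    seen = set() if seen is None else seen
    if goal in facts:
        return True
    if goal in seen:            # don't loop forever on A<->B
        return False
    seen.add(goal)
    return any(all(backward_chain(p, facts, seen) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain({"A", "C"}))        # {'A', 'B', 'C', 'D'}
print(backward_chain("D", {"A", "C"}))  # True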
Ed Porter wrote:
Am I correct that you are implying the distinction is independent
of direction, but instead is something like this: forward chaining
infers from information you have to implications you don't yet have,
and backward chaining infers from patterns you are interested in to
ones
be in any intro AI textbook.
--Abram
On Tue, Jul 15, 2008 at 11:08 AM, Ed Porter [EMAIL PROTECTED] wrote:
Lukasz,
Your post below was great.
Your clippings from Google confirm much of the understanding that Abram
Demski was helping me reach yesterday.
In one of his posts Abram
For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.
On Wed, Jul 16, 2008 at 5:02 AM,
The way I see it, on the expert systems front, Bayesian networks
replaced the algorithms being currently discussed. These are more
flexible, since they are probabilistic, and also have associated
learning algorithms. For nonprobabilistic systems, the resolution
algorithm is more generally
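For the nonprobabilistic case, here is a bare-bones propositional resolution sketch (my own toy, not anyone's production code): clauses are sets of literals, "~p" negates "p", and deriving the empty clause signals unsatisfiability.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

def unsatisfiable(clauses):
    """Saturate the clause set; True iff the empty clause is derivable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True       # empty clause: contradiction found
                    new.add(frozenset(r))
        if new <= clauses:
            return False                  # saturated with no contradiction
        clauses |= new

# (p -> q), p, and ~q together are unsatisfiable:
print(unsatisfiable([{"~p", "q"}, {"p"}, {"~q"}]))   # True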
, so you cannot be using them as a
generative model, but they also lack accept-states, so you can't be
using them as recognition models, either. How are you using them?
-Abram
On Thu, Jul 17, 2008 at 1:05 PM, John G. Rose [EMAIL PROTECTED] wrote:
From: Abram Demski [mailto:[EMAIL PROTECTED]
John
Can you cite any papers related to the approach you're attempting? I
do not know anything about morphism detection, morphism forests, etc.
Thanks,
Abram
On Sun, Jul 20, 2008 at 2:03 AM, John G. Rose [EMAIL PROTECTED] wrote:
From: Abram Demski [mailto:[EMAIL PROTECTED]
No, not especially
On Tue, Jul 22, 2008 at 4:29 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Abram,
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
From the paper you posted, and from Wikipedia articles, the current
meaning of PCA is very different from your generalized version. I
doubt the current
a lot of smoke...
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
On Tue, Jul 22, 2008 at 4:29 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Abram,
On 7/22/08, Abram Demski [EMAIL PROTECTED] wrote:
From the paper you posted, and from Wikipedia articles, the current
meaning of PCA
This is getting long in embedded-reply format, but oh well
On Wed, Jul 23, 2008 at 12:24 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Abram,
On 7/23/08, Abram Demski [EMAIL PROTECTED] wrote:
Replying in reverse order
Story: I once viewed being able to invert the Airy Disk
The Wikipedia article on PCA cites papers that show K-means clustering
and PCA to be in a certain sense equivalent-- from what I read so far,
the idea is that clustering is simply extracting discrete versions of
the continuous variables that PCA extracts.
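A rough numerical illustration of that equivalence (my own sketch on assumed toy data, not the construction in the cited papers): discretize the leading principal component by its sign and compare with k-means labels for k=2.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (100, 5)), rng.normal(3, 1, (100, 5))])

pc1 = PCA(n_components=1).fit_transform(X).ravel()        # continuous variable
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)   # discrete version

# Agreement between "sign of PC1" and the k-means labels, up to relabeling
agree = np.mean((pc1 > 0) == labels)
print(max(agree, 1 - agree))   # close to 1.0 on data like this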
It seems like you have some valid points, but I cannot help but point
out a problem with your question. It seems like any system for pattern
recognition and/or prediction will have a sensible "I don't know"
state. An algorithm in a published paper might suppress this in an
attempt to give as
Harry,
In what way do you think your approach is not grounded?
--Abram
On Mon, Aug 4, 2008 at 2:55 PM, Harry Chesley [EMAIL PROTECTED] wrote:
As I've come out of the closet over the list tone issues, I guess I should
post something AI-related as well -- at least that will make me net neutral
, and asked it to guess what
the next item in the series would be, what sort of process would it
employ?
Thanks,
--Abram Demski
On 8/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
On Mon, Aug 4, 2008 at 6:10 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On 8/5/08, Ben Goertzel [EMAIL PROTECTED
continuous variables are involved.
-Abram
On Tue, Aug 5, 2008 at 2:35 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On 8/6/08, Abram Demski [EMAIL PROTECTED] wrote:
There is one common feature to all chairs: They are for the purpose of
sitting on. I think it is important
to apply to your response, but I won't quote that one.
Sincerely,
Abram Demski
On Tue, Aug 5, 2008 at 1:50 PM, Terren Suydam [EMAIL PROTECTED] wrote:
The Chinese Room argument counters only the assertion that the computational
mechanism that manipulates symbols is capable of understanding
of emergence.
Terren
--- On Tue, 8/5/08, Abram Demski [EMAIL PROTECTED] wrote:
From: Abram Demski [EMAIL PROTECTED]
Subject: Re: [agi] Groundless reasoning -- Chinese Room
To: agi@v2.listbox.com
Date: Tuesday, August 5, 2008, 6:07 PM
Terren,
You and I could agree. But the Chinese Room
Terren,
Are you ignoring my reply on purpose, or accidentally? If it is on
purpose, that is fine, but if it is by accident then the original
message is replicated below.
-Abram
On Wed, Aug 6, 2008 at 10:24 AM, Abram Demski [EMAIL PROTECTED] wrote:
On Wed, Aug 6, 2008 at 12:04 AM, Terren Suydam
AI
failing, by your own argument. Maybe it could be fixed or extended to
argue against symbolic AI. However, it does not do so by itself, and
in my opinion it would be clearer to come up with a different argument
rather than fixing that one.
-Abram Demski
On Wed, Aug 6, 2008 at 1:44 PM, Terren
This looks like it could be an interesting thread.
However, I disagree with your distinction between ad hoc and post hoc.
The programmer may see things from the high-level maze view, but the
program itself typically deals with the mess. So, I don't think
there is a real distinction to be made
as
some things not worth capturing) about how we think.
-Abram
On Thu, Aug 14, 2008 at 2:04 PM, Jim Bromer [EMAIL PROTECTED] wrote:
On Thu, Aug 14, 2008 at 12:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
A more worrisome problem is that B may be contradictory in and of
itself. If (1) I can
On Thu, Aug 14, 2008 at 4:26 PM, Jim Bromer [EMAIL PROTECTED] wrote:
On Thu, Aug 14, 2008 at 3:06 PM, Abram Demski [EMAIL PROTECTED] wrote:
Jim,
You are right to call me on that. I need to provide an argument that,
if no logic satisfying B exists, human-level AGI is impossible.
I don't know
That made more sense to me. Responses follow.
On Fri, Aug 15, 2008 at 10:57 AM, Jim Bromer [EMAIL PROTECTED] wrote:
On Thu, Aug 14, 2008 at 5:05 PM, Abram Demski [EMAIL PROTECTED] wrote:
But, I am looking for a system that is me.
Your "me", like everyone else's, has its limitations. So
I don't think the problems of self-referential paradox are
significantly more difficult than the problems of general reference.
Not only are there implicit boundaries, some of which have to be
changed in an instant as the conversation develops, there are also
multiple levels of
Mike,
There are at least 2 ways this can happen, I think. The first way is
that a mechanism is theoretically proven to be complete, for some
less-than-sufficient formalism. The best example of this is one I
already mentioned: the neural nets of the nineties (specifically,
feedforward neural nets
Mike,
But this is horrible! If what you are saying is true, then research
will barely progress.
On Mon, Aug 18, 2008 at 11:46 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Abram,
The key distinction here is probably that some approach to AGI may be widely
accepted as having great *promise*. That
itself could be seen as the top, the correct logic. I
am not sure what this view implies.
--Abram
On Sun, Aug 17, 2008 at 10:52 PM, Charles Hixson
[EMAIL PROTECTED] wrote:
Abram Demski wrote:
On Fri, Aug 15, 2008 at 5:19 PM, Jim Bromer [EMAIL PROTECTED] wrote:
On Fri, Aug 15, 2008 at 3:40 PM
Mike,
I agree with Brad somewhat, because I do not think copying human (or
animal) intellect is the goal. It is a means to the end of general
intelligence.
However, that certainly doesn't stop me from participating in a
thought experiment.
I think the big thing with artificial play is figuring
Matt,
What is your opinion on Goedel machines?
http://www.idsia.ch/~juergen/goedelmachine.html
--Abram
On Sun, Aug 24, 2008 at 5:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Eric Burton [EMAIL PROTECTED] wrote:
These have profound impacts on AGI design. First, AIXI is (provably) not
Mike,
The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.
--Abram Demski
inadequacy. And so, it seems, such a logic could exist!
Right?
Maybe?
Hopefully?
--Abram Demski
that is, say, 99.999% probable to only improve
itself in the next 100 years, or a faster self-improver that is 50%
guaranteed.
Does this satisfy your criteria?
On Wed, Aug 27, 2008 at 9:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Abram Demski [EMAIL PROTECTED] wrote:
Matt,
What is your opinion
Mark,
I agree that we are mired 5 steps before that; after all, AGI is not
solved yet, and it is awfully hard to design prefab concepts in a
knowledge representation we know nothing about!
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful?
Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, August 27, 2008 11:52 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity of Embodiment))
Mark,
I agree that we are mired 5 steps before
of
undesirable behavior.
By the way, where does the term "wireheading" come from? I assume
from context that it simply means self-stimulation.
-Abram Demski
On Wed, Aug 27, 2008 at 2:58 PM, Mark Waser [EMAIL PROTECTED] wrote:
Hi,
A number of problems unfortunately . . . .
-Learning is pleasurable
them. Knowing this, we do not want to enter
that state.
--Abram Demski
On Thu, Aug 28, 2008 at 9:18 AM, Mark Waser [EMAIL PROTECTED] wrote:
No, the state of ultimate bliss that you, I, and all other rational,
goal-seeking agents seek
Your second statement copied below notwithstanding, I *don't
for example we can
probabilistically declare that a program never halts if we run it for
a while and it doesn't. But there are certain facts that are not even
probabilistically learnable, so until I can show that none of these
are absolutely essential to RSI, I concede.
--Abram Demski
On Wed, Aug 27
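A toy version of that idea (purely my own illustration): run a program for a fixed budget and, if it has not halted, guess that it never will. The verdict is fallible, which is the point.

def probably_never_halts(program, budget=10_000):
    """Run `program` (a generator yielding once per step) for `budget` steps.
    Return True if no halt was observed -- a guess, not a proof."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return False      # it actually halted
    return True               # no halt seen within the budget; guess "never halts"

def halts_late():             # halts, but only after 20,000 steps
    for _ in range(20_000):
        yield

def loops_forever():
    while True:
        yield

print(probably_never_halts(halts_late))     # True -- a reasonable but wrong guess
print(probably_never_halts(loops_forever))  # True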
environment (unless the AI makes a rational decision to stop using
resources on RSI since it has found a solution that is probably
optimal).
On Thu, Aug 28, 2008 at 11:25 AM, Abram Demski [EMAIL PROTECTED] wrote:
Matt,
Ok, you have me, I admit defeat.
I could only continue my argument if I could pin
and I *think* I'm making rather
good headway.
- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, August 28, 2008 9:57 AM
Subject: **SPAM** Re: AGI goals (was Re: Information theoretic approaches to
AGI (was Re: [agi] The Necessity
* to change the world.-- but
that's just the absurdity and self-defeating arguments that I expect from
many of the list denizens that can't be defended against except by
allocating far more time than it's worth.
- Original Message - From: Abram Demski [EMAIL PROTECTED]
To: agi@v2
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
--Abram
On Thu, Aug 28, 2008 at 9:04 PM, j.k.
Matt, I have several objections.
First, as I understand it, your statement about the universe having a
finite description length only applies to the *observable* universe,
not the universe as a whole. The Hubble radius expands at the speed of
light as more light reaches us, meaning that the
OK, then the observable universe has a finite description length. We don't
need to describe anything else to model it, so by universe I mean only the
observable part.
But, what good is it to only have a finite description of the observable
part, since new portions of the universe enter the
On Thu, Sep 4, 2008 at 10:53 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
To clarify what I mean by observable universe, I am including any part that
could be observed in the future, and therefore must be modeled to make
accurate predictions. For example, if our universe is computed by one of an
On Thu, Sep 4, 2008 at 12:47 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Terren,
If you think it's all been said, please point me to the philosophy of AI
that includes it.
I believe what you are suggesting is best understood as an interaction machine.
General references:
.
--Abram Demski
Mike,
The reason I decided that what you are arguing for is essentially an
interactive model is this quote:
But that is obviously only the half of it. Computers are obviously
much more than that - and Turing machines. You just have to look at
them. It's staring you in the face. There's something
.
-- Matt Mahoney, [EMAIL PROTECTED]
--- On Thu, 9/4/08, Abram Demski [EMAIL PROTECTED] wrote:
From: Abram Demski [EMAIL PROTECTED]
Subject: [agi] open models, closed models, priors
To: agi@v2.listbox.com
Date: Thursday, September 4, 2008, 2:19 PM
A closed model is one that is interpreted
Mike,
standard Bayesianism somewhat accounts for this-- exact-number
probabilities are defined by the math, but in no way are they seen as
the real probability values. A subjective prior is chosen, which
defines all further probabilities, but that prior is not believed to
be correct. Subsequent
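A tiny worked example (my own numbers): a subjective Beta prior over a coin's bias fixes every subsequent probability, yet no one claims the prior itself is the "real" value; it just gets updated.

def posterior_mean(prior_heads, prior_tails, heads, tails):
    """Mean of the Beta posterior: (a + heads) / (a + b + heads + tails)."""
    return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

heads, tails = 7, 3   # observed data

print(posterior_mean(1, 1, heads, tails))   # uniform prior    -> 0.666...
print(posterior_mean(5, 5, heads, tails))   # skeptical prior  -> 0.6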
Pei
On Thu, Sep 4, 2008 at 2:19 PM, Abram Demski [EMAIL PROTECTED] wrote:
A closed model is one that is interpreted as representing all truths
about that which is modeled. An open model is instead interpreted as
making a specific set of assertions, and leaving the rest undecided.
Formally, we
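One way to make the distinction concrete (my own gloss, in code): a closed reading treats anything not asserted as false, while an open reading leaves it undecided.

facts = {"bird(tweety)", "penguin(opus)"}   # the specific assertions the model makes

def closed_query(q):
    return q in facts                       # closed model: absent means false

def open_query(q):
    return True if q in facts else None     # open model: absent means undecided

print(closed_query("bird(opus)"))   # False -- the closed model claims all truths
print(open_query("bird(opus)"))     # None  -- the open model leaves the rest open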
Mike,
In that case I do not see how your view differs from simplistic
dualism, as Terren cautioned. If your goal is to make a creativity
machine, in what sense would the machine be non-algorithmic? Physical
random processes?
--Abram
On Thu, Sep 4, 2008 at 6:59 PM, Mike Tintner [EMAIL PROTECTED]
Mike,
Will's objection is not quite so easily dismissed. You need to argue
that there is an alternative, not just that Will's is more of the
same.
--Abram
On Fri, Sep 5, 2008 at 9:34 AM, Mike Tintner [EMAIL PROTECTED] wrote:
MT: By contrast, all deterministic/programmed machines and computers
Mike,
The philosophical paradigm I'm assuming is that the only two
alternatives are deterministic and random. Either the next state is
completely determined by the last, or it is only probabilistically
determined.
Deterministic does not mean computable, since physical processes can
be totally
Mike,
On Fri, Sep 5, 2008 at 1:15 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Abram,
I don't understand why/how I need to argue an alternative - please explain.
I am not sure what to say, but here is my view of the situation. You
are claiming that there is a broad range of things that
Hi,
I am curious about the result you mention. You say that the genetic
algorithm stopped searching very quickly. Why? It sounds like they want
the search to go longer, but can't they just tell it to go longer if
they want it to? And to reduce convergence, can't they just increase
the level of
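For what it's worth, here is a bare-bones GA sketch (mine, not the experiment being discussed) that exposes the two knobs in question: the generation budget and the mutation rate, the usual lever against premature convergence.

import random

def evolve(fitness, length=20, pop_size=50, generations=200, mutation_rate=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):                 # "make it search longer" = raise this
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(length)
            child = [1 - g if random.random() < mutation_rate else g   # mutation knob
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(fitness=sum, mutation_rate=0.05)   # higher rate slows convergence
print(sum(best))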
Hi everyone,
Most people on this list should know about at least 3 uncertain logics
claiming to be AGI-grade (or close):
--Pei Wang's NARS
--Ben Goertzel's PLN
--YKY's recent hybrid logic proposal
It seems worthwhile to stop and take a look at what criteria such
logics should be judged by. So,
Good point, this applies to me as well (I'll let YKY answer as it
applies to him). I should have said conditional independence rather
than just independence.
--Abram
On Wed, Sep 17, 2008 at 4:21 PM, Kingma, D.P. [EMAIL PROTECTED] wrote:
On Wed, Sep 17, 2008 at 9:00 PM, YKY (Yan King Yin)
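To spell out the difference with a toy joint distribution (my own numbers): two effects of a common cause are dependent marginally but independent once the cause is given.

from itertools import product

def joint(a, b, c):
    """P(A=a, B=b, C=c): A and B each depend on C, and are independent given C."""
    p_c = 0.5
    pa1 = 0.9 if c else 0.1          # P(A=1 | C=c)
    pb1 = 0.9 if c else 0.1          # P(B=1 | C=c)
    return p_c * (pa1 if a else 1 - pa1) * (pb1 if b else 1 - pb1)

p = {(a, b, c): joint(a, b, c) for a, b, c in product((0, 1), repeat=3)}

p_ab = sum(v for (a, b, c), v in p.items() if a and b)
p_a = sum(v for (a, b, c), v in p.items() if a)
p_b = sum(v for (a, b, c), v in p.items() if b)
print(p_ab, p_a * p_b)           # 0.41 vs 0.25 -> not independent marginally

p_c1 = sum(v for (a, b, c), v in p.items() if c)
p_ab_c1 = sum(v for (a, b, c), v in p.items() if a and b and c) / p_c1
p_a_c1 = sum(v for (a, b, c), v in p.items() if a and c) / p_c1
p_b_c1 = sum(v for (a, b, c), v in p.items() if b and c) / p_c1
print(p_ab_c1, p_a_c1 * p_b_c1)  # 0.81 vs 0.81 -> independent given C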
assumptions, are
runners-up.
--Abram
On Wed, Sep 17, 2008 at 3:00 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski [EMAIL PROTECTED] wrote:
Speaking of my BPZ-logic...
2. Good at quick-and-dirty reasoning when needed
Right now I'm focusing on quick
PROTECTED] wrote:
On Wed, Sep 17, 2008 at 1:46 PM, Abram Demski [EMAIL PROTECTED] wrote:
Hi everyone,
Most people on this list should know about at least 3 uncertain logics
claiming to be AGI-grade (or close):
--Pei Wang's NARS
Yes, I heard of this guy a few times, who happens to use the same
be of interest.
--Abram Demski
philosophizing ;-) ... it's just elementary algebra. The subtle part is
really
the semantics, i.e. the way the math is used to model situations.
-- Ben G
On Sat, Sep 20, 2008 at 2:22 PM, Abram Demski [EMAIL PROTECTED] wrote:
It has been mentioned several times on this list that NARS has
Well, one question is whether you want to be able to do inference like
A --> B <tv1>
|-
B --> A <tv2>
Doing that without term probabilities is pretty hard...
Not the way I set it up. A --> B is not the conditional probability
P(B|A), but it *is* a conditional probability, so the normal Bayesian
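For comparison, here is the standard Bayesian inversion as a worked toy (my own numbers, not Ben's or my formalism): getting from P(B|A) to P(A|B) needs the term probabilities P(A) and P(B), which is exactly why inversion without them is hard.

p_b_given_a = 0.9                 # P(B | A)
p_a = 0.2                         # term probability P(A)
p_b_given_not_a = 0.1             # assumed, to get P(B) by total probability

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)   # P(B) = 0.26
p_a_given_b = p_b_given_a * p_a / p_b                   # Bayes' rule
print(round(p_a_given_b, 3))                            # 0.692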
Thanks for the critique. Replies follow...
On Sat, Sep 20, 2008 at 8:20 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Sat, Sep 20, 2008 at 2:22 PM, Abram Demski [EMAIL PROTECTED] wrote:
[...]
The key, therefore, is whether NARS can be FULLY treated as an
application of probability theory
the question is, can this be
justified probabilistically? I think I can give a very tentative
yes.
--Abram
On Sat, Sep 20, 2008 at 9:38 PM, Pei Wang [EMAIL PROTECTED] wrote:
On Sat, Sep 20, 2008 at 9:09 PM, Abram Demski [EMAIL PROTECTED] wrote:
(1) In probability theory, an event E has a constant