go.
Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not as if neuroscience is dominated by
discussions of (mis)interpretation of results; they are collecting
data, and with it they are steadily getting somewhere.
--
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/
little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is it relevant to the study of AGI?
--
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore r...@lightlink.com wrote:
Vladimir Nesov wrote:
On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com
wrote:
The whole point about the paper referenced above is that they are
collecting
(in a large number of cases) data
On Tue, Jan 13, 2009 at 7:50 AM, YKY (Yan King Yin)
generic.intellige...@gmail.com wrote:
On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov robot...@gmail.com wrote:
I'm more interested in understanding the relationship between
inference system and environment (rules of the game) that it allows
the capabilities of not one human mind, but a
system of 10^10 minds. That is why my AGI proposal is so hideously expensive.
http://www.mattmahoney.net/agi2.html
Let's fire Matt and hire 10 chimps instead.
--
too wrapped
up in themselves, and their development as ways to AI turns into a
wild goose chase.
--
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
of simulation in the previous message
(that includes a special format for request for simulation), no
contradictions, and you've got an example.
--
, that'd be your abstraction.
--
On Fri, Jan 9, 2009 at 8:48 PM, Harry Chesley ches...@acm.org wrote:
On 1/9/2009 9:28 AM, Vladimir Nesov wrote:
You need to name those parameters in a sentence only because it's
linear, in a graph they can correspond to unnamed nodes. Abstractions
can have structure
don't pay enough
attention to formal definitions: what "this has a description" means,
and which reference TMs specific Kolmogorov complexities are measured
in.
--
that length(P) > length(Q), and longer strings
can easily have smaller programs that output them. If P is
10^(10^10) symbols "X", and Q is some random number of "X" smaller
than 10^(10^10), it's probably K(P) < K(Q), even though Q is a
substring of P.
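A hedged illustration of this point, not from the thread: true Kolmogorov complexity is uncomputable, but compressed size gives a crude upper-bound proxy. Scaled down to runnable sizes (and with Q as random bytes rather than a random-length run of "X"), a long repetitive string still has a far shorter description than a much shorter random one:

```python
import os
import zlib

# Crude proxy: zlib-compressed size upper-bounds description length.
# Sizes scaled down from the thread's 10^(10^10) to something runnable.
P = b"X" * 10**6       # long, but perfectly regular
Q = os.urandom(10**4)  # much shorter, but incompressible random bytes

kP = len(zlib.compress(P, 9))
kQ = len(zlib.compress(Q, 9))

print(len(P) > len(Q))  # True: P is the longer string
print(kP < kQ)          # True: yet P has the far shorter description
```

The asymmetry is the same one the message describes: length ordering and description-length ordering need not agree.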
--
average science reporter.
;-)
Here is a critique of the article:
http://neurocritic.blogspot.com/2008/12/deal-no-deal-or-dots.html
--
so, it'll make your own thinking clearer if nothing else.
--
creativity is one of the aspects of the
quest of AGI research, and understood and optimized algorithms of
creativity should allow building ideas that are strong from the
beginning, with verification as part of the process. Although it all
sounds kind of warped in this language.
--
specified parameters and narrow domain.
--
(slippages, temperature,
salience, structural analogy), even though the algorithm at the low
level is different.
--
.
Was this text even supposed to be coherent?
--
.
See ( http://www.scholarpedia.org/article/Algorithmic_probability )
for introduction.
--
of the final product of expression are relatively
loose. These are rules of the game that enable the complexity of
skill to emerge, not square bounds on imagination. Most of the work
comes from the creative process, not from formality.
--
don't quite see what you are criticizing, apart from
specific examples of apparent confusion.
--
not completely explicit.
So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).
--
in a different language you don't like or consider
meaningless, but it's a question of definitions and style, not
essence, as long as the audience of the paper doesn't get confused.
--
Here's a link to the paper:
http://wpcarey.asu.edu/pubs/index.cfm?fct=details&article_cobid=2216410&author_cobid=1039524&journal_cobid=2216411
--
and nontechnical
assertions. As a result, in his own example (at the very end of
section 2), a doctor is considered in control of treating a patient
only if he can prescribe *arbitrary* treatment that doesn't depend on
the patient (or his illness).
--
of the controlled system S in
any way, including the destruction of S.
--
give some references to be specific in what you mean?
Examples of what you consider outdated cognitive theory and better
cognitive theory.
--
just to a lesser degree than
this program? A concept, like any other. Also, some shades of gray are
so thin you'd run out of matter in the Universe to track all the
things that light.
--
Referencing your own work is obviously not what I was asking for.
Still, something more substantial than "a neuron is not a concept", as
an example of cognitive theory?
On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Vladimir Nesov wrote:
Could you give some references
overall
picture exists.
--
perceptual input.
You didn't argue about a general case of AGI, so how does it follow
that any AGI is bound to be conscious?
--
connects to what, what
can be inferred from what, what indicates what.
--
are written up here:
http://causalityrelay.wordpress.com/2008/08/01/causal-rules/
http://causalityrelay.wordpress.com/2008/08/04/causation-and-actions/
--
On Sat, Oct 25, 2008 at 3:17 AM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
This general sentiment doesn't help if I don't know what to do specifically.
Well, given a C/C++ program that does have buffer overrun or stray
interesting again, in an entirely
new light). Maybe I'll understand this area better in months to come.
--
On Sat, Oct 25, 2008 at 1:17 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Note that people have been working on this specific technical problem
for 30 years (see the scary amount of work by Cousot's lab,
http://www.di.ens.fr
you've gotten there?
If you don't believe in ad-hoc then you must have an algorithmic solution .
. . .
I pointed out only that it doesn't follow from AIXI that ad-hoc is justified.
--
solution to
x*3=7: if you can only use integers, the perfect solution is
impossible, but that doesn't mean we are justified in using x=3
because it looks good enough, since x=2 is the best solution given the
limitations.
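The arithmetic behind the analogy can be checked mechanically; a throwaway sketch, not from the thread:

```python
# For 3*x = 7 restricted to integers: no candidate is exact, but
# x = 2 uniquely minimizes the error |3x - 7| (error 1, vs. 2 for
# x = 3), so "best given the limitations" is well-defined and it
# is not the solution that merely "looks good enough".
best = min(range(-100, 101), key=lambda x: abs(3 * x - 7))
print(best)  # 2
```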
--
lisp with all its bells and whistles.
--
On Fri, Oct 24, 2008 at 1:36 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Russell, in what capacity do you use that language?
In all capacities, for both hand written and machine generated content.
Why mix AI-written
On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Needing many different
features just doesn't look like a natural thing for AI-generated
programs.
No, it doesn't, does it? And then you run
On Fri, Oct 24, 2008 at 5:42 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Well, my point was that maybe the mistake is use of additional
language constructions and not their absence? You yourself should be
able
On Fri, Oct 24, 2008 at 5:54 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I'd write it in a separate language, developed for human programmers,
but keep the language with which AI interacts minimalistic, to
understand how
On Fri, Oct 24, 2008 at 6:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I'd write this specification in language it understands, including a
library that builds more convenient primitives from that foundation
not really the point; the point is
the simplicity of this process. Where simplicity matters is the question
that needs to be answered before that.
--
On Fri, Oct 24, 2008 at 7:24 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
If that allows AI to understand the code, without directly helping it.
In this case teaching it to understand these other languages might
On Fri, Oct 24, 2008 at 8:29 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
It's a specific problem: jumping right to code generation from a
specification doesn't work, because you'd need too much specification.
At the same
On Fri, Oct 24, 2008 at 8:47 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
You are describing it as a step one, with writing huge specifications
by hand in formally interpretable language.
I skipped a lot of details
On Fri, Oct 24, 2008 at 9:28 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
If it's not supposed to be a generic language war, that becomes relevant.
Fair point. On the other hand, I'm not yet ready to write a detailed
On Fri, Oct 24, 2008 at 10:30 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I write software for analysis of C/C++ programs to find bugs in them
(dataflow analysis, etc.). Where does AI come into this? I'd really
like
. ;-)
--
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
If you consider programming an AI social activity, you very
unnaturally generalized this term, confusing other people. Chess
programs do learn
activities ;-).
And chess might be a good drosophila for AI, if it's treated as such (
http://www-formal.stanford.edu/jmc/chess.html ).
This was uncalled for.
--
the same thing)
--
without conflicts, since
T(N,S,O) is the maximum number of assemblies that each one in the pool
is able to subtract from the total pool of assemblies.
--
calculate A as follows?
A = SUM FROM X = 0 TO O OF C(S,X)*C(N-S,S-X)
Because some of these sets intersect with each other, you can't
include them all.
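The sum can be evaluated directly; a small sketch (the function name `overlap_count` is mine, not from the thread). It counts the S-node subsets compatible with one fixed assembly, which, as the message says, still intersect each other, so A is not itself the number of assemblies you can pick simultaneously:

```python
from math import comb

def overlap_count(N, S, O):
    """Count S-element subsets of an N-element set that share at most
    O elements with one fixed S-element subset:
    sum over X = 0..O of C(S, X) * C(N - S, S - X)."""
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(O + 1))

# Sanity check via Vandermonde's identity: with O = S the overlap
# restriction vanishes, so the sum equals C(N, S), all S-subsets.
print(overlap_count(20, 5, 5) == comb(20, 5))  # True

# With O = 0 only subsets disjoint from the fixed one remain: C(N-S, S).
print(overlap_count(20, 5, 0) == comb(15, 5))  # True
```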
--
and D.
"In any case" isn't good enough. Why does it even make sense to say
that the brain sends "entities"? From "L"? So far, all of this is
completely unjustified, and probably not even wrong.
--
enough level to persuade
other people (which is NOT good enough in itself, but barring that,
who are we kidding).
--
be
equivalent to node assemblies with undesirably high cross-talk.
Ed, find my reply where I derive a lower bound. Even if overlap must
be no more than 1 node, you can still have a number of assemblies as
much greater than N as necessary, if N is big enough, given fixed S.
--
, it explodes when
you try to increase N.
But at S=10, O=2, you can see how the lower bound increases as you
increase N. At N=5000, the lower bound is 6000; at N=10^6, it's
2.5*10^8; and at N=10^9, it's 2.5*10^14.
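The excerpt doesn't show the bound's formula, but the quoted figures are consistent with the simple counting expression C(N, O) / C(S, O)^2. A speculative reconstruction, to be checked against the original derivation rather than taken as it:

```python
from math import comb

def lower_bound(N, S=10, O=2):
    # Hypothetical reconstruction of the bound: C(N, O) / C(S, O)^2.
    # This reproduces the figures quoted above to within rounding.
    return comb(N, O) // comb(S, O) ** 2

print(lower_bound(5000))   # 6171, quoted above as 6000
print(lower_bound(10**6))  # ~2.47*10^8, quoted as 2.5*10^8
print(lower_bound(10**9))  # ~2.47*10^14, quoted as 2.5*10^14
```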
--
for
translation. This outward appearance has little bearing on semantic
models.
--
and what it models, in your own head, but this perspective
loses technical precision, although to some extent it's necessary.
--
that complicates semantics of time and more
radical self-improvement). A sufficiently flexible cognitive algorithm
should be able to integrate facts about any domain, becoming able to
generate appropriate behavior in the corresponding contexts.
--
is an instance of analogical reasoning. Looking at it the
other way around, relational similarity is superior to attributional
similarity because the former is more robust than the latter when
there is contextual variation.
--
mix with each other and establish
transitions conditional on external input, thus creating combined
trajectories. And so on. I'll work my way up to this in maybe a couple
of months on the blog, after sequences on fluid representation,
information and goals, and inference surface.
--
results even mean anything, he does a poor job of explaining
what it is and why, and what makes what he says new. Another slim
possibility is that his theory is way over my background, but all the
cues point the other way.
--
is the maximum allowed overlap. In my
last reply I used O incorrectly in the first paragraph, but checked
and used it correctly in the second, with the lower bound.
--
to capture the feedback loop through environment
starting from a single cell, and to include the activity of that cell
in goal-directed control process, based on the effect on the
environment.
--
lower bound is trivial, and answers the question. It's likely
somewhere in the references there.
--
has the same
weight, w. In a bounded-weight (w) code, every word has at most w
ones.
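The two definitions can be illustrated with a toy enumeration (a sketch of mine, not from the referenced text):

```python
from itertools import product

n, w = 4, 2
words = ["".join(bits) for bits in product("01", repeat=n)]

# Constant-weight code: every word has exactly w ones.
constant = [x for x in words if x.count("1") == w]
# Bounded-weight (w) code: every word has at most w ones.
bounded = [x for x in words if x.count("1") <= w]

print(len(constant))  # 6  = C(4,2)
print(len(bounded))   # 11 = C(4,0) + C(4,1) + C(4,2)
```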
--
sense to assume an upper
bound on their size...
Which is why I don't like this whole fuss about cell assemblies in the
first place, and prefer free exploration of Hamming space. ;-)
--
Seelig,
R Aschenbrenner-Scheibe, MJ Kahana
Proceedings of the National Academy of Sciences of the United States
of America, Vol. 100, No. 13. (24 June 2003), pp. 7931-7936.
--
with higher complexity can guess a superset of environments that a lower
complexity agent could, and therefore cannot do worse in accumulated reward.
Interstellar void must be astronomically intelligent, with all its
incompressible noise...
--
Property'? (that's a rhetorical question, just in case there
was any doubt!)
I'd like to suggest that the COMP=false thread be considered a completely
mis-placed, undebatable and dead topic on the AGI list.
That'd be great.
--
are more difficult, and I
don't want another workflow to worry about. Using notifications
complicates access, and transparent notifications that post all the
content to e-mail make a forum equivalent to a mailing list anyway.
A mailing list also forces better coherence on the discussion.
--
, limitations on
complexity are pragmatically void.
--
argument. Using the fact of 2+2=4 won't give
technical support to e.g. philosophy of solipsism.
--
structure.
--
reasoning by wearing a lab coat.
--
with probability theory
in the true Bayesian way? ;-)
http://www.overcomingbias.com/2007/12/cult-koans.html
--
.
--
facts,
deploying the solutions.
--
, dull way. See a picture of a 6-layer neural
network in the link below.
Stephen Thaler
Creativity machine: http://www.imagination-engines.com/cm.htm
--
On Tue, Sep 23, 2008 at 12:23 AM, Eric Burton [EMAIL PROTECTED] wrote:
Creativity machine: http://www.imagination-engines.com/cm.htm
Six layers, though? Perhaps the result is magic!
Yes, and magic only works in la-la land.
--
for it)? This is an easily falsifiable statement, if a
small group implements AGI, you'll be proven wrong.
--
the question: you are making a very strong assertion by
effectively saying that there is no shortcut, period (in the
short-term perspective, anyway). How sure are you in this assertion?
--
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Hence the question: you are making a very strong assertion by
effectively saying that there is no shortcut, period (in the
short-term perspective, anyway). How
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
So, do you think that there is at least, say, 99% probability that AGI
won't be developed by a reasonably small group in the next 30 years?
Yes
about harvesting.
--
with yourself as the main
character, and ask yourself who you want to be.
--
grateful for your feedback. I'm
also very curious to see the results of your approach, and those of others
here... I may be critical of what you're trying to do, but that doesn't mean
I think you shouldn't do it (in most cases anyway :-] ).
--
take an algorithm currently fueled by intelligence (human
economy), take intelligence out of it and hope that there will be
enough traces of intelligence essence left to do the work regardless.
--
keys are incorrect! This is a big discovery,
therefore this first bit of information must be really important.
Nope.
--
iron before you win this lottery blindly.
--
not important, unless these people start to pose a serious
threat to the project. You need to care about what is the correct
answer, not what is a popular one, in the case where popular answer is
dictated by ignorance.
P.S. AGI? I'm again not sure what we are talking about here.
--
or the cook.
Sorry Terren, I don't understand what you are trying to say in the
last two sentences. What does "considering itself Friendly" mean, and
how does it figure into FAI, as you use the phrase? What (I assume) kind
of experiment or arbitrary decision are you talking about?
--
it to be Friendly, you don't generate an arbitrary AI and
then test it. The latter, if not outright fatal, might indeed prove
impossible as you suggest, which is why there is little to be gained
from AI-boxes.
--
On Thu, Aug 28, 2008 at 8:29 PM, Terren Suydam [EMAIL PROTECTED] wrote:
Vladimir Nesov [EMAIL PROTECTED] wrote:
AGI doesn't do anything with the question, you do. You answer the
question by implementing Friendly AI. FAI is the answer to the
question.
The question is: how could one
On Thu, Aug 28, 2008 at 9:08 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
One of the main motivations for the fast development of Friendly AI is
that it can be allowed to develop superintelligence to police the
human space from
On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
You start with "what is right?" and end with Friendly AI; you don't
start with Friendly AI and close the circular argument. This doesn't
answer the question
On Sat, Aug 30, 2008 at 9:18 PM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
Given the psychological unity of humankind, giving the focus of
"right" to George W. Bush personally will be enormously better for
everyone than going in any