On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore wrote:
> Vladimir Nesov wrote:
>>
>> On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore
>> wrote:
>>>
>>> The whole point about the paper referenced above is that they are
>>> collecting
e to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is that relevant to the study of AGI?
--
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/
> The short version of the overall story is that neuroscience is out of
> control as far as overinflated claims go.
>
Richard, even if your concerns are somewhat valid, why is it
interesting here? It's not like neuroscience is dominated by
discussions of (mis)interpretation of results,
economy, we have to replicate the capabilities of not one human mind, but a
> system of 10^10 minds. That is why my AGI proposal is so hideously expensive.
> http://www.mattmahoney.net/agi2.html
>
Let's fire Matt and hire 10 chimps instead.
--
On Tue, Jan 13, 2009 at 7:50 AM, YKY (Yan King Yin)
wrote:
> On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov wrote:
>
>> I'm more interested in understanding the relationship between
>> inference system and environment (rules of the game) that it allows to
>> reas
ful. It looks like many logics become too wrapped
up in themselves, and their development as ways to AI turns into a
wild goose chase.
--
---
agi
Archives: https://www.listbox.com/membe
Ronald,
It is NOT OK to post utter nonsense. Don't.
--
On Fri, Jan 9, 2009 at 8:48 PM, Harry Chesley wrote:
> On 1/9/2009 9:28 AM, Vladimir Nesov wrote:
>>
>> You need to name those parameters in a sentence only because it's
>> linear, in a graph they can correspond to unnamed nodes. Abstractions
>> can have struct
ntion, that'd be your abstraction.
--
ng to your definition of simulation in the previous message
(that includes a special format for request for simulation), no
contradictions, and you've got an example.
--
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney wrote:
>
> Your earlier counterexample was a trivial simulation. It simulated itself but
> did
> nothing else. If P did something that Q didn't, then Q would not be
> simulating P.
My counterexample also bragged, outside the input format that
request
seudomathematical assertion of yours once. You don't pay enough
attention to formal definitions: what this "has a description" means,
and which reference TMs specific Kolmogorov complexities are measured
in.
--
f unfounded,
> simplistic hyperbole I'd expect from your average science reporter.
> ;-)
>
Here is a critique of the article:
http://neurocritic.blogspot.com/2008/12/deal-no-deal-or-dots.html
--
t'll make your own thinking clearer if nothing else.
--
only 1e-30 chance of working, it's
no use. Developing synthetic creativity is one of the aspects of the
quest of AGI research, and understood and optimized algorithms of
creativity should allow building ideas that are strong from the
beginning, verification part of the process. Although it all sounds
k
lly
described using lexicon from CopyCat (slippages, temperature,
salience, structural analogy), even though the algorithm at the low level
is different.
--
tend this style of
algorithm to anything interesting, too much gets projected into
manually specified parameters and narrow domain.
--
sible is a problem for higher intelligence, not present
> day computer intelligence.
>
Was this text even supposed to be coherent?
--
lts, but the idea of simple hypotheses prior and proof that it
does well at learning are Solomonoff's.
See ( http://www.scholarpedia.org/article/Algorithmic_probability )
for introduction.
--
final product of expression are relatively
loose. These are rules of the game that enable the complexity of
skill to emerge, not square bounds on imagination. Most of the work
comes from creative process, not from formality.
--
as strongly asserting anything. They are just saying
the same thing in a different language you don't like or consider
meaningless, but it's a question of definitions and style, not
essence, as long as the audience of the paper doesn't get confus
presentation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.
So, at this point I see at least this item in your paper as a strawman
objection (given that I didn't revisit other items).
--
nvalidate analysis considering individual cells
or small areas of cortex, just as the gravitational pull of Mars
doesn't invalidate approximate calculations made on Earth according to
Newton's laws. I don't quite see what you are criticizing, apart from
specific examples of apparen
em with that.
Still, it's so murky even for simple correlates that no good overall
picture exists.
--
Referencing your own work is obviously not what I was asking for.
Still, something more substantial than "neuron is not a concept", as
an example of "cognitive theory"?
On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Vladimir Nesov wro
oes a rock compute Fibonacci numbers just to a lesser degree than
this program? A concept, like any other. Also, some shades of gray are
so thin you'd run out of matter in the Universe to track all the
things that are that light.
--
is 30 or 40 years out of date.
>
Could you give some references, to be specific about what you mean?
Examples of what you consider outdated cognitive theory and better
cognitive theory.
--
On Fri, Nov 21, 2008 at 2:03 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> The main problem is that if you interpret spike timing to be playing the
>> role that you (and they) im
use this notion of "freedom"
to establish asymmetry:
"The controller C may change the state of the controlled system S in
any way, including the destruction of S."
--
", and, having trouble with free
will-like issues, produces a combination of brittle and nontechnical
assertions. As a result, in his own example (at the very end of
section 2), a doctor is considered "in control" of treating a patient
only if he can prescribe *arbitrary* treatment th
Here's a link to the paper:
http://wpcarey.asu.edu/pubs/index.cfm?fct=details&article_cobid=2216410&author_cobid=1039524&journal_cobid=2216411
--
pts, it
> will be aware of its consciousness.
>
> I will take that argument further in another paper, because we need to
> understand animal minds, for example.
It's a hard and iffy business trying to recast a different architecture
in the language that involves these bottomless concepts and
language of perceptual wiring,
with correspondence between qualia and areas implementing
modalities/receiving perceptual input.
You didn't argue about the general case of AGI, so how does it follow
that any AGI is bound to be conscious?
--
expected variations of context, which
distinguishes it from mere correlation, when you can set up a context
that breaks it. My thoughts on this subject are written up here:
http://causalityrelay.wordpress.com/2008/08/01/causal-rules/
http://causalityrelay.wordpress.com/2008/08/04/causation-and-actions
ng step is wrong. If you need to find the best solution to
x*3=7, but you can only use integers, the perfect solution is
impossible, but it doesn't mean that we are justified in using x=3
that looks good enough, as x=2 is the best solution given limitations.
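The arithmetic here can be checked mechanically; a minimal sketch (the search range is an arbitrary illustrative choice of mine, not part of the original argument):

```python
# Best integer approximation to x*3 = 7: minimize the error |3*x - 7|
# over integers. x = 2 gives error |6 - 7| = 1, x = 3 gives |9 - 7| = 2,
# so x = 2 is the optimum under the integer limitation.
best = min(range(-10, 11), key=lambda x: abs(3 * x - 7))
print(best)  # 2
```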
--
tion under uncertainty? How do you know when
> you've gotten there?
>
> If you don't believe in ad-hoc then you must have an algorithmic solution .
> . . .
>
I pointed out only that it doesn't follow from AIXI that ad-hoc is justified.
--
On Sat, Oct 25, 2008 at 1:17 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Note that people are working on this specific technical problem for 30
>> years, (see the scary amount of wo
ow, it looks like a long way there. I'm currently shifting
towards probabilistic analysis of huge formal systems in my thinking
about AI (which is why chess looks interesting again, in an entirely
new light). Maybe I'll understand this area better in months to come.
--
On Sat, Oct 25, 2008 at 3:17 AM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> This general sentiment doesn't help if I don't know what to do specifically.
>
> Well, given a C/C++ pro
here and
exploit its computational potential on industrial scale. ;-)
--
ect solution is
impossible, you could still have an optimal approximation under given
limitations.
--
On Fri, Oct 24, 2008 at 10:30 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> I write software for analysis of C/C++ programs to find bugs in them
>> (dataflow analysis, etc.). Where does AI
On Fri, Oct 24, 2008 at 9:28 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> If it's not supposed to be a generic language war, that becomes relevant.
>
> Fair point. On the other ha
On Fri, Oct 24, 2008 at 8:47 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> You are describing it as a step one, with writing huge specifications
>> by hand in formally interpretable langu
On Fri, Oct 24, 2008 at 8:29 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> It's a specific problem: jumping right to the code generation to
>> specification doesn't work, beca
On Fri, Oct 24, 2008 at 7:24 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> If that allows AI to understand the code, without directly helping it.
>> In this case teaching it to understand
ut you can get hold of internal representation of any language and
emulate/compile/analyze it. That's not really the point; the point is
the simplicity of this process. Where simplicity matters is the question
that needs to be answered before that.
--
On Fri, Oct 24, 2008 at 6:39 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 3:24 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Again, specifics. What is this "specification" thing? What kind of
>> task are to be specified in it?
On Fri, Oct 24, 2008 at 6:16 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> I'd write this specification in language it understands, including a
>> library that builds more convenient pr
On Fri, Oct 24, 2008 at 5:54 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> I'd write it in a separate language, developed for human programmers,
>> but keep the language with whi
On Fri, Oct 24, 2008 at 5:42 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Well, my point was that maybe the mistake is use of additional
>> language constructions and not their abse
On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
>> Needing many different
>> features just doesn't look like a natural thing for AI-generated
>> p
On Fri, Oct 24, 2008 at 1:36 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Russel, in what capacity do you use that language?
>
> In all capacities, for both hand written and machine gen
real lisp with all its bells and whistles.
--
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington
<[EMAIL PROTECTED]> wrote:
> On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> If you consider programming an AI social activity, you very
>> unnaturally generalized this term, confusing othe
On Thu, Oct 23, 2008 at 2:22 AM, Trent Waddington
<[EMAIL PROTECTED]> wrote:
> On Wed, Oct 22, 2008 at 8:24 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>> Current AIs learn chess without engaging in social activities ;-).
>> And chess might be a good drosophila for
you receive, but how
fast you can improve your model.
--
uired a
little bit of familiarity with algorithms on graphs/discrete math.
--
se Lojban allows for more ambiguity (as well as Cyc-L level precision,
> depending on speaker's choice) ... and of course Lojban is intended for
> interactive conversation rather than knowledge entry
>
(as tools towards improving bandwidth of experience, they do the same thing)
--
oof-of-concept level results about efficiency
without resorting to Cycs and Lojbans, and after that they'll turn out
to be irrelevant.
--
emember about it 10 years later, retouch the most
annoying holes with simple statistical techniques, and continue as
before.
--
learn chess without engaging in social activities ;-).
And chess might be a good drosophila for AI, if it's treated as such (
http://www-formal.stanford.edu/jmc/chess.html ).
This was uncalled for.
--
t combinations of sets, it's a filter on the
individual sets from the total of C(N,S).
>
> --Sixth, if C(S,X)*C(N-S,S-X) enumerates all possible combinations having an
> overlap of X, why can't one calculate A as follows?
>
> A = SUM FROM X = 0 T
es can be added without conflicts, since
T(N,S,O) is the maximum number of assemblies that each one in the pool
is able to subtract from the total pool of assemblies.
--
xcel.
>
Your spreadsheet doesn't catch it for S=100 and O=1; it explodes when
you try to increase N.
But at S=10, O=2, you can see how the lower bound increases as you
increase N. At N=5000, the lower bound is 6000; at N=10^6, it's 2.5*10^8;
and at N=10^9 it's 2.5*10^14.
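The figures quoted are consistent with a simple greedy counting bound: of the C(N,S) candidate S-node assemblies, each chosen assembly conflicts (shares an O-node subset) with at most C(S,O)*C(N-O,S-O) others. A sketch of that bound; the function name and the exact greedy form are my reconstruction, not taken verbatim from the thread:

```python
from math import comb

def assemblies_lower_bound(N, S, O):
    # Greedy bound: picking assemblies one at a time, each pick removes
    # at most comb(S, O) * comb(N - O, S - O) candidates from the pool
    # of comb(N, S), so at least this many compatible assemblies
    # (pairwise overlap below O nodes) must exist.
    return comb(N, S) // (comb(S, O) * comb(N - O, S - O))

for N in (5000, 10**6, 10**9):
    print(N, assemblies_lower_bound(N, S=10, O=2))
```

For S=10, O=2 this gives roughly 6*10^3, 2.5*10^8 and 2.5*10^14 at N=5000, 10^6 and 10^9, in agreement with the figures quoted above.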
--
IVELY LARGE, which would be
> equivalent to node assemblies with undesirably high cross talk.
Ed, find my reply where I derive a lower bound. Even if the overlap must
be no more than 1 node, you can still have a number of assemblies
exceeding N by as large a factor as necessary, if N is big enough, given fixed S
r we are
going in the right direction at a level at least good enough to persuade
other people (which is NOT good enough in itself, but barring that,
who are we kidding).
--
n between language model and D.
>
"In any case" isn't good enough. Why does it even make sense to say
that the brain "sends" "entities"? From "L"? So far, all of this is
completely unjustified, and probably not even wrong.
--
nction between them. As a given,
interaction happens at the narrow I/O interface, and anything else is
a design decision for a specific AI (even invariability of I/O is a
simplifying assumption that complicates semantics of time and more
radical self-improvement). Sufficiently flexible cognitive
aning to aspects of operation of AGI, and to relations
between AGI and what it models, in your own head, but this perspective
loses technical precision, although to some extent it's necessary.
--
t it's easy to see how our
technology, as physical medium, transfers information ready for
translation. This outward appearance has little bearing on semantic
models.
--
ut of context). Thus reasoning with contextual
information is an instance of analogical reasoning. Looking at it the
other way around, relational similarity is superior to attributional
similarity because the former is more robust than the latter when
there is contextua
l oscillations during a working memory task.
by: DS Rizzuto, JR Madsen, EB Bromfield, A Schulze-Bonhage, D Seelig,
R Aschenbrenner-Scheibe, MJ Kahana
Proceedings of the National Academy of Sciences of the United States
of America, Vol. 100, No. 13. (24 June 2003), p
a fixed size ... but it makes slightly more sense to assume an upper
> bound on their size...
>
Which is why I don't like this whole fuss about cell assemblies in the
first place, and prefer free exploration of Hamming space. ;-)
--
ght (w) code, every word in the code has the same
weight, w. In a bounded-weight (w) code, every word has at most w
ones."
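The two definitions can be illustrated with a toy enumeration over 4-bit words (my own example, not from the quoted message):

```python
from itertools import product

# All 4-bit binary words; the candidate codewords of a constant-weight(2)
# code have exactly two ones, those of a bounded-weight(2) code at most two.
words = list(product((0, 1), repeat=4))
constant_w2 = [w for w in words if sum(w) == 2]   # weight exactly 2
bounded_w2  = [w for w in words if sum(w) <= 2]   # weight at most 2

print(len(constant_w2))  # C(4,2) = 6
print(len(bounded_w2))   # C(4,0) + C(4,1) + C(4,2) = 1 + 4 + 6 = 11
```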
--
nt-weight binary code, not bounded-weight though).
My lower bound is trivial, and answers the question. It's likely
somewhere in the references there.
--
dle of nowhere.
The brain is able to capture the feedback loop through the environment,
starting from a single cell, and to include the activity of that cell
in a goal-directed control process, based on the effect on the
environment.
--
Vladimir Nesov
[EMAIL PROTECTED]
http://caus
l problem, O-1 is the maximum allowed overlap. In my
last reply I used O incorrectly in the first paragraph, but checked
and used it correctly in the second, with the lower bound.
--
order S-O with respect to N, whereas
C(N,S) is bounded by a polynomial of order S. Thus, even if you only allow
an overlap of O=1 (so that no two cell assemblies are allowed to have
even two nodes in common), you can get an arbitrarily large number of
cell assemblies (including in proportion to N) by choosing bi
and should be shown to make sense in
this context.
Based on "confabulation" papers, I find Hecht-Nielsen deeply confused.
If his results even mean anything, he does a poor job of explaining
what it is and why, and what makes what he says new. Another slim
possibility is that hi
equences on fluid representation,
information and goals, and inference surface.
--
mplexity. There is plenty
of ground to cover in the space of simple things; limitations on
complexity are pragmatically void.
--
list also forces better coherence to the discussion.
--
I expected to feel when you say the words
> 'Intellectual Property'? (that's a rhetorical question, just in case there
> was any doubt!)
>
> I'd like to suggest that the COMP=false thread be considered a completely
> mis-placed, undebatable and dead topic o
nvironment with algorithmic complexity K, the agent must be able
> to simulate the environment, so it must also have algorithmic complexity K. An
> agent with higher complexity can guess a superset of environments that a lower
> complexity agent could, and therefore cannot do worse in accumula
ment is weaker than the
original informal argument it was invented to support, there is no
point in the technical argument. Using the fact that 2+2=4 won't give
technical support to, e.g., the philosophy of solipsism.
--
which to look at less well
understood hacks, to feel the underlying structure.
--
lmqg0.jpg
>
Don't you know that only a clown suit interacts with probability theory
in the true Bayesian way? ;-)
http://www.overcomingbias.com/2007/12/cult-koans.html
--
e that is thousands of miles
away, years ago, and only ever existed virtually. You can't adapt
known physics to do THAT. You'd need an intelligent meddler. And you
can't escape flaws in your reasoning by wearing a lab coat.
--
--
On Tue, Sep 23, 2008 at 12:23 AM, Eric Burton <[EMAIL PROTECTED]> wrote:
>> Creativity machine: http://www.imagination-engines.com/cm.htm
>
> Six layers, though? Perhaps the result is magic!
>
Yes, and magic only works in the la-la land.
--
shuman level
intelligence."
Hilarious -- in a sad, dull way. See a picture of a 6-layer neural
network in the link below.
Stephen Thaler
Creativity machine: http://www.imagination-engines.com/cm.htm
--
once and for all, deciding how to solve
the problems, designing appropriate tools, learning required facts,
deploying the solutions.
--
every grain
harvester combine for 30 years about harvesting.
--
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sun, 9/21/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
>> So, do you think that there is at least, say, 99%
>> probability that AGI
>> won't be developed by a reaso
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Sun, 9/21/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
>> Hence the question: you are making a very strong assertion by
>> effectively saying that there is no shortcut, period (in
we
> would
> have figured it out by now.
>
Hence the question: you are making a very strong assertion by
effectively saying that there is no shortcut, period (in the
short-term perspective, anyway). How sure are you in this assertion?
--
et
you'd take for it)? This is an easily falsifiable statement: if a
small group implements AGI, you'll be proven wrong.
--
aning steam engine has a place in your heart, you
need to stop writing a science fiction novel with yourself as the main
character, and ask yourself who you want to be. "
--
my mind when it seems like my current ideas are inadequate. And
> of course, to provide the same kind of feedback for others when I have
> something to contribute. In that spirit, I'm grateful for your feedback. I'm
> also very curious to see the results of your approach, and th