Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
go. Richard, even if your concerns are somewhat valid, why is it interesting here? It's not like neuroscience is dominated by discussions of (mis)interpretation of results, they are collecting data, and with that they are steadily getting somewhere. -- Vladimir Nesov robot...@gmail.com http

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
little to do with neuroscience. The field as a whole is hardly mortally afflicted with that problem (whether it's even real or not). If you look at any field large enough, there will be bad science. How is it relevant to study of AGI? -- Vladimir Nesov robot...@gmail.com http

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore r...@lightlink.com wrote: Vladimir Nesov wrote: On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com wrote: The whole point about the paper referenced above is that they are collecting (in a large number of cases) data

Re: [agi] fuzzy-probabilistic logic again

2009-01-13 Thread Vladimir Nesov
On Tue, Jan 13, 2009 at 7:50 AM, YKY (Yan King Yin) generic.intellige...@gmail.com wrote: On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov robot...@gmail.com wrote: I'm more interested in understanding the relationship between inference system and environment (rules of the game) that it allows

Re: [agi] just a thought

2009-01-13 Thread Vladimir Nesov
the capabilities of not one human mind, but a system of 10^10 minds. That is why my AGI proposal is so hideously expensive. http://www.mattmahoney.net/agi2.html Let's fire Matt and hire 10 chimps instead. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com

Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread Vladimir Nesov
too wrapped up in themselves, and their development as ways to AI turns into a wild goose chase. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/ --- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https

Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Vladimir Nesov
of simulation in the previous message (that includes a special format for request for simulation), no contradictions, and you've got an example. -- Vladimir Nesov

Re: [agi] Identity abstraction

2009-01-09 Thread Vladimir Nesov
, that'd be your abstraction. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Identity abstraction

2009-01-09 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 8:48 PM, Harry Chesley ches...@acm.org wrote: On 1/9/2009 9:28 AM, Vladimir Nesov wrote: You need to name those parameters in a sentence only because it's linear, in a graph they can correspond to unnamed nodes. Abstractions can have structure

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
don't pay enough attention to formal definitions: what "this has a description" means, and which reference TMs specific Kolmogorov complexities are measured in. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
that length(P) &lt; length(Q), and longer strings can easily have smaller programs that output them. If P is 10^(10^10) symbols X, and Q is some random number of X smaller than 10^(10^10), it's probably K(P) &lt; K(Q), even though Q is a substring of P. -- Vladimir Nesov robot...@gmail.com http
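[Editor's note: the point above has a computable stand-in. Kolmogorov complexity itself is uncomputable, but compressed size upper-bounds description length, and a sketch with zlib (arbitrary toy sizes) shows a much longer but regular string "costing" less than a shorter random one:]

```python
import random
import zlib

# P: a long but highly regular string; Q: a shorter but random one.
random.seed(0)
P = b"X" * 100_000
Q = bytes(random.randrange(256) for _ in range(10_000))

# zlib output length is a crude, computable upper bound on description length.
kP = len(zlib.compress(P))
kQ = len(zlib.compress(Q))

print(len(P) > len(Q))  # P is ten times longer than Q...
print(kP < kQ)          # ...yet its "program" (compressed form) is far smaller.
```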

Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-30 Thread Vladimir Nesov
average science reporter. ;-) Here is a critique of the article: http://neurocritic.blogspot.com/2008/12/deal-no-deal-or-dots.html -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-24 Thread Vladimir Nesov
so, it'll make your own thinking clearer if nothing else. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Should I get a PhD?

2008-12-18 Thread Vladimir Nesov
creativity is one of the aspects of the quest of AGI research, and understood and optimized algorithms of creativity should allow building ideas that are strong from the beginning, reducing the verification part of the process. Although it all sounds kinda warped in this language. -- Vladimir Nesov robot

[agi] CopyCat

2008-12-17 Thread Vladimir Nesov
specified parameters and narrow domain. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] CopyCat

2008-12-17 Thread Vladimir Nesov
(slippages, temperature, salience, structural analogy), even though the algorithm on the low level is different. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] AIXI

2008-12-07 Thread Vladimir Nesov
. Was this text even supposed to be coherent? -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] AIXI

2008-12-01 Thread Vladimir Nesov
. See ( http://www.scholarpedia.org/article/Algorithmic_probability ) for introduction. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] The Future of AGI

2008-11-26 Thread Vladimir Nesov
of final product of expression are relatively loose. These are rules of the game, that enable the complexity of skill to emerge, not square bounds on imagination. Most of the work comes from creative process, not from formality. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
don't quite see what you are criticizing, apart from specific examples of apparent confusion. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
not completely explicit. So, at this point I see at least this item in your paper as a strawman objection (given that I didn't revisit other items). -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
in a different language you don't like or consider meaningless, but it's a question of definitions and style, not essence, as long as the audience of the paper doesn't get confused. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
Here's a link to the paper: http://wpcarey.asu.edu/pubs/index.cfm?fct=details&article_cobid=2216410&author_cobid=1039524&journal_cobid=2216411 -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
and nontechnical assertions. As a result, in his own example (at the very end of section 2), a doctor is considered in control of treating a patient only if he can prescribe *arbitrary* treatment that doesn't depend on the patient (or his illness). -- Vladimir Nesov [EMAIL PROTECTED] http

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
of the controlled system S in any way, including the destruction of S. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
give some references to be specific in what you mean? Examples of what you consider outdated cognitive theory and better cognitive theory. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Vladimir Nesov
just to a lesser degree than this program? A concept, like any other. Also, some shades of gray are so thin you'd run out of matter in the Universe to track all the things that light. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
Referencing your own work is obviously not what I was asking for. Still, something more substantial than neuron is not a concept, as an example of cognitive theory? On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Vladimir Nesov wrote: Could you give some references

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
overall picture exists. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
perceptual input. You didn't argue about a general case of AGI, so how does it follow that any AGI is bound to be conscious? -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
connects to what, what can be inferred from what, what indicates what. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

[agi] Re: Causality and science

2008-10-26 Thread Vladimir Nesov
are written up here: http://causalityrelay.wordpress.com/2008/08/01/causal-rules/ http://causalityrelay.wordpress.com/2008/08/04/causation-and-actions/ -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 3:17 AM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: This general sentiment doesn't help if I don't know what to do specifically. Well, given a C/C++ program that does have buffer overrun or stray

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
interesting again, in an entirely new light). Maybe I'll understand this area better in months to come. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 1:17 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: Note that people are working on this specific technical problem for 30 years, (see the scary amount of work by Cousot's lab, http://www.di.ens.fr

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
you've gotten there? If you don't believe in ad-hoc then you must have an algorithmic solution . . . . I pointed out only that it doesn't follow from AIXI that ad-hoc is justified. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
solution to x*3=7, but you can only use integers, the perfect solution is impossible, but it doesn't mean that we are justified in using x=3 that looks good enough, as x=2 is the best solution given limitations. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com
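[Editor's note: the arithmetic in the snippet checks out, and is trivially verifiable in code. The search range below is an arbitrary choice wide enough to contain the minimum:]

```python
# Best integer approximation to x*3 = 7: minimize |3x - 7| over a small range.
best = min(range(-10, 11), key=lambda x: abs(3 * x - 7))
print(best)  # 2: |3*2 - 7| = 1 beats x = 3, where |3*3 - 7| = 2
```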

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
lisp with all its bells and whistles. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 1:36 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: Russel, in what capacity do you use that language? In all capacities, for both hand written and machine generated content. Why mix AI-written

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: Needing many different features just doesn't look like a natural thing for AI-generated programs. No, it doesn't, does it? And then you run

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 5:42 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: Well, my point was that maybe the mistake is use of additional language constructions and not their absence? You yourself should be able

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 5:54 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: I'd write it in a separate language, developed for human programmers, but keep the language with which AI interacts minimalistic, to understand how

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 6:16 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: I'd write this specification in language it understands, including a library that builds more convenient primitives from that foundation

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
not really the point, the point is simplicity of this process. Where simplicity matters is the question that needs to be answered before that. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 7:24 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: If that allows AI to understand the code, without directly helping it. In this case teaching it to understand these other languages might

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 8:29 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: It's a specific problem: jumping right from specification to code generation doesn't work, because you'd need too much specification. At the same

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 8:47 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: You are describing it as a step one, with writing huge specifications by hand in formally interpretable language. I skipped a lot of details

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 9:28 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: If it's not supposed to be a generic language war, that becomes relevant. Fair point. On the other hand, I'm not yet ready to write a detailed

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 10:30 PM, Russell Wallace [EMAIL PROTECTED] wrote: On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote: I write software for analysis of C/C++ programs to find bugs in them (dataflow analysis, etc.). Where does AI come into this? I'd really like

Re: [agi] constructivist issues

2008-10-24 Thread Vladimir Nesov
. ;-) -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Vladimir Nesov
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington [EMAIL PROTECTED] wrote: On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: If you consider programming an AI social activity, you very unnaturally generalized this term, confusing other people. Chess programs do learn

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Vladimir Nesov
activities ;-). And chess might be a good drosophila for AI, if it's treated as such ( http://www-formal.stanford.edu/jmc/chess.html ). This was uncalled for. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
the same thing) -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
without conflicts, since T(N,S,O) is the maximum number of assemblies that each one in the pool is able to subtract from the total pool of assemblies. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
calculate A as follows? A = SUM FROM X = 0 TO O OF C(S,X)*C(N-S,S-X) Because some of these sets intersect with each other, you can't include them all. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/
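[Editor's note: a sanity check on that sum, with small hypothetical N and S. The term C(S,X)*C(N-S,S-X) counts S-node assemblies sharing exactly X nodes with one fixed S-node assembly, so summing over all X recovers C(N,S) (Vandermonde's identity), while truncating at X = O counts only the assemblies whose overlap with the fixed one is at most O:]

```python
from math import comb

def assemblies_by_overlap(N, S, O):
    """S-node assemblies (out of N nodes) sharing at most O nodes
    with one fixed S-node assembly."""
    # C(S, X) ways to pick the shared nodes, C(N - S, S - X) ways to pick the rest.
    return sum(comb(S, X) * comb(N - S, S - X) for X in range(0, O + 1))

N, S = 20, 5  # small, hypothetical sizes
# Allowing any overlap (O = S) counts every assembly: Vandermonde's identity.
print(assemblies_by_overlap(N, S, S) == comb(N, S))  # True
# Bounding the overlap excludes assemblies that share too many nodes.
print(assemblies_by_overlap(N, S, 2) < comb(N, S))   # True
```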

Re: [agi] Re: Meaning, communication and understanding

2008-10-20 Thread Vladimir Nesov
and D. In any case isn't good enough, Why does it even make sense to say that brain sends entities? From L? So far, all of this is completely unjustified, and probably not even wrong. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

[agi] Re: Value of philosophy

2008-10-20 Thread Vladimir Nesov
enough level to persuade other people (which is NOT good enough in itself, but barring that, who are we kidding). -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
be equivalent to node assemblies with undesirably high cross talk. Ed, find my reply where I derive a lower bound. Even if overlap must be no more than 1 node, you can still have a number of assemblies as much more than N as necessary, if N is big enough, given fixed S. -- Vladimir Nesov [EMAIL

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
, it explodes when you try to increase N. But at S=10, O=2, you can see how lower bound increases as you increase N. At N=5000, lower bound is 6000, at N=10^6, it's 2.5*10^8, and at N=10^9 it's 2.5*10^14. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

[agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
for translation. This outward appearance has little bearing on semantic models. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
and what it models, in your own head, but this perspective loses technical precision, although to some extent it's necessary. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
that complicates semantics of time and more radical self-improvement). Sufficiently flexible cognitive algorithm should be able to integrate facts about any domain, becoming able to generate appropriate behavior in corresponding contexts. -- Vladimir Nesov [EMAIL PROTECTED] http

Re: [agi] Reasoning by analogy recommendations

2008-10-17 Thread Vladimir Nesov
is an instance of analogical reasoning. Looking at it the other way around, relational similarity is superior to attributional similarity because the former is more robust than the latter when there is contextual variation. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
mix with each other and establish transitions conditional on external input, thus creating combined trajectories. And so on. I'll work my way up to this in maybe a couple of months on the blog, after sequences on fluid representation, information and goals, and inference surface. -- Vladimir Nesov

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
results even mean anything, he does a poor job of explaining what it is and why, and what makes what he says new. Another slim possibility is that his theory is way over my background, but all the cues point the other way. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
is the maximum allowed overlap; in my last reply I used O incorrectly in the first paragraph, but checked and used it correctly in the second with the lower bound. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

[agi] Brain-muscle interface helps paralysed monkeys move

2008-10-16 Thread Vladimir Nesov
to capture the feedback loop through environment starting from a single cell, and to include the activity of that cell in goal-directed control process, based on the effect on the environment. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
lower bound is trivial, and answers the question. It's likely somewhere in the references there. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
has the same weight, w. In a bounded-weight (w) code, every word has at most w ones. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com
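[Editor's note: the connection between constant-weight codes and node assemblies can be made concrete with a hedged sketch at toy sizes. Each S-element subset of N nodes is a weight-S codeword; the greedy lexicographic order below is an arbitrary choice, not an optimal construction:]

```python
from itertools import combinations

def greedy_assemblies(N, S, O):
    """Greedily collect S-node assemblies (constant-weight codewords)
    whose pairwise overlap never exceeds O nodes."""
    chosen = []
    for candidate in combinations(range(N), S):
        cset = set(candidate)
        # Keep the candidate only if it overlaps every chosen assembly in <= O nodes.
        if all(len(cset & set(prev)) <= O for prev in chosen):
            chosen.append(candidate)
    return chosen

code = greedy_assemblies(N=12, S=4, O=1)
# Every pair of assemblies shares at most one node, so cross-talk stays bounded.
assert all(len(set(a) & set(b)) <= 1 for a, b in combinations(code, 2))
print(len(code))
```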

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
sense to assume an upper bound on their size... Which is why I don't like this whole fuss about cell assemblies in the first place, and prefer free exploration of Hamming space. ;-) -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Networks, memory capacity, grid cells...

2008-10-16 Thread Vladimir Nesov
Seelig, R Aschenbrenner-Scheibe, MJ Kahana Proceedings of the National Academy of Sciences of the United States of America, Vol. 100, No. 13. (24 June 2003), pp. 7931-7936. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Vladimir Nesov
with higher complexity can guess a superset of environments that a lower complexity agent could, and therefore cannot do worse in accumulated reward. Interstellar void must be astronomically intelligent, with all its incompressible noise... -- Vladimir Nesov [EMAIL PROTECTED] http

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Vladimir Nesov
Property'? (that's a rhetorical question, just in case there was any doubt!) I'd like to suggest that the COMP=false thread be considered a completely mis-placed, undebatable and dead topic on the AGI list. That'd be great. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Vladimir Nesov
are more difficult, and I don't want another workflow to worry about. Using notifications complicates access, and transparent notifications that post all the content to e-mail make forum equivalent to a mailing list anyway. Mailing list also forces better coherence to the discussion. -- Vladimir

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
, limitations on complexity are pragmatically void. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
argument. Using the fact of 2+2=4 won't give technical support to e.g. philosophy of solipsism. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] a mathematical explanation of AI algorithms?

2008-10-08 Thread Vladimir Nesov
structure. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] COMP = false

2008-10-06 Thread Vladimir Nesov
reasoning by wearing a lab coat. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] COMP = false

2008-10-06 Thread Vladimir Nesov
with probability theory in the true Bayesian way? ;-) http://www.overcomingbias.com/2007/12/cult-koans.html -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] COMP = false

2008-10-04 Thread Vladimir Nesov
. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

[agi] Improve generators, not products.

2008-09-22 Thread Vladimir Nesov
facts, deploying the solutions. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Perceptrons Movie

2008-09-22 Thread Vladimir Nesov
, dull way. See a picture of a 6-layer neural network in the link below. Stephen Thaler Creativity machine: http://www.imagination-engines.com/cm.htm -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Perceptrons Movie

2008-09-22 Thread Vladimir Nesov
On Tue, Sep 23, 2008 at 12:23 AM, Eric Burton [EMAIL PROTECTED] wrote: Creativity machine: http://www.imagination-engines.com/cm.htm Six layers, though? Perhaps the result is magic! Yes, and magic only works in the la-la land. -- Vladimir Nesov [EMAIL PROTECTED] http

[agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
for it)? This is an easily falsifiable statement: if a small group implements AGI, you'll be proven wrong. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
the question: you are making a very strong assertion by effectively saying that there is no shortcut, period (in the short-term perspective, anyway). How sure are you in this assertion? -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: Hence the question: you are making a very strong assertion by effectively saying that there is no shortcut, period (in the short-term perspective, anyway). How

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote: So, do you think that there is at least, say, 99% probability that AGI won't be developed by a reasonably small group in the next 30 years? Yes

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
about harvesting. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Vladimir Nesov
with yourself as the main character, and ask yourself who you want to be. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] self organization

2008-09-16 Thread Vladimir Nesov
grateful for your feedback. I'm also very curious to see the results of your approach, and those of others here... I may be critical of what you're trying to do, but that doesn't mean I think you shouldn't do it (in most cases anyway :-] ). -- Vladimir Nesov [EMAIL PROTECTED] http

Re: [agi] self organization

2008-09-15 Thread Vladimir Nesov
take an algorithm currently fueled by intelligence (human economy), take intelligence out of it and hope that there will be enough traces of intelligence essence left to do the work regardless. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] self organization

2008-09-15 Thread Vladimir Nesov
keys are incorrect! This is a big discovery, therefore this first bit of information must be really important. Nope. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] self organization

2008-09-15 Thread Vladimir Nesov
iron before you win this lottery blindly. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] What is Friendly AI?

2008-09-04 Thread Vladimir Nesov
not important, unless these people start to pose a serious threat to the project. You need to care about what is the correct answer, not what is a popular one, in the case where popular answer is dictated by ignorance. P.S. AGI? I'm again not sure what we are talking about here. -- Vladimir Nesov [EMAIL

Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
or the cook. Sorry Terren, I don't understand what you are trying to say in the last two sentences. What does considering itself Friendly means and how it figures into FAI, as you use the phrase? What (I assume) kind of experiment or arbitrary decision are you talking about? -- Vladimir Nesov

Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
it to be Friendly, you don't generate an arbitrary AI and then test it. The latter, if not outright fatal, might indeed prove impossible as you suggest, which is why there is little to be gained from AI-boxes. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Vladimir Nesov
On Thu, Aug 28, 2008 at 8:29 PM, Terren Suydam [EMAIL PROTECTED] wrote: Vladimir Nesov [EMAIL PROTECTED] wrote: AGI doesn't do anything with the question, you do. You answer the question by implementing Friendly AI. FAI is the answer to the question. The question is: how could one

Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Vladimir Nesov
On Thu, Aug 28, 2008 at 9:08 PM, Terren Suydam [EMAIL PROTECTED] wrote: --- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote: One of the main motivations for the fast development of Friendly AI is that it can be allowed to develop superintelligence to police the human space from

[agi] What is Friendly AI?

2008-08-30 Thread Vladimir Nesov
On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote: --- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote: You start with what is right? and end with Friendly AI, you don't start with Friendly AI and close the circular argument. This doesn't answer the question

Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Vladimir Nesov
On Sat, Aug 30, 2008 at 9:18 PM, Terren Suydam [EMAIL PROTECTED] wrote: --- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote: Given the psychological unity of humankind, giving the focus of right to George W. Bush personally will be enormously better for everyone than going in any
