Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore wrote: > Vladimir Nesov wrote: >> >> On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore >> wrote: >>> >>> The whole point about the paper referenced above is that they are >>> collecting

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
e to do with neuroscience. The field as a whole is hardly mortally afflicted with that problem (whether it's even real or not). If you look at any field large enough, there will be bad science. How is it relevant to the study of AGI? -- Vladimir Nesov robot...@gmail.com

Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
; The short version of the overall story is that neuroscience is out of > control as far as overinflated claims go. > Richard, even if your concerns are somewhat valid, why is it interesting here? It's not like neuroscience is dominated by discussions of (mis)interpretation of results,

Re: [agi] just a thought

2009-01-13 Thread Vladimir Nesov
economy, we have to replicate the capabilities of not one human mind, but a > system of 10^10 minds. That is why my AGI proposal is so hideously expensive. > http://www.mattmahoney.net/agi2.html > Let's fire Matt and hire 10 chimps instead. --

Re: [agi] fuzzy-probabilistic logic again

2009-01-13 Thread Vladimir Nesov
On Tue, Jan 13, 2009 at 7:50 AM, YKY (Yan King Yin) wrote: > On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov wrote: > >> I'm more interested in understanding the relationship between >> inference system and environment (rules of the game) that it allows to >> reas

Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread Vladimir Nesov
ful. It looks like many logics become too wrapped up in themselves, and their development as ways to AI turns into a wild goose chase. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/ --- agi Archives: https://www.listbox.com/membe

Re: [agi] Fuzzy Logic in a General Artificial Intelligent Opponent Processing machine.

2009-01-10 Thread Vladimir Nesov
Ronald, It is NOT OK to post utter nonsense. Don't. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Identity & abstraction

2009-01-09 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 8:48 PM, Harry Chesley wrote: > On 1/9/2009 9:28 AM, Vladimir Nesov wrote: >> >> You need to name those parameters in a sentence only because it's >> linear, in a graph they can correspond to unnamed nodes. Abstractions >> can have struct

Re: [agi] Identity & abstraction

2009-01-09 Thread Vladimir Nesov
ntion, that'd be your abstraction. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Vladimir Nesov
ng to your definition of simulation in the previous message (that includes a special format for request for simulation), no contradictions, and you've got an example. -- Vladimir Nesov

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney wrote: > > Your earlier counterexample was a trivial simulation. It simulated itself but > did > nothing else. If P did something that Q didn't, then Q would not be > simulating P. My counterexample also bragged, outside the input format that request

Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
seudomathematical assertion of yours once. You don't pay enough attention to formal definitions: what this "has a description" means, and which reference TMs specific Kolmogorov complexities are measured in. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com

Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-30 Thread Vladimir Nesov
f unfounded, > simplistic hyperbole I'd expect from your average science reporter. > ;-) > Here is a critique of the article: http://neurocritic.blogspot.com/2008/12/deal-no-deal-or-dots.html -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-24 Thread Vladimir Nesov
t'll make your own thinking clearer if nothing else. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] Should I get a PhD?

2008-12-18 Thread Vladimir Nesov
only 1e-30 chance of working, it's no use. Developing synthetic creativity is one of the aspects of the quest of AGI research, and understood and optimized algorithms of creativity should allow building ideas that are strong from the beginning, verification part of the process. Although it all sounds k

Re: [agi] CopyCat

2008-12-17 Thread Vladimir Nesov
lly described using lexicon from CopyCat (slippages, temperature, salience, structural analogy), even though the algorithm on the low level is different. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

[agi] CopyCat

2008-12-17 Thread Vladimir Nesov
tend this style of algorithm to anything interesting, too much gets projected into manually specified parameters and narrow domain. -- Vladimir Nesov robot...@gmail.com http://causalityrelay.wordpress.com/

Re: [agi] AIXI

2008-12-07 Thread Vladimir Nesov
sible is a problem for higher intelligence, not present > day computer intelligence. > Was this text even supposed to be coherent? -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] AIXI

2008-12-01 Thread Vladimir Nesov
lts, but the idea of simple hypotheses prior and proof that it does good at learning are Solomonoff's. See ( http://www.scholarpedia.org/article/Algorithmic_probability ) for introduction. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/
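The "simple hypotheses prior" mentioned here can be illustrated with a toy predictor. A minimal sketch under loud assumptions: the repeating-pattern hypothesis class, the `max_len` cutoff, and the function name are all illustrative choices of mine, not Solomonoff's construction, which is uncomputable.

```python
from itertools import product

def simplicity_prior_predict(observed: str, max_len: int = 8) -> float:
    """Toy simplicity-prior predictor (NOT Solomonoff induction proper).
    Hypothesis class (an assumption for illustration): "the data is some
    binary pattern p repeated forever", with prior weight 2**-len(p), so
    shorter patterns are a priori more probable.
    Returns P(next bit is '1' | observed prefix)."""
    weight_one = weight_total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            pattern = "".join(bits)
            stream = pattern * (len(observed) // n + 2)  # long enough to compare
            if stream.startswith(observed):              # consistent with data?
                prior = 2.0 ** -n
                weight_total += prior
                if stream[len(observed)] == "1":
                    weight_one += prior
    return weight_one / weight_total

# The shortest consistent pattern ("01", prior 1/4) dominates the longer
# alternatives, so the predicted probability of a '1' after "010101" is
# small even though some long patterns do predict '1'.
print(simplicity_prior_predict("010101"))
```

This is only the flavor of the idea: weight every consistent hypothesis by simplicity and mix their predictions, rather than pick one best fit.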

Re: [agi] The Future of AGI

2008-11-26 Thread Vladimir Nesov
final product of expression are relatively loose. These are rules of the game, that enable the complexity of skill to emerge, not square bounds on imagination. Most of the work comes from creative process, not from formality. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
as strongly asserting anything. They are just saying the same thing in a different language you don't like or consider meaningless, but it's a question of definitions and style, not essence, as long as the audience of the paper doesn't get confus

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
presentation is much more explicit than in the extremely distributed case. Of course, it's not completely explicit. So, at this point I see at least this item in your paper as a strawman objection (given that I didn't revisit other items). -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
nvalidate analysis considering individual cells or small areas of cortex, just as the gravitational pull from Mars doesn't invalidate approximate calculations made on Earth according to Newton's laws. I don't quite see what you are criticizing, apart from specific examples of apparen

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
em with that. Still, it's so murky even for simple correlates that no good overall picture exists. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
Referencing your own work is obviously not what I was asking for. Still, something more substantial than "neuron is not a concept", as an example of "cognitive theory"? On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Vladimir Nesov wro

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Vladimir Nesov
oes a rock compute Fibonacci numbers just to a lesser degree than this program? A concept, like any other. Also, some shades of gray are so thin you'd run out of matter in the Universe to track all the things that light. -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
is 30 or 40 years out of date. > Could you give some references to be specific in what you mean? Examples of what you consider outdated cognitive theory and better cognitive theory. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 2:03 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote: >> >> The main problem is that if you interpret spike timing to be playing the >> role that you (and they) im

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
use this notion of "freedom" to establish asymmetry: "The controller C may change the state of the controlled system S in any way, including the destruction of S." -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
", and, having trouble with free will-like issues, produces a combination of brittle and nontechnical assertions. As a result, in his own example (at the very end of section 2), a doctor is considered "in control" of treating a patient only if he can prescribe *arbitrary* treatment th

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
Here's a link to the paper: http://wpcarey.asu.edu/pubs/index.cfm?fct=details&article_cobid=2216410&author_cobid=1039524&journal_cobid=2216411 -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
pts, it > will be aware of its consciousness. > > I will take that argument further in another paper, because we need to > understand animal minds, for example. It's hard and iffy business trying to recast a different architecture in the language that involves these bottomless concepts and

Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
language of perceptual wiring, with correspondence between qualia and areas implementing modalities/receiving perceptual input. You didn't argue about a general case of AGI, so how does it follow that any AGI is bound to be conscious? -- Vladimir Nesov [EMAIL PROTECTED]

[agi] Re: Causality and science

2008-10-26 Thread Vladimir Nesov
expected variations of context, which distinguishes it from mere correlation, when you can set up a context that breaks it. My thoughts on this subject are written up here: http://causalityrelay.wordpress.com/2008/08/01/causal-rules/ http://causalityrelay.wordpress.com/2008/08/04/causation-and-actions

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
ng step is wrong. If you need to find the best solution to x*3=7, but you can only use integers, the perfect solution is impossible, but it doesn't mean that we are justified in using x=3 that looks good enough, as x=2 is the best solution given limitations. -- Vladimir Nesov
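The integer example in this message can be made concrete. A trivial sketch; the search range of candidate integers is an arbitrary choice of mine:

```python
# 3*x == 7 has no integer solution, so the best we can do under the
# "integers only" limitation is minimize the residual |3*x - 7|.
best = min(range(-100, 101), key=lambda x: abs(3 * x - 7))
print(best)  # x = 2 (residual 1) beats the "looks good enough" x = 3 (residual 2)
```

The point survives in code form: an optimal approximation under stated limitations is a well-defined target, distinct from an arbitrary "good enough" guess.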

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
tion under uncertainty? How do you know when > you've gotten there? > > If you don't believe in ad-hoc then you must have an algorithmic solution . > . . . > I pointed out only that it doesn't follow from AIXI that ad-hoc is justified.

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 1:17 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> Note that people are working on this specific technical problem for 30 >> years, (see the scary amount of wo

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
ow, it looks like a long way there. I'm currently shifting towards probabilistic analysis of huge formal systems in my thinking about AI (which is why chess looks interesting again, in an entirely new light). Maybe I'll understand this area better in months to come. -- Vladimir Nesov

Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 3:17 AM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> This general sentiment doesn't help if I don't know what to do specifically. > > Well, given a C/C++ pro

Re: [agi] constructivist issues

2008-10-24 Thread Vladimir Nesov
here and exploit its computational potential on industrial scale. ;-) -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Vladimir Nesov
ect solution is impossible, you could still have an optimal approximation under given limitations. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 10:30 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> I write software for analysis of C/C++ programs to find bugs in them >> (dataflow analysis, etc.). Where does AI

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 9:28 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> If it's not supposed to be a generic language war, that becomes relevant. > > Fair point. On the other ha

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 8:47 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> You are describing it as a step one, with writing huge specifications >> by hand in formally interpretable langu

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 8:29 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> It's a specific problem: jumping right to the code generation to >> specification doesn't work, beca

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 7:24 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> If that allows AI to understand the code, without directly helping it. >> In this case teaching it to understand

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
ut you can get hold of internal representation of any language and emulate/compile/analyze it. It's not really the point; the point is the simplicity of this process. Where simplicity matters is the question that needs to be answered before that. -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 6:39 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 3:24 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> Again, specifics. What is this "specification" thing? What kind of >> task are to be specified in it?

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 6:16 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> I'd write this specification in language it understands, including a >> library that builds more convenient pr

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 5:54 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> I'd write it in a separate language, developed for human programmers, >> but keep the language with whi

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 5:42 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> Well, my point was that maybe the mistake is use of additional >> language constructions and not their abse

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > >> Needing many different >> features just doesn't look like a natural thing for AI-generated >> p

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 1:36 PM, Russell Wallace <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> Russel, in what capacity do you use that language? > > In all capacities, for both hand written and machine gen

Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
real lisp with all its bells and whistles. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Vladimir Nesov
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington <[EMAIL PROTECTED]> wrote: > On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> If you consider programming an AI social activity, you very >> unnaturally generalized this term, confusing othe

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Vladimir Nesov
On Thu, Oct 23, 2008 at 2:22 AM, Trent Waddington <[EMAIL PROTECTED]> wrote: > On Wed, Oct 22, 2008 at 8:24 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote: >> Current AIs learn chess without engaging in social activities ;-). >> And chess might be a good drosophila for

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
you receive, but how fast you can improve your model. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-22 Thread Vladimir Nesov
uired a little bit of familiarity with algorithms on graphs/discrete math. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
se Lojban allows for more ambiguity (as well as Cyc-L level precision, > depending on speaker's choice) ... and of course Lojban is intended for > interactive conversation rather than knowledge entry > (as tools towards improving bandwidth of experience, they do the same thing)

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
oof-of-concept level results about efficiency without resorting to Cycs and Lojbans, and after that they'll turn out to be irrelevant. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
emember about it 10 years later, retouch the most annoying holes with simple statistical techniques, and continue as before. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Vladimir Nesov
learn chess without engaging in social activities ;-). And chess might be a good drosophila for AI, if it's treated as such ( http://www-formal.stanford.edu/jmc/chess.html ). This was uncalled for. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
t combinations of sets, it's a filter on the individual sets from the total of C(N,S). > > --Sixth, if C(S,X)*C(N-S,S-X) enumerates all possible combinations having an > overlap of X, why can't one calculate A as follows? > > A = SUM FROM X = 0 T

Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
es can be added without conflicts, since T(N,S,O) is the maximum number of assemblies that each one in the pool is able to subtract from the total pool of assemblies. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
xcel. > Your spreadsheet doesn't catch it for S=100 and O=1, it explodes when you try to increase N. But at S=10, O=2, you can see how the lower bound increases as you increase N. At N=5000, the lower bound is 6000, at N=10^6, it's 2.5*10^8, and at N=10^9 it's 2.5*10^14.
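The quoted numbers are consistent with a greedy counting argument (my reconstruction, not a formula stated in the thread): each chosen S-node assembly conflicts with at most C(S,O)*C(N-O,S-O) other S-subsets (those sharing at least O nodes with it), so at least C(N,S)/(C(S,O)*C(N-O,S-O)) mutually compatible assemblies exist. A minimal sketch, assuming that bound:

```python
from math import comb

def assembly_lower_bound(N: int, S: int, O: int) -> int:
    """Greedy counting lower bound (an assumed reconstruction): pick
    S-node assemblies one at a time; each pick rules out at most
    comb(S, O) * comb(N - O, S - O) of the comb(N, S) possible subsets
    (those overlapping it in O or more nodes), so at least
    comb(N, S) // (that count) assemblies can coexist."""
    return comb(N, S) // (comb(S, O) * comb(N - O, S - O))

# For S=10, O=2 this reproduces the values quoted above: roughly
# 6*10^3 at N=5000, 2.5*10^8 at N=10^6, and 2.5*10^14 at N=10^9.
for N in (5_000, 10**6, 10**9):
    print(N, assembly_lower_bound(N, 10, 2))
```

For O=2 the bound simplifies to about N*(N-1)/(C(S,2)^2 * ...); the important qualitative point is that it grows like N^O, so the number of admissible assemblies can far exceed N for large N.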

Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
IVELY LARGE, which would be > equivalent to node assemblies with undesirably high cross talk. Ed, find my reply where I derive a lower bound. Even if overlap must be no more than 1 node, you can still have a number of assemblies as much more than N as necessary, if N is big enough, given fixed S

[agi] Re: Value of philosophy

2008-10-20 Thread Vladimir Nesov
r we are going in the right direction on at least good enough level to persuade other people (which is NOT good enough in itself, but barring that, who are we kidding). -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Re: Meaning, communication and understanding

2008-10-20 Thread Vladimir Nesov
n between language model and D. > "In any case" isn't good enough. Why does it even make sense to say that the brain "sends" "entities"? From "L"? So far, all of this is completely unjustified, and probably not even wrong.

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
nction between them. As a given, interaction happens at the narrow I/O interface, and anything else is a design decision for a specific AI (even invariability of I/O is a simplifying assumption that complicates semantics of time and more radical self-improvement). Sufficiently flexible cognitive

Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
aning to aspects of operation of AGI, and to relations between AGI and what it models, in your own head, but this perspective loses technical precision, although to some extent it's necessary. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

[agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
t it's easy to see how our technology, as physical medium, transfers information ready for translation. This outward appearance has little bearing on semantic models. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Reasoning by analogy recommendations

2008-10-17 Thread Vladimir Nesov
ut of context). Thus reasoning with contextual information is an instance of analogical reasoning. Looking at it the other way around, relational similarity is superior to attributional similarity because the former is more robust than the latter when there is contextua

Re: [agi] Networks, memory capacity, grid cells...

2008-10-16 Thread Vladimir Nesov
l oscillations during a working memory task. by: DS Rizzuto, JR Madsen, EB Bromfield, A Schulze-Bonhage, D Seelig, R Aschenbrenner-Scheibe, MJ Kahana Proceedings of the National Academy of Sciences of the United States of America, Vol. 100, No. 13. (24 June 2003), p

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
a fixed size ... but it makes slightly more sense to assume an upper > bound on their size... > Which is why I don't like this whole fuss about cell assemblies in the first place, and prefer free exploration of Hamming space. ;-) -- Vladimir Nesov

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
ght (w) code, every word in the code has the same weight, w. In a bounded-weight (w) code, every word has at most w ones." -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
nt-weight binary code, not bounded-weight though). My lower bound is trivial, and answers the question. It's likely somewhere in the references there. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

[agi] Brain-muscle interface helps paralysed monkeys move

2008-10-16 Thread Vladimir Nesov
dle of nowhere. The brain is able to capture the feedback loop through the environment starting from a single cell, and to include the activity of that cell in a goal-directed control process, based on the effect on the environment. -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
l problem, O-1 is the maximum allowed overlap; in my last reply I used O incorrectly in the first paragraph, but checked and used it correctly in the second, with the lower bound. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
order S-O with respect to N, whereas C(N,S) is bound by polynomial of order S. Thus, even if you only allow the overlap of O=1 (so that no two cell assemblies are allowed to have even two nodes in common), you can get an arbitrarily large number of cell assemblies (including in proportion to N) by choosing bi

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
and should be shown to make sense in this context. Based on "confabulation" papers, I find Hecht-Nielsen deeply confused. If his results even mean anything, he does a poor job of explaining what it is and why, and what makes what he says new. Another slim possibility is that hi

Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
equences on fluid representation, information and goals, and inference surface. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
mplexity. There is plenty of ground to cover in the space of simple things; limitations on complexity are pragmatically void. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Vladimir Nesov
list also forces better coherence to the discussion. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Vladimir Nesov
I expected to feel when you say the words > 'Intellectual Property'? (that's a rhetorical question, just in case there > was any doubt!) > > I'd like to suggest that the COMP=false thread be considered a completely > mis-placed, undebatable and dead topic o

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
nvironment with algorithmic complexity K, the agent must be able > to simulate the environment, so it must also have algorithmic complexity K. An > agent with higher complexity can guess a superset of environments that a lower > complexity agent could, and therefore cannot do worse in accumula

Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
ment is weaker than the original informal argument it was invented to support, there is no point in technical argument. Using the fact of 2+2=4 won't give technical support to, e.g., the philosophy of solipsism. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] a mathematical explanation of AI algorithms?

2008-10-08 Thread Vladimir Nesov
which to look at less well understood hacks, to feel the underlying structure. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] COMP = false

2008-10-06 Thread Vladimir Nesov
lmqg0.jpg > Don't you know that only a clown suit interacts with probability theory in the true Bayesian way? ;-) http://www.overcomingbias.com/2007/12/cult-koans.html -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] COMP = false

2008-10-06 Thread Vladimir Nesov
e that is thousands of miles away, years ago, and only ever existed virtually. You can't adapt known physics to do THAT. You'd need an intelligent meddler. And you can't escape flaws in your reasoning by wearing a lab coat. -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] COMP = false

2008-10-04 Thread Vladimir Nesov
ience. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Perceptrons Movie

2008-09-22 Thread Vladimir Nesov
On Tue, Sep 23, 2008 at 12:23 AM, Eric Burton <[EMAIL PROTECTED]> wrote: >> Creativity machine: http://www.imagination-engines.com/cm.htm > > Six layers, though? Perhaps the result is magic! > Yes, and magic only works in the la-la land. -- Vladimir Nesov

Re: [agi] Perceptrons Movie

2008-09-22 Thread Vladimir Nesov
shuman level intelligence." Hilarious -- in a sad, dull way. See a picture of a 6-layer neural network in the link below. Stephen Thaler Creativity machine: http://www.imagination-engines.com/cm.htm -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

[agi] Improve generators, not products.

2008-09-22 Thread Vladimir Nesov
once and for all, deciding how to solve the problems, designing appropriate tools, learning required facts, deploying the solutions. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
every grain harvester combine for 30 years about harvesting. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/21/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > >> So, do you think that there is at least, say, 99% >> probability that AGI >> won't be developed by a reaso

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/21/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > >> Hence the question: you are making a very strong assertion by >> effectively saying that there is no shortcut, period (in

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
we > would > have figured it out by now. > Hence the question: you are making a very strong assertion by effectively saying that there is no shortcut, period (in the short-term perspective, anyway). How sure are you of this assertion? -- Vladimir Nesov

[agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
et you'd take for it)? This is an easily falsifiable statement: if a small group implements AGI, you'll be proven wrong. -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Vladimir Nesov
aning steam engine has a place in your heart, you need to stop writing a science fiction novel with yourself as the main character, and ask yourself who you want to be. " -- Vladimir Nesov [EMAIL PROTECTED] http://causalityrelay.wordpress.com/

Re: [agi] self organization

2008-09-16 Thread Vladimir Nesov
my mind when it seems like my current ideas are inadequate. And > of course, to provide the same kind of feedback for others when I have > something to contribute. In that spirit, I'm grateful for your feedback. I'm > also very curious to see the results of your approach, and th
