Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Wed, Jan 14, 2009 at 10:59 PM, Richard Loosemore r...@lightlink.com wrote:

 For anyone interested in recent discussions of neuroscience and the level of
 scientific validity of the various brain-scan claims, the study by Vul et
 al., discussed here:

 http://www.newscientist.com/article/mg20126914.700-doubts-raised-over-brain-scan-findings.html

 and available here:

 http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf

 ... is a welcome complement to the papers by Trevor Harley (and myself).


 The title of the paper is "Voodoo Correlations in Social Neuroscience", and
 that use of the word "voodoo" pretty much sums up the attitude of a number
 of critics of the field.

 We've attacked from a different direction, but we had a wide range of
 targets to choose from, believe me.

 The short version of the overall story is that neuroscience is out of
 control as far as overinflated claims go.


Richard, even if your concerns are somewhat valid, why is this
interesting here? It's not as though neuroscience is dominated by
discussions of (mis)interpretation of results; researchers are collecting
data, and with that they are steadily getting somewhere.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com wrote:

 The whole point about the paper referenced above is that they are collecting
 (in a large number of cases) data that is just random noise.


So what? The paper points out a methodological problem that in itself
has little to do with neuroscience. The field as a whole is hardly
mortally afflicted with that problem (whether it's even real or not).
If you look at any field large enough, there will be bad science. How
is this relevant to the study of AGI?

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Doubts raised over brain scan findings

2009-01-14 Thread Vladimir Nesov
On Thu, Jan 15, 2009 at 4:34 AM, Richard Loosemore r...@lightlink.com wrote:
 Vladimir Nesov wrote:

 On Thu, Jan 15, 2009 at 3:03 AM, Richard Loosemore r...@lightlink.com
 wrote:

 The whole point about the paper referenced above is that they are
 collecting
 (in a large number of cases) data that is just random noise.


 So what? The paper points out a methodological problem that in itself
 has little to do with neuroscience.

 Not correct at all:  this *is* neuroscience.  I don't understand why you say
 that it is not.

From what I gathered from the abstract and from skimming the paper, it's a
methodological problem in handling the data from neuroscience experiments
(bad statistics).
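
To be concrete, here is a minimal sketch (my illustration, not from Vul et
al.; assumes numpy, with made-up subject/voxel counts) of the kind of
non-independent analysis the paper criticizes: selecting voxels from pure
noise by their correlation with behavior, then reporting the correlation
measured on those same voxels.

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_voxels = 20, 10_000
    behavior = rng.normal(size=n_subjects)            # e.g. a personality score
    voxels = rng.normal(size=(n_subjects, n_voxels))  # pure-noise "brain data"

    # Step 1: select voxels whose activity correlates with behavior above a threshold.
    corrs = np.array([np.corrcoef(behavior, voxels[:, v])[0, 1] for v in range(n_voxels)])
    selected = np.abs(corrs) > 0.5

    # Step 2 (the error): report the correlation computed on the *same* selected voxels.
    print("voxels selected from noise:", selected.sum())
    print("reported (inflated) correlation:", np.abs(corrs[selected]).mean())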


 The field as a whole is hardly
 mortally afflicted with that problem

 I mentioned it because there is a context in which this sits.  The context
 is that an entire area - which might be called deriving psychological
 conclusions from brain scan data - is getting massive funding and massive
 attention, and yet it is quite arguably in an Emperor's New Clothes state.
  In other words, the conclusions being drawn are (for a variety of reasons)
 of very dubious quality.

 If you look at any field large enough, there will be bad science.

 According to a significant number of people who criticize it, this field
 appears to be dominated by bad science.  This is not just an isolated case.


That's a whole new level of alarm, relevant for anyone trying to learn
from neuroscience, but it requires stronger substantiation; a mere 50
papers that got their statistics confused don't establish it.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] fuzzy-probabilistic logic again

2009-01-13 Thread Vladimir Nesov
On Tue, Jan 13, 2009 at 7:50 AM, YKY (Yan King Yin)
generic.intellige...@gmail.com wrote:
 On Tue, Jan 13, 2009 at 6:19 AM, Vladimir Nesov robot...@gmail.com wrote:

 I'm more interested in understanding the relationship between an
 inference system and the environment (the rules of the game) that it
 allows one to reason about,

 Next thing I'll work on is the planning module.  That's where the AGI
 interacts with the environment.

 ... and about why and how a given approach to reasoning is
 expected to be powerful.

 I think if P(Z) logic can express a great variety of uncertain
 phenomena, that's good enough.  I expect it to be very efficient too.


Phenomena themselves are not uncertain; you may as well regard the problem
as inference over deterministic and insanely detailed physics. To give a
hint of why logic alone doesn't seem to address the important questions:
you can use concepts to capture sets of configurations, weighted with
probability, but the main trick is in capturing the structure
(= specific inference schemes). You can model something with an HMM, with
one opaque hidden state, but even if it abstracts away most of the
physical details, the state usually has internal structure that
you have to learn in order to cope with data sparsity. This structure
can be represented by multiple individual concepts that look at the
state of the system from multiple points of view, each concept
describing an element of the structure of the system. It might then be a
good idea to select your concepts so that they arrange themselves into
something like a Bayesian network, of a variety that allows the inferences
you need (or any other inference scheme, for that matter). But that is a
static view, where you focus on the decomposition of a specific state. In
reality, you'd want to reuse your concepts, recognizing them again and
again in different situations, and reassemble the model from them.
Each concept applies to many different situations, and its
relationship to other concepts is context-dependent. This interaction
between the structure of the environment and the custom reassembling of
models to describe the focus of attention, while reusing past knowledge,
seems to be the most interesting part at this point.
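
As a rough illustration of the opaque-state-versus-structure point (my
sketch, with made-up sizes; assumes numpy), the same 64 hidden states can
be treated as one monolithic HMM state or factored into three small
"concept" variables, each with its own transition table -- far fewer
parameters to learn from sparse data:

    import numpy as np

    rng = np.random.default_rng(1)

    # Opaque HMM: one hidden state with 64 values; the transition table alone
    # has 64*64 entries to learn.
    K = 64
    opaque_transition = rng.dirichlet(np.ones(K), size=K)   # shape (K, K)

    # Factored state: the same 64 states seen as three 4-valued "concepts"
    # (say, location, posture, goal), each with its own 4x4 transition table.
    factors = [rng.dirichlet(np.ones(4), size=4) for _ in range(3)]

    def factored_transition(state):
        # state is a tuple of three factor values; each factor evolves from its own table
        return tuple(rng.choice(4, p=factors[i][s]) for i, s in enumerate(state))

    print("opaque parameters:", opaque_transition.size)          # 4096
    print("factored parameters:", sum(f.size for f in factors))  # 48
    print("next factored state from (0, 1, 2):", factored_transition((0, 1, 2)))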

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] just a thought

2009-01-13 Thread Vladimir Nesov
On Wed, Jan 14, 2009 at 4:40 AM, Matt Mahoney matmaho...@yahoo.com wrote:
 --- On Tue, 1/13/09, Valentina Poletti jamwa...@gmail.com wrote:

 Anyways my point is, the reason why we have achieved so much technology, so
 much knowledge in this time is precisely the "we"; it's the union of several
 individuals together with their ability to communicate with one another that
 has made us advance so much.

 I agree. A machine that is 10 times as smart as a human in every way could 
 not achieve much more than hiring 10 more people. In order to automate the 
 economy, we have to replicate the capabilities of not one human mind, but a 
 system of 10^10 minds. That is why my AGI proposal is so hideously expensive.
 http://www.mattmahoney.net/agi2.html


Let's fire Matt and hire 10 chimps instead.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread Vladimir Nesov
On Mon, Jan 12, 2009 at 4:43 PM, YKY (Yan King Yin)
generic.intellige...@gmail.com wrote:
 I have refined my P(Z) logic a bit.  Now the truth values are all
 unified to one type, a probability distribution over Z, which has a
 pretty nice interpretation.  The new stuff is at sections 4.4.2 and
 4.4.3.

 http://www.geocities.com/genericai/P-Z-logic-excerpt-12-Jan-2009.pdf

 I'm wondering if anyone is interested in helping me implement the
 logic or develop an AGI based on it?  I have already written part of
 the inference engine in Lisp.

 Also, is anyone here working on fuzzy or probabilistic logics, other
 than Ben and Pei and me?


I'm more interested in understanding the relationship between an
inference system and the environment (the rules of the game) that it
allows one to reason about, and about why and how a given approach to
reasoning is expected to be powerful. It looks like many logics become
too wrapped up in themselves, and their development as a path to AI
turns into a wild goose chase.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] The Smushaby of Flatway.

2009-01-09 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 6:34 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Well, it is true that you can find |P| < |Q| for some cases of P nontrivially
 simulating Q, depending on the choice of language. However, it is not true on
 average. It is also not possible for P to nontrivially simulate itself because
 it is a contradiction to say that P does everything that Q does and at least
 one thing that Q doesn't do if P = Q.


What you write above is a separate note, unrelated to the one about
complexity. P simulating P and doing something else is well-defined
according to your definition of simulation in the previous message
(which includes a special format for simulation requests); there is no
contradiction, and you've got an example.

-- 
Vladimir Nesov




Re: [agi] Identity abstraction

2009-01-09 Thread Vladimir Nesov
You need to name those parameters in a sentence only because a sentence
is linear; in a graph they can correspond to unnamed nodes. Abstractions
can have structure, and their applicability can depend on how their
structure matches the current scene. If you retain in a scene graph
only the relations you mention, that would be your abstraction.
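
A minimal sketch of what that could mean operationally (my illustration,
with made-up relations): the abstraction is a graph pattern over anonymous
placeholder nodes, and applying it amounts to finding bindings of those
nodes against the scene graph.

    # A scene as labeled relations between named objects, and an abstraction
    # as the same kind of relations between anonymous placeholder nodes.
    scene = {
        ("left_of", "cup", "plate"),
        ("on", "cup", "table"),
        ("on", "plate", "table"),
    }

    # "Something is left of something else, and both are on a third thing."
    pattern = [
        ("left_of", "?x", "?y"),
        ("on", "?x", "?z"),
        ("on", "?y", "?z"),
    ]

    def match(pattern, scene, binding=None):
        # Return all ways of binding placeholder nodes to scene objects.
        binding = binding or {}
        if not pattern:
            return [binding]
        rel, a, b = pattern[0]
        results = []
        for (r, x, y) in scene:
            if r != rel:
                continue
            new, ok = dict(binding), True
            for var, val in ((a, x), (b, y)):
                if var.startswith("?"):
                    if new.get(var, val) != val:
                        ok = False
                        break
                    new[var] = val
                elif var != val:
                    ok = False
                    break
            if ok:
                results.extend(match(pattern[1:], scene, new))
        return results

    print(match(pattern, scene))   # [{'?x': 'cup', '?y': 'plate', '?z': 'table'}]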

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Identity abstraction

2009-01-09 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 8:48 PM, Harry Chesley ches...@acm.org wrote:
 On 1/9/2009 9:28 AM, Vladimir Nesov wrote:

  You need to name those parameters in a sentence only because it's
  linear, in a graph they can correspond to unnamed nodes. Abstractions
  can have structure, and their applicability can depend on how their
  structure matches the current scene. If you retain in a scene graph
  only relations you mention, that'd be your abstraction.

 I'm not sure if you mean a graph in the sense of nodes and edges, or in a
 visual sense.

 If the former, any implementation requires that the edges identify or link
 somehow to the appropriate nodes -- so how is this done in humans and what
 experiments reveal it? If the latter, the location in space of the node in
 the abstract graph is effectively its identity -- are you suggesting that
 human abstraction is always visual, and if so what experimental evidence is
 there?

 I don't mean to include or exclude your theory of abstraction, but the
 question is whether you know of experiments that shed light on this area.


A graph in the sense of nodes and edges. It's more a reply to your remark
that you have to introduce names in order to communicate the abstraction
than to the rest. AFAIK, neuroscience is far from answering, or even
properly formulating, questions like this, but you can analyze theoretical
models of cognitive algorithms that answer your questions.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 12:19 AM, Matt Mahoney matmaho...@yahoo.com wrote:
 Mike,

 Your own thought processes only seem mysterious because you can't predict
 what you will think without actually thinking it. It's not just a property of
 the human brain, but of all Turing machines. No program can non-trivially
 model itself. (By "model", I mean that P models Q if for any input x, P can
 compute the output Q(x). By "non-trivial", I mean that P does something else
 besides just model Q. (Every program trivially models itself.) The proof is
 that for P to non-trivially model Q requires K(P) > K(Q), where K is
 Kolmogorov complexity, because P needs a description of Q plus whatever else
 it does to make it non-trivial. It is obviously not possible for K(P) > K(P).)


Matt, please stop. I once constructed an explicit counterexample to
this pseudomathematical assertion of yours. You don't pay enough
attention to formal definitions: what "this has a description" means,
and in which reference TMs the specific Kolmogorov complexities are
measured.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] The Smushaby of Flatway.

2009-01-08 Thread Vladimir Nesov
On Fri, Jan 9, 2009 at 6:04 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Your earlier counterexample was a trivial simulation. It simulated itself but 
 did
 nothing else. If P did something that Q didn't, then Q would not be 
 simulating P.

My counterexample also bragged outside the input format that
requested simulation. ;-)


 This applies regardless of your choice of universal TM.

 I suppose I need to be more precise. I say P simulates Q if for all x,
 P("what is Q(x)?") = "Q(x)=y" iff Q(x)=y (where x and y are arbitrary
 strings).
 When I say that P does something else, I mean that it accepts at least one
 input not of the form "what is Q(x)?".

This is a step in the right direction.
What does it mean for P to NOT accept some input? Must it hang? What
if P outputs "I understand you perfectly" for each input not of the
form "what is Q(x)?"? (Which was my counterexample, IIRC.)
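
A minimal sketch of that kind of counterexample (my reconstruction for
illustration, not anyone's actual code): P answers simulation requests
about itself in the required format and says something else on every
other input, so it simulates itself and still does something more.

    def P(inp: str) -> str:
        # Answers "what is P(x)?" by simulating itself on x; brags otherwise.
        prefix, suffix = "what is P(", ")?"
        if inp.startswith(prefix) and inp.endswith(suffix):
            x = inp[len(prefix):-len(suffix)]
            return "P(" + x + ")=" + P(x)        # self-simulation on x
        return "I understand you perfectly"      # the extra, non-simulation behaviour

    print(P("hello"))                  # I understand you perfectly
    print(P("what is P(hello)?"))      # P(hello)=I understand you perfectly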

 I claim that K(P) > K(Q) because any description of P must include
 a description of Q plus a description of what P does for at least one other
 input.


Even if you somehow had to represent P as a concatenation of Q and
something else (you don't need to), it's not true that K(P) > K(Q)
always holds. It's only true that length(P) > length(Q), and longer
strings can easily have smaller programs that output them. If P is
10^(10^10) copies of the symbol "X", and Q is some random number of
"X"s smaller than 10^(10^10), then probably K(P) < K(Q), even though
Q is a substring of P.
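
Spelling the intuition out (my illustration; the shortest program we can
exhibit is only an upper bound on K, not K itself):

    # P = 10^(10^10) copies of "X": a one-line program describes it exactly.
    program_for_P = "print('X' * 10**10**10)"     # a couple dozen characters

    # Q = n copies of "X" for some random n < 10^(10^10); just writing n out
    # can take up to 10^10 digits, so the obvious program for Q is far longer.
    n_digits_needed_for_Q = 10**10

    print(len(program_for_P))        # tiny: an upper bound on K(P)
    print(n_digits_needed_for_Q)     # ~10^10 characters just to state Q's length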


-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-30 Thread Vladimir Nesov
On Tue, Dec 30, 2008 at 12:44 AM, Kaj Sotala xue...@gmail.com wrote:
 On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak lukst...@gmail.com wrote:
 http://www.sciencedaily.com/releases/2008/12/081224215542.htm

 Nothing surprising ;-)

 So they have a result saying that we're good at subconsciously
 estimating the direction in which dots on a screen are moving.
 Apparently this can be safely generalized into "Our Unconscious Brain
 Makes The Best Decisions Possible" (implied: always).

 You're right, nothing surprising. Just the kind of unfounded,
 simplistic hyperbole I'd expect from your average science reporter.
 ;-)


Here is a critique of the article:

http://neurocritic.blogspot.com/2008/12/deal-no-deal-or-dots.html

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-24 Thread Vladimir Nesov
On Thu, Dec 25, 2008 at 9:33 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Any thoughts?


I can't tell this note from nonsense. You need to work on the
presentation if your idea actually holds water. If you understand the
idea well enough to express it as math, by all means do so; it will
make your own thinking clearer if nothing else.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Should I get a PhD?

2008-12-18 Thread Vladimir Nesov
On Thu, Dec 18, 2008 at 11:55 PM, Ben Goertzel b...@goertzel.org wrote:

 I don't think there's any lack of creativity in the AGI world ... and I
 think it's pretty clear that rationality and creativity work together in all
 really good scientific work.

 Creativity is about coming up with new ideas.  Rationality is about
 validating ideas, and deriving their natural consequences.

 They're complementary, not contradictory, within a healthy scientific
 thought process...


I'd say that logic, statistics, etc. are particular well-understood
algorithms of intelligence, which we have optimized and can now reliably
apply to generate and verify knowledge. Creativity lies in the land of
unknown algorithms, which people run in their heads without reflective
understanding. The fact that we applied our intelligence to
optimizing these particular algorithms made them strong enough to
contribute significantly to people's performance, even though
they don't play a significant role in the thought process itself. Coming
up with new ideas is fundamentally the same as verifying ideas, since
if you come up with ideas that have only a 1e-30 chance of working, they
are of no use. Developing synthetic creativity is one aspect of the
quest of AGI research, and understood and optimized algorithms of
creativity should allow building ideas that are strong from the
beginning, folding verification into the generation process. Although it
all sounds kinda warped in this language.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




[agi] CopyCat

2008-12-17 Thread Vladimir Nesov
[repost from opencog list; I should've posted it on AGI in the first
place, instead of opencog]

On Wed, Dec 17, 2008 at 7:01 AM, Ben Goertzel b...@goertzel.org wrote:

 First thing: CopyCat doesn't work.  Not just in the sense that it's not AGI
 ... in the sense that it can hardly solve any of the simple, narrow
 analogy problems it was designed to solve.   It's basically a
 non-operational thought experiment.   Run the code yourself and see, there
 are some online versions...  It occasionally solves some simple problem,
 but most of Hofstadter's simple analogy problems, it just will never
 solve...

 And, there is no coherent theory backing up why a Copycat-like system would
 ever work.


Do you mean that the examples Hofstadter/Mitchell used in their
papers on CopyCat did not in fact work on their codebase? I remember
downloading a second CopyCat implementation (in Java, IIRC); it seemed
to be working. Besides, they don't claim anything grandiose for this
model, and it seems like it shouldn't be too hard to make it work.

Another story is that it's not obvious how to extend this style of
algorithm to anything interesting; too much gets projected into
manually specified parameters and a narrow domain.

--
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] CopyCat

2008-12-17 Thread Vladimir Nesov
On Wed, Dec 17, 2008 at 6:03 PM, Ben Goertzel b...@goertzel.org wrote:

 I happened to use CopyCat in a university AI class I taught years ago, so I
 got some experience with it

 It was **great** as a teaching tool, but I wouldn't say it shows anything
 about what can or can't work for AGI, really...


CopyCat gives a general feel of self-assembling representation and of
operations performed at a reflexive level.  It captures intuitions about
high-level perception better than any other self-contained description
I've seen (which is rather sad, especially given that CopyCat only
touches on using hand-made, shallow multilevel representations, without
inventing them and without learning). Some of the things happening in my
model of high-level representation (as a description of what's
happening, not as elements of the model itself) can be naturally
described using the lexicon from CopyCat (slippages, temperature,
salience, structural analogy), even though the algorithm at the low
level is different.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] AIXI

2008-12-07 Thread Vladimir Nesov
On Sun, Dec 7, 2008 at 7:59 PM, Jim Bromer [EMAIL PROTECTED] wrote:
 I think my criticism of Hutter's theorem may not have been that
 strong.  I do think that Hutter's theorem may shed some light on why
 the problem is difficult.  More importantly it helps us to think
 outside the box.  For instance, it might be the case that an effective
 AI program cannot be completely defined.  It might need to be
 constantly changing, in that the program itself can never be defined.
 I am not saying that is the case, just that it is a possibility.

 But, in one sense a general AI program is not going to typically halt.
  It just keeps going until someone shuts it off.  So perhaps the
 halting problem is fly in the ointment.  On the other hand, the
 halting problem does hinge around the question whether a function can
 be defined, and this issue is most definitely relevant to the problem.

 Whether or not an effective AGI program can be defined is not a
 feasible present-day computational problem.  So in that sense the
 halting problem is relevant. The question of whether or not an AGI
 program is feasible is a problem for higher intelligence, not present
 day computer intelligence.


Was this text even supposed to be coherent?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] AIXI

2008-12-01 Thread Vladimir Nesov
On Mon, Dec 1, 2008 at 8:04 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 The value of AIXI is not that it solves the general intelligence problem, but 
 rather
 it explains why the problem is so hard.

It doesn't explain why it's hard (is "impossible" the same as "hard"?).
That you can't solve a problem exactly doesn't mean that there is no
simple satisfactory solution.


 It also justifies a general principle that is
 already used in science and in practical machine learning algorithms:
 to choose the simplest hypothesis that fits the data. It formally defines
 "simple" as the length of the shortest program that outputs a description
 of the hypothesis.

It's Solomonoff's universal induction, a much earlier result. Hutter
generalized Solomonoff's induction to decision-making and proved some
new results, but the idea of a simplicity prior over hypotheses and the
proof that it does well at learning are Solomonoff's.

See ( http://www.scholarpedia.org/article/Algorithmic_probability )
for introduction.
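
For reference, the algorithmic prior both results build on is the standard
Solomonoff construction (in LaTeX notation; U is a universal prefix
machine, p ranges over programs whose output starts with x, and |p| is the
program length in bits):

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

The shorter the program needed to produce a string, the more prior weight
the string gets, which is the formal version of "choose the simplest
hypothesis that fits the data."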

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Future of AGI

2008-11-26 Thread Vladimir Nesov
Formal reasoning can be thought of as a medium, a canvas on which your
imagination draws the structures that serve your goals best, that solve
your problem or are simply aesthetically pleasing. There is an infinite
number of possible formal derivations, theorems and proofs; the
limitations that formality places on the final product of expression are
relatively loose. These are rules of the game that enable the complexity
of skill to emerge, not square bounds on imagination. Most of the work
comes from the creative process, not from the formality.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 8:09 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ben Goertzel wrote:

 Richard,

 My point was that there are essentially no neuroscientists out there
 who believe that concepts are represented by single neurons.  So you
 are in vehement agreement with the neuroscience community on this
 point.

 The idea that concepts may be represented by cell assemblies, or
 attractors within cell assemblies, are more prevalent.  I assume
 you're familiar with the thinking/writing of for instance Walter
 Freeman and Susan Greenfield on these issues.   You may consider them
 wrong, but they are not wrong due to obvious errors or due to
 obliviousness to cog sci data.

 So let me see if I've got this straight:  you are saying that there are
 essentially no neuroscientists who talk about spiking patterns in single
 neurons encoding relationships between concepts?

 Not low-level features, as we discussed before, but medium- to high-level
 concepts?

 You are saying that when they talk about the spike trains encoding bayesian
 contingencies, they NEVER mean, or imply, contingencies between concepts?


What's a "concept" in this context, Richard? For example, place cells
activate on place fields, pretty palpable correlates; one could say
they represent concepts (and these are not perceptual correlates). There
are relations between these concepts, prediction of their activity,
encoding of their sequences that plays a role in episodic memory, and so
on. At the same time, the process by which they are computed is
largely unknown: individual cells perform some kind of transformation
on the output of other cells, but how much of the concept is encoded in
the cells themselves rather than in the cells they receive input from is
also unknown. Since they jump on all kinds of contextual cues, it's
likely that their activity to some extent depends on activity in most of
the brain, but that doesn't invalidate analysis that considers individual
cells or small areas of cortex, just as the gravitational pull of Mars
doesn't invalidate approximate calculations made on Earth according to
Newton's laws. I don't quite see what you are criticizing, apart from
specific examples of apparent confusion.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 8:34 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 No, object-concepts and the like.  Not place, motion or action 'concepts'.

 For example, Quiroga et al showed their subjects pictures of famous places
 and people, then made assertions about how those things were represented.


Now that I have a somewhat better understanding of neuroscience than a
year ago, I reread the relevant part of your paper and skimmed the
Quiroga et al. paper ("Invariant visual representation by single neurons
in the human brain", for those who don't want to look it up in Richard's
paper). I don't see a significant disagreement. They didn't mean to
imply the obviously wrong assertion that there are only a few cells
corresponding to each high-level concept (to quote: "the fact that we
can discover in this short time some images -- such as photographs of
Jennifer Aniston -- that drive the cells, suggests that each cell
might represent more than one class of images"). Sparse and
distributed representations are mentioned as extreme perspectives, not
a dichotomy. The results certainly have some properties of a sparse
representation, as opposed to an extremely distributed one, which doesn't
mean that they imply an extremely sparse representation. The observed
cells, as correlates of high-level concepts, were surprisingly invariant
to the form in which the high-level concept was presented, which does
suggest that the representation is much more explicit than in the
extremely distributed case. Of course, it's not completely explicit.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-21 Thread Vladimir Nesov
On Sat, Nov 22, 2008 at 12:30 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 They want some kind of mixture of sparse and multiply redundant and not
 distributed.  The whole point of what we wrote was that there is no
 consistent interpretation of what they tried to give as their conclusion.
  If you think there is, bring it out and put it side by side with what we
 said.


There is always a consistent interpretation that drops their
interpretation altogether and leaves the data. I don't see their
interpretation as strongly asserting anything. They are just saying
the same thing in a different language, one you don't like or consider
meaningless; but that's a question of definitions and style, not
essence, as long as the paper's audience doesn't get confused.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
Here's a link to the paper:
http://wpcarey.asu.edu/pubs/index.cfm?fct=detailsarticle_cobid=2216410author_cobid=1039524journal_cobid=2216411

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
On Thu, Nov 20, 2008 at 7:04 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 BillK wrote:

 Nobody has mentioned this yet.

 http://www.physorg.com/news146319784.html

 I got a draft version of the paper earlier this year, and after a quick scan
 I filed it under 'junk'.

 I just read it through again, and the filing stays the same.


I have to agree. The paper attacks a strawman with blanket assertions.
Even worse, the attack itself is flawed: in section 2 he tries to
define the concept of "control", and, having trouble with free-will-like
issues, produces a combination of brittle and nontechnical
assertions. As a result, in his own example (at the very end of
section 2), a doctor is considered in control of treating a patient
only if he can prescribe an *arbitrary* treatment that doesn't depend on
the patient (or his illness).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Vladimir Nesov
On Thu, Nov 20, 2008 at 7:56 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 And btw, the notion that control is a key concept in the brain goes
 back at least to Norbert Wiener's book Cybernetics from the 1930's !!
 ... Principia Cybernetica has a simple but clear webpage on the
 control concept in cybernetics...

 http://pespmc1.vub.ac.be/CONTROL.html


I don't like that definition for basically the same reason, but it
perhaps explains where Asim Roy is coming from. At least they are not
literally insisting on control being a property of the system itself,
according to this remark:

"Of course, two systems can be in a state of mutual control, but this
will be a different, more complex, relation, which we will still
describe as a combination of two asymmetric control relations."

The controller-controlled relation is a model assigned to the system, not
an intrinsic property of the system itself. Also, there is no "may" or
"could" apart from the semantics of a search algorithm, which is a thing
to keep in mind when making claims like the following about the freedom
of the controller, especially when trying to use this notion of freedom
to establish asymmetry:

"The controller C may change the state of the controlled system S in
any way, including the destruction of S."

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 1:40 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 The main problem is that if you interpret spike timing to be playing the
 role that you (and they) imply above, then you are committing yourself to a
 whole raft of assumptions about how knowledge is generally represented and
 processed.  However, there are *huge* problems with that set of implicit
 assumptions ... not to put too fine a point on it, those implicit
 assumptions are equivalent to the worst, most backward kind of cognitive
 theory imaginable.  A theory that is 30 or 40 years out of date.


Could you give some references to make what you mean specific:
examples of what you consider outdated cognitive theory, and of better
cognitive theory?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 2:23 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 well, what does feel mean to you ... what is feeling that a slug can
 do but a rock or an atom cannot ... are you sure this is an absolute
 distinction rather than a matter of degree?


Does a rock compute Fibonacci numbers, just to a lesser degree than
this program? "Feeling" is a concept like any other. Also, some shades
of gray are so faint you'd run out of matter in the Universe trying to
track all the things that are that light.
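
(For concreteness, a trivial, hypothetical stand-in for "this program" --
any few-line Fibonacci routine would do:)

    def fib(n: int) -> int:
        # Return the n-th Fibonacci number; the rock, presumably, does not.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]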

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
Referencing your own work is obviously not what I was asking for.
Still, is there something more substantial than "a neuron is not a
concept", as an example of cognitive theory?


On Fri, Nov 21, 2008 at 4:35 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Vladimir Nesov wrote:

 Could you give some references to be specific in what you mean?
 Examples of what you consider outdated cognitive theory and better
 cognitive theory.


 Well, you could start with the question of what the neurons are supposed to
 represent, if the spikes are coding (e.g.) bayesian contingencies. Are the
 neurons the same as concepts/symbols?  Are groups of neurons redundantly
 coding for concepts/symbols?

 One or other of these possibilties is usually assumed by default, but this
 leads to glaring inconsistencies in the interpretation of neuroscience data,
 as well as begging all of the old questions about how grandmother cells
 are supposed to do their job.  As I said above, cognitive scientists already
 came to the conclusion, 30 or 40 years ago, that it made no sense to stick
 to a simple identification of one neuron per concept.  And yet many
 neuroscientists are *implictly* resurrecting this broken idea, without
 addressing the faults that were previously found in it.  (In case you are
 not familiar with the faults, they include the vulnerability of neurons, the
 lack of connectivity between arbitrary neurons, the problem of assigning
 neurons to concepts, the encoding of variables, relationships and negative
 facts ...).

 For example, in Loosemore & Harley (in press) you can find an analysis of a
 paper by Quiroga, Reddy, Kreiman, Koch, and Fried (2005) in which the latter
 try to claim they have evidence in favor of grandmother neurons (or sparse
 collections of grandmother neurons) and against the idea of distributed
 representations.

 We showed their conclusion to be incoherent.  It was deeply implausible,
 given the empirical data they reported.

 Furthermore, we used my molecular framework (the same one that was outlined
 in the consciousness paper) to see how that would explain the same data.  It
 turns out that this much more sophisticated model was very consistent with
 the data (indeed, it is the only one I know of that can explain the results
 they got).

 You can find our paper at www.susaro.com/publications.



 Richard Loosemore


 Loosemore, R.P.W. & Harley, T.A. (in press). Brains and Minds:  On the
 Usefulness of Localisation Data to Cognitive Psychology. In M. Bunzl & S.J.
 Hanson (Eds.), Foundations of Functional Neuroimaging. Cambridge, MA: MIT
 Press.

 Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C. & Fried, I. (2005).
 Invariant visual representation by single neurons in the human brain.
 Nature, 435, 1102-1107.




-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Hunting for a Brainy Computer

2008-11-20 Thread Vladimir Nesov
On Fri, Nov 21, 2008 at 5:14 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Lastly, I did not say that the neuroscientists picked old, broken theories
 AND that they could have picked a better, not-broken theory  I only said
 that they have gone back to old theories that are known to be broken.
  Whether anyone has a good replacement yet is not relevant:  it does not
 alter the fact that they are using broken theories.  The neuron = concept
 'theory' is extremely broken:  it is so broken, that when neuroscientists
 talk about bayesian contingencies being calculated or encoded by spike
 timing mechanisms, that claim is incoherent.


Well, you know I read that paper ;-)
"A theory that is 30 or 40 years out of date," you said -- which
suggested that something up to date exists, hence the question.

The neural code can be studied in the areas where we know the
correlates. You could assign concepts to neurons and theorize about
their structure as dictated by the dynamics of the neural substrate. They
won't be word-level concepts, and you'd probably need to build bigger
abstractions on top, but there is no inherent problem with that.
Still, it's so murky even for simple correlates that no good overall
picture exists.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
Some notes/review.

Whether an AGI is conscious is independent of whether it'll
rebel or be dangerous. Answering any kind of question about
consciousness doesn't answer questions about safety.

How is the situation with p-zombies, atom-by-atom identical to
conscious beings, not resolved by saying that in this case
consciousness is an epiphenomenon, a meaningless notion?
http://www.overcomingbias.com/2008/04/zombies.html
http://www.overcomingbias.com/2008/04/zombies-ii.html
http://www.overcomingbias.com/2008/04/anti-zombie-pri.html

Jumping to the molecular framework as a description of human cognition is
unwarranted. It could be a description of an AGI design, or it could be a
theoretical description of a more general epistemology, but as presented
it's not general enough to automatically correspond to the brain.
Also, the semantics of atoms is tricky business; for all I know it keeps
shifting with the focus of attention, often dramatically. Saying that
the self is a cluster of atoms doesn't cut it.

Bottoming out the explanation of experience is a good answer, but you
don't need to point to specific moving parts of a specific cognitive
architecture to give it (I don't see how that helps the argument).
If you have a belief (more generally, a state of mind), it may indicate
that the world has a certain property, the world having that property
having caused you to have this belief; or it can indicate that you have
a certain cognitive quirk that caused this belief, a loophole in
cognition. There is always a cause; the trick is in correctly
dereferencing the belief.
http://www.overcomingbias.com/2008/03/righting-a-wron.html

Subjective phenomena might be unreachable for meta-introspection, but
that doesn't place them on a different level or make them unanalyzable;
you can in principle inspect them from the outside, using tools other
than the mind itself. You yourself just presented a model of what's
happening.

Meaning/information is relative: it can be represented within a basis,
for example within a mind, and communicated to another mind. Like
speed, it has no absolute value, but the laws of relativity, of
conversion between frames of reference, between minds, are precise and
not arbitrary. Possible-worlds semantics is one way to establish a basis
that allows communicating concepts, but maybe not a very good one.
Grounding in a common cognitive architecture is probably a good move,
but it doesn't have fundamental significance.

The predictions are not described carefully enough to appear to
follow from your theory. They use some of its terminology, but at a
level that allows literal translation into the language of perceptual
wiring, with a correspondence between qualia and the areas implementing
modalities or receiving perceptual input.

You didn't argue the general case of AGI, so how does it follow
that any AGI is bound to be conscious?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-14 Thread Vladimir Nesov
 be an analysis
 mechanism that allows the system to unpack its own concepts.  Even though I
 gave a visualization for how it works in my own AGI design, that was just for
 convenience, because exactly the same *type* of mechanism must exist in any
 AGI that is powerful enough to do extremely flexible things with its
 thoughts.

 Basically, if a system can reflect on the meanings of its own concepts, it
 will be aware of its consciousness.

 I will take that argument further in another paper, because we need to
 understand animal minds, for example.

It's a hard and iffy business trying to recast a different architecture
in a language that involves these bottomless concepts and qualia.
How do you apply your argument to AIXI? It doesn't map even onto my
design notes, where the architecture looks much more like yours, with
elements of description flying around and composing a scene or a plan
(in one of the high-level perspectives). In my case, the problem is
that the semantics of the elements of description is too fleeting and
context-dependent, and the description is not hierarchical, so
that when you get to the bottom, you find yourself at the top, in a
description of the same scene now seen from a different aspect.
Inference goes across the events in the environment+mind system
considered in time, so there is no intuitive counterpart to unpacking;
it all comes down to inference over events: what connects to what, what
can be inferred from what, what indicates what.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Re: Causality and science

2008-10-26 Thread Vladimir Nesov
On Sun, Oct 26, 2008 at 6:51 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Ben,

 My first thought here is that  - ironically given recent discussion - this
 is entirely a *philosophical* POV.

 Yes, a great deal of science takes the form below, i.e. of establishing
 correlations - and v. often between biological or environmental factors
 and diseases.

 However, it is understood that this is only provisional knowledge. The aim
 of science is always to move beyond it and to establish causal relations -
 and for example eliminate some correlations as not  causal. That science is
 about causality is decidedly not up to you.  What is at stake here is
 science's mechanistic worldview, which sees things as machines and matter in
 motion, one part moving [or causing] another. That is not, as you imply,
 optional. It is the foundation of science. Nor is it optional in technology
 or AI.

 Of course if you just want to be a philosopher...


As Ben noted, there is no established consensus on what causality is
and how to discover it in various processes. There are several points
of view on how to formalize this intuition, but they are at odds with
each other and have their respective weaknesses.

Most laws that science discovers apply in a context-sensitive
manner, and part of this context can't be controlled or detected when
the laws are applied. You can sometimes cheat the laws by selecting a
context in which they reliably break, but usually you are physically
or technologically unable to do so, or the law assumes that you don't
try. A dependency is usually said to be causal if it can't be made to
reliably break under expected variations of context, which
distinguishes it from mere correlation, where you can set up a context
that breaks it. My thoughts on this subject are written up here:

http://causalityrelay.wordpress.com/2008/08/01/causal-rules/
http://causalityrelay.wordpress.com/2008/08/04/causation-and-actions/
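
A toy sketch of that distinction (made-up variables; assumes numpy): a
correlation induced by a common cause breaks once we intervene on the
variable, while a genuinely causal dependency survives the same kind of
intervention.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # Common cause z induces a mere correlation between a and b.
    z = rng.normal(size=n)
    a = z + 0.1 * rng.normal(size=n)
    b = z + 0.1 * rng.normal(size=n)

    # Direct causal dependency: y really is computed from x.
    x = rng.normal(size=n)
    y = 2 * x + 0.1 * rng.normal(size=n)

    print("observed corr(a, b):", np.corrcoef(a, b)[0, 1])   # high
    print("observed corr(x, y):", np.corrcoef(x, y)[0, 1])   # high

    # "Set up a context that breaks it": set a and x exogenously.
    a_new = rng.normal(size=n)                      # a no longer follows z
    b_new = z + 0.1 * rng.normal(size=n)            # b still follows z
    x_new = rng.normal(size=n)
    y_new = 2 * x_new + 0.1 * rng.normal(size=n)    # y is still computed from x

    print("corr(a, b) after intervening on a:", np.corrcoef(a_new, b_new)[0, 1])  # ~0
    print("corr(x, y) after intervening on x:", np.corrcoef(x_new, y_new)[0, 1])  # still high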

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 3:17 AM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 This general sentiment doesn't help if I don't know what to do specifically.

 Well, given a C/C++ program that does have buffer overrun or stray
 pointer bugs, there will typically be a logical proof of this fact;
 current theorem provers are typically not able to discover this proof,
 but that doesn't rule out the possibility of writing a program that
 can. (If this doesn't clarify, then I'm probably misunderstanding your
 question, in which case can you rephrase?)


There are systems that do just that, constructing models of a program
and representing the conditions for the absence of a bug as huge formulas.
They work under various limitations; theorem-prover-based systems using
counterexample-guided abstraction refinement (the most semantically
accurate brute-force models) can handle programs of up to about
tens of thousands of lines of code. They don't scale. And they don't even
handle loops well. Then there are ways to make the analysis more scalable
or more precise, usually as a tradeoff. Most of what used to be AI that
enters this scene is theorem provers (which don't promise to solve all
the problems), and cosmetic statistical analyses here and there.

What I see as a potential way in for AI in program analysis is cracking
abstract interpretation: automatically inventing invariants and
proving that they hold, using these invariants to interface between the
results of analysis in different parts of the program and to answer
the questions posed before the analysis. This task has interesting
similarities with summarizing a world-model, where you need to perform
inference on a huge network of elements of physical reality (starting
with physical laws, if they were simple, or chess rules in a chess
game), basically by dynamically applying summarizing events and matching
simplified models. But it all looks almost AI-complete.
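
A minimal sketch of what "inventing an invariant" can look like, in the
spirit of interval abstract interpretation (my illustration only), for
the loop "i = 0; while i < 100: i = i + 1":

    # Abstract values are intervals (lo, hi).
    INF = float("inf")

    def join(a, b):
        # Least upper bound of two intervals.
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(old, new):
        # Classic interval widening: jump unstable bounds to infinity so iteration terminates.
        lo = old[0] if new[0] >= old[0] else -INF
        hi = old[1] if new[1] <= old[1] else INF
        return (lo, hi)

    state = (0, 0)                                   # interval for i at the loop head
    while True:
        body_in = (state[0], min(state[1], 99))      # guard: i < 100
        body_out = (body_in[0] + 1, body_in[1] + 1)  # effect of i = i + 1
        new = widen(state, join(state, body_out))
        if new == state:
            break
        state = new

    print("invariant at loop head: i in", state)     # (0, inf)
    print("invariant at loop exit: i >=", 100)       # from the negated guard

A real analyzer would follow widening with a narrowing pass to recover
i <= 100, but the shape of the result -- an invariant invented and checked
mechanically, then reused at the loop exit -- is the point.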

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 12:40 PM, Russell Wallace
[EMAIL PROTECTED] wrote:

 What I see as a potential way in for AI in program analysis is cracking
 abstract interpretation: automatically inventing invariants and
 proving that they hold, using these invariants to interface between the
 results of analysis in different parts of the program and to answer
 the questions posed before the analysis. This task has interesting
 similarities with summarizing a world-model, where you need to perform
 inference on a huge network of elements of physical reality (starting
 with physical laws, if they were simple, or chess rules in a chess
 game), basically by dynamically applying summarizing events and matching
 simplified models.

 Yes, that's the sort of thing I have in mind.

Note that people have been working on this specific technical problem for
30 years (see the scary amount of work by Cousot's lab,
http://www.di.ens.fr/~cousot/COUSOTpapers/ ), and they are still
tackling fixed invariants: finding ways to summarize program code as
transformations on domains containing families of assertions about
program state, to handle loops, to cover more features of the
programming languages they analyze. And it is all still imprecise and
able to find only relatively weak assertions. Open-ended invention
of assertions that reflect the effect of program code in a more adaptive
way isn't even on the horizon.


 But it all looks almost AI-complete.

 It's a very hard problem, but it's a long way short of AI-complete. I
 think it's worth aiming for as an intermediate stage between the
 current state of the art and "good morning, Dr. Chandra".


I don't know; it looks like a long way from here to there. I'm currently
shifting towards probabilistic analysis of huge formal systems in my
thinking about AI (which is why chess looks interesting again, in an
entirely new light). Maybe I'll understand this area better in the months
to come.


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-25 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 1:17 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Note that people have been working on this specific technical problem for
 30 years (see the scary amount of work by Cousot's lab,
 http://www.di.ens.fr/~cousot/COUSOTpapers/ ), and they are still
 tackling fixed invariants: finding ways to summarize program code as
 transformations on domains containing families of assertions about
 program state, to handle loops, to cover more features of the
 programming languages they analyze. And it is all still imprecise and
 able to find only relatively weak assertions. Open-ended invention
 of assertions that reflect the effect of program code in a more adaptive
 way isn't even on the horizon.

 Look at it this way: at least we're agreed it's not such a trivial
 problem as to be unworthy of a prototype AGI :-)

Except at this point I see nothing in common between this problem of
scalable analysis of huge formal systems and generation of code from a
hand-written specification (at least if the roadmap starts from code
generation and not the other way around, in which case code generation
can be seen as control guided by a model constructed by summarizing the
possible programs that have the required properties).


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
On Sun, Oct 26, 2008 at 12:17 AM, Mark Waser [EMAIL PROTECTED] wrote:
 No, it doesn't justify ad-hoc, even when perfect solution is
 impossible, you could still have an optimal approximation under given
 limitations.

 So what is an optimal approximation under uncertainty?  How do you know when
 you've gotten there?

 If you don't believe in ad-hoc then you must have an algorithmic solution .
 . . .


I pointed out only that it doesn't follow from AIXI that ad-hoc is justified.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-25 Thread Vladimir Nesov
On Sun, Oct 26, 2008 at 1:19 AM, Mark Waser [EMAIL PROTECTED] wrote:

 You are now apparently declining to provide an algorithmic solution without
 arguing that not doing so is a disproof of your statement.
 Or, in other words, you are declining to prove that Matt is incorrect in
 saying that we have no choice -- You're just simply repeating your
 insistence that your now-unsupported point is valid.


This is tedious. I didn't try to prove that the conclusion is wrong; I
pointed to a faulty reasoning step by showing that, in general, that
reasoning step is wrong. If you need to find the best solution to
x*3=7 but can only use integers, the perfect solution is impossible,
but that doesn't mean we are justified in using x=3 because it looks
good enough: x=2 is the best solution given the limitations.
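
As a trivial illustration of "optimal under the limitations" (a toy of
my own, obviously):

    # search the integer candidates and keep the one minimizing the error,
    # instead of settling for one that merely looks good enough
    best = min(range(-10, 11), key=lambda x: abs(3 * x - 7))
    print(best)   # 2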

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
Russell, in what capacity do you use that language? Do AI algorithms
write in it? How is it run? Where do the primitive operations come from?
From what you described, depending on the answers, it looks like a simple
hand-written lambda-calculus-like language with an interpreter might be
better than a real Lisp with all its bells and whistles.
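
For what it's worth, this is the scale of thing I mean by a minimal
hand-written language (a throwaway Python sketch, not a proposal):

    # A tiny lambda-calculus-style evaluator: terms are variables ('x'),
    # abstractions ('lam', var, body) and applications ('app', f, arg).
    def evaluate(term, env):
        if not isinstance(term, tuple):      # variable: look it up
            return env[term]
        if term[0] == 'lam':                 # abstraction: build a closure
            _, var, body = term
            return lambda arg: evaluate(body, {**env, var: arg})
        if term[0] == 'app':                 # application
            _, f, x = term
            return evaluate(f, env)(evaluate(x, env))
        raise ValueError(term)

    # (\f. \x. f (f x)) applied to a successor function and 0 gives 2
    twice = ('lam', 'f', ('lam', 'x', ('app', 'f', ('app', 'f', 'x'))))
    print(evaluate(('app', ('app', twice, 'succ'), 'zero'),
                   {'succ': lambda n: n + 1, 'zero': 0}))   # 2

Everything else -- numbers, data structures, new control constructs --
can be layered on top of such a core as library code.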

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 1:36 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Russel, in what capacity do you use that language?

 In all capacities, for both hand written and machine generated content.

Why mix AI-written code and your own code?


 Where primitive operations come from?

 An appropriately chosen subset of the Lisp primitives.

I meant: where does the need for primitives come from? What determines
the choice of primitive operations you need?


 From
 what you described, depending on the answers, it looks like a simple
 hand-written lambda-calculus-like language with interpreter might be
 better than a real lisp with all its bells and whistles.

 Yes, that's an intuitively appealing idea, which is why I started off
 there. But it turns out there is no natural boundary; the simple
 interpreted language always ends up needing more features until one is
 forced to acknowledge that it does, in fact, have to be a full
 programming language. Furthermore, much of the runtime ends up being
 spent in the object language; while machine efficiency isn't important
 enough to spend project resources implementing a compiler, given that
 other people have already implemented highly optimizing Lisp
 compilers, it's advantageous to use them.


You can always compile your own language down to an existing language
that already has an optimizing compiler. Needing many different
features just doesn't look like a natural thing for AI-generated
programs.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 2:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Needing many different
 features just doesn't look like a natural thing for AI-generated
 programs.

 No, it doesn't, does it? And then you run into this requirement that
 wasn't obvious on day one, and you cater for that, and then you run
 into another requirement, that has to be dealt with in a different
 way, and then you run into another... and you end up realizing you've
 wasted a great deal of irreplaceable time for no good reason
 whatsoever.

 So I figure I might as well document the mistake, in case it saves
 someone having to repeat it.


Well, my point was that maybe the mistake is the use of additional
language constructs, not their absence? You yourself should be able to
emulate anything in lambda calculus (you can add an interpreter for any
extension as part of a program), and so should your AI, if it's ever to
learn open-ended models.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 5:42 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Well, my point was that maybe the mistake is the use of additional
 language constructs, not their absence? You yourself should be able to
 emulate anything in lambda calculus (you can add an interpreter for any
 extension as part of a program), and so should your AI, if it's ever to
 learn open-ended models.

 Would you choose to program in raw lambda calculus if you were writing
 a Web server or an e-mail client? If not, why would you choose to do
 so when writing an AGI? It's not like it's an easier problem to start
 with -- it's harder, so being handicapped with bad tools is an even
 bigger problem.


I'd write it in a separate language, developed for human programmers,
but keep the language with which AI interacts minimalistic, to
understand how it's supposed to grow, and not be burdened by technical
details in the core algorithm or fooled by appearance of functionality
where there is none but a simple combination of sufficiently
expressive primitives. Open-ended learning should be open-ended from
the start. It's a general argument of course, but you need specifics
to fight it.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 5:54 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I'd write it in a separate language, developed for human programmers,
 but keep the language with which AI interacts minimalistic, to
 understand how it's supposed to grow, and not be burdened by technical
 details in the core algorithm or fooled by appearance of functionality
 where there is none but a simple combination of sufficiently
 expressive primitives. Open-ended learning should be open-ended from
 the start. It's a general argument of course, but you need specifics
 to fight it.

 Okay, I'll repeat the specific example from earlier; how would you
 handle it following your strategy?

 Example: you want the AI to generate code to meet a spec, which you
 provided in the form of a fitness function. If the problem isn't
 trivial and you don't have a million years to spare, you want the AI
 to read and understand the spec so it can produce code targeted to
 meet it, rather than rely on random trial and error.


I'd write this specification in a language it understands, including a
library that builds more convenient primitives from that foundation if
necessary.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 6:16 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I'd write this specification in language it understands, including a
 library that builds more convenient primitives from that foundation if
 necessary.

 Okay, so you'd waste a lot of irreplaceable time creating a homebrew
 language running on a slow interpreter stack when there are good
 efficient languages already available. In other words, you'd make the
 same mistake I did, and probably end up years down the line writing
 posts on mailing lists to try to steer other people away from it :-)


Again, specifics. What is this specification thing? What kinds of
tasks are to be specified in it? Where does it lead, where does it end?
In the context of my general argument I don't assume that you'd have
to write that much. If you have to write so much, that is a deviation
from my default, and you'd need to explain it to connect to this
argument. Basically, it's a tradeoff between adding complexity in the
core AI algorithm and adding complexity in a message that the AI must
handle, and I'd prefer to keep the core simple.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 7:02 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 3:58 PM, J Marlow [EMAIL PROTECTED] wrote:
 You can get the parse tree for an arbitrary string of Python (and even make
 it somewhat human readable), but I'm not sure if you can get it for
 underlying tree.  Once you have a parse tree, I believe that you can execute
 it.

 Josh
 Look into the parser module.

 Ah! In that case, Python might be a good choice of language for an AGI 
 project.


But you can get hold of the internal representation of any language and
emulate/compile/analyze it. That's not really the point; the point is
the simplicity of this process. Where simplicity matters is the question
that needs to be answered before that.
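
For the record, the Python version of getting hold of the
representation is about this short (one possible way among several,
using the ast module):

    import ast

    source = "x = 1 + 2\nprint(x)"
    tree = ast.parse(source)                     # the internal representation
    print(ast.dump(tree))                        # inspect/analyze it as data
    exec(compile(tree, "<generated>", "exec"))   # or run it

The interesting question is still what the analysis does with that
tree, not how it gets it.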

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 7:24 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 If that allows AI to understand the code, without directly helping it.
 In this case teaching it to understand these other languages might be
 a better first step.

 And to do that you need to give it a specification of those languages,
 and the ability to reason about the properties of a program given the
 code plus the specification of what it's written in; and you need a
 language in which to write the code to do all that; which brings us
 back to where I started this thread.

Again, if that helps.


 But, speaking of application to debugging software, I long ago came to
 conclusion that you'd need to include unreasonable amount of
 background information which you won't even be able to guess relevant
 to make AI do what you need with things that are not completely
 defined.

 It's a hard problem isn't it? Science fiction about Friendly AI
 rewriting the solar system is entertaining, but to really get to grips
 with the matter, start with trying to figure out how to write one that
 understands how to make the Firefox option always perform this
 action work for all file types.

 Where (if anywhere) do you see AGI going in our lifetimes, if you
 think software debugging will remain too difficult an application for
 the foreseeable future?


It's a specific problem: jumping right to generating code from a
specification doesn't work, because you'd need too much specification.
At the same time, a human programmer will need much less
specification, so it's a question of how to obtain and use background
knowledge, a general question of AI. The conclusion is that this is
not the way.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 8:29 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 It's a specific problem: jumping right to generating code from a
 specification doesn't work, because you'd need too much specification.
 At the same time, a human programmer will need much less
 specification, so it's a question of how to obtain and use background
 knowledge, a general question of AI. The conclusion is that this is
 not the way.

 Oh, it's not step one or step two, that's for sure! I did say it was a
 prospect for the longer term.


You are describing it as step one, with writing huge specifications
by hand in a formally interpretable language.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 8:47 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 You are describing it as a step one, with writing huge specifications
 by hand in formally interpretable language.

 I skipped a lot of details because this thread is on programming
 languages not my roadmap to AGI :-)


If it's not supposed to be a generic language war, that becomes relevant.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 9:28 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 If it's not supposed to be a generic language war, that becomes relevant.

 Fair point. On the other hand, I'm not yet ready to write a detailed
 road map out as far as fix user interface bugs in Firefox. Okay,
 here are some nearer term examples:

 Verification of digital hardware against formal models. (Narrow AI
 theorem provers, for all their limitations, are already making
 significant contributions in this area.)
 Better solutions to NP problems.
 Finding buffer overrun, bad pointer and memory leak bugs in C/C++ programs.

 All of these things can be formally defined without relying on large
 amounts of ill-defined background knowledge.


I write software for analysis of C/C++ programs to find bugs in them
(dataflow analysis, etc.). Where does AI come into this? I'd really
like to know.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] On programming languages

2008-10-24 Thread Vladimir Nesov
On Fri, Oct 24, 2008 at 10:30 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 I write software for analysis of C/C++ programs to find bugs in them
 (dataflow analysis, etc.). Where does AI come into this? I'd really
 like to know.

 Wouldn't you find AI useful? Aren't there bugs that slip past your
 software because it's not smart enough at figuring out what the code
 is doing?


This general sentiment doesn't help if I don't know what to do specifically.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] constructivist issues

2008-10-24 Thread Vladimir Nesov
On Sat, Oct 25, 2008 at 3:01 AM, Eric Baum [EMAIL PROTECTED] wrote:

 For example, to make this concrete and airtight, I can add a time element.
 Say I compute offline the answers to a large number of
 problems that, if one were to solve them with a computation,
 provably could only be solved by extremely long sequential
 computations, each longer than any sequential computation
 that a computer that could
 possibly be built out of the matter in your brain could compute in an hour,
 and I present you these problems and you answer 1 of them in half
 an hour. At this point, I am going, I think, to be pursuaded that you
 are doing something that can not be captured by a Turing machine.


Maybe your brain patches into a huge ultrafast machine concealed in an
extra dimension. We'd just need to find a way to hack in there and
exploit its computational potential on an industrial scale. ;-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Vladimir Nesov
On Thu, Oct 23, 2008 at 4:13 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 On Thu, Oct 23, 2008 at 8:39 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 If you consider programming an AI social activity, you very
 unnaturally generalized this term, confusing other people. Chess
 programs do learn (certainly some of them, and I guess most of them),
 not everything is hardcoded.

 They may learn tactics or even how to prune their tree better, but I
 know of no chess AI that learns how to play the same way you would
 say a person learns how to play.

Of course.

 And that's the whole point of this
 general AI thing we're trying to get across.. learning how to do a
 task given appropriate instruction and feedback by a teacher is the
 golden goose here..


Not necessarily. The ultimate teacher is our real environment in general.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 2:10 PM, Trent Waddington
[EMAIL PROTECTED] wrote:

 No-one can learn chess from playing chess alone.

 Chess is necessarily a social activity.

 As such, your suggestion isn't even sensible, let alone reasonable.


Current AIs learn chess without engaging in social activities ;-).
And chess might be a good drosophila for AI, if it's treated as such (
http://www-formal.stanford.edu/jmc/chess.html ).
This was uncalled for.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] constructivist issues

2008-10-22 Thread Vladimir Nesov
On Wed, Oct 22, 2008 at 7:47 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 The problem is to gradually improve overall causal model of
 environment (and its application for control), including language and
 dynamics of the world. Better model allows more detailed experience,
 and so through having a better inbuilt model of an aspect of
 environment, such as language, it's possible to communicate richer
 description of other aspects of environment. But it's not obvious that
 bandwidth of experience is the bottleneck here.

 No, but nor is it obvious that this *isn't* one of the major bottlenecks...


My intuition is that it's very easy to steadily increase the bandwidth
of experience: the more you know, the more you understand. If you start
from simple sensors/actuators (or even chess or Go), progress is
gradual and open-ended.



 It's probably just
 limitations of the cognitive algorithm that simply can't efficiently
 improve its model, and so feeding it more experience through tricks
 like this is like trying to get a hundredfold speedup in the
 O(log(log(n))) algorithm by feeding it more hardware.

 Hard to say...

 Remember, we humans have a load of evolved inductive bias for
 understanding human language ... AGI's don't ...  so using Lojban
 to talk to an AGI could be a way to partly make up for this deficit in
 inductive bias...


Any language at all is a way of increasing experiential bandwidth
about the environment. If bandwidth isn't essential, bootstrapping this
process through a language is equally irrelevant. At some point,
however inefficiently, language can be learned if the system allows
open-ended learning.

This is a question of not doing premature optimization of a program
that is not even designed yet, let alone implemented and profiled.


 It should be
 possible to get a proof-of-concept level results about efficiency
 without resorting to Cycs and Lojbans, and after that they'll turn out
 to be irrelevant.

 Cyc and Lojban are not comparable, one is a  knowledge-base, the other
 is a language

 Cyc-L and Lojban are more closely comparable, though still very different
 because Lojban allows for more ambiguity (as well as Cyc-L level precision,
 depending on speaker's choice) ... and of course Lojban is intended for
 interactive conversation rather than knowledge entry


(as tools towards improving bandwidth of experience, they do the same thing)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
C(N,S) is the total number of assemblies of size S that fit in the N
nodes, if you forget about overlaps.

Each assembly overlaps in X places with C(S,X)*C(N-S,S-X) other
assemblies: if another assembly overlaps with our assembly in X
places, then X of its nodes are inside the S nodes of our assembly,
which gives C(S,X) possible combinations, and the remaining S-X of its
nodes are outside the assembly, in the remaining N-S nodes, which gives
C(N-S,S-X) combinations, totaling C(S,X)*C(N-S,S-X). Thus, the total
number of assemblies that overlap with our assembly in O to S places
(including our assembly itself) is
T(N,S,O)=
C(S,S)*C(N-S,S-S)+
C(S,S-1)*C(N-S,S-(S-1))+
...+
C(S,O)*C(N-S,S-O)

Let's apply a trivial algorithm to our problem, adding an arbitrary
assembly to the working set merely if it doesn't conflict with any of
the assemblies already in the working set. Adding a new assembly bans
the T(N,S,O) assemblies that overlap with it in O or more places
(itself included) from the total pool of C(N,S) assemblies; thus each
new assembly in the working set lowers the number of remaining
assemblies that we'll be able to add later. Some assemblies from this
pool will be banned multiple times, but at least C(N,S)/T(N,S,O)
assemblies can be added without conflicts, since T(N,S,O) is the
maximum number of assemblies that each one in the pool is able to
subtract from the total pool of assemblies.
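
In code (just a direct transcription of the bound above, for anyone who
wants to play with the numbers):

    from math import comb

    def T(N, S, O):
        # assemblies overlapping a given one in O..S nodes (incl. itself)
        return sum(comb(S, X) * comb(N - S, S - X) for X in range(O, S + 1))

    def lower_bound(N, S, O):
        # at least this many assemblies with pairwise overlap below O
        return comb(N, S) // T(N, S, O)

    print(lower_bound(5000, 10, 2))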

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-21 Thread Vladimir Nesov
(I agree with the points I don't quote here)

General reiteration on notation: O-1 is the maximum allowed overlap,
overlap of O is already not allowed (it was this way in your first
message).

On Wed, Oct 22, 2008 at 3:08 AM, Ed Porter [EMAIL PROTECTED] wrote:

 T(N,S,O) = SUM FROM X = 0 TO S-O OF C(S, S-X)*C(N-S, X)


To match with the explanation of the size of the overlap, I intended
T(N,S,O)= C(S,S)*C(N-S,S-S)+ C(S,S-1)*C(N-S,S-(S-1))+ ...+C(S,O)*C(N-S,S-O)
to be parsed as
T(N,S,O) = SUM FROM X = O TO S OF C(S,X)*C(N-S,S-X)



 Comparing this to C(S,X)*C(N-S,S-X) --- it appears that T(N,S,O) is equal to
 the number of all combinations calculated by C(S,X)*C(N-S,S-X) where X is
 greater than O, Thus it is an attempt to enumerate all such combinations in
 which the overlap is more than O and thus which should be excluded from A.


I don't exclude them from A, as I don't know which of them will go to
A and which will get banned multiple times. I exclude them from
overall pool of C(N,S).



 --First, yes, each new assembly of length S added to the working set lowers
 the number of remaining assemblies that we'll be able to add later, but
 adding a given new assembly will ban not T(N,S,O) assemblies, but rather
 only all those assemblies that overlap with it by more than O nodes.


But T(N,S,O) IS the number of all those assemblies that overlap with a
given assembly by O or more nodes (having from X=O to X=S nodes of
overlap).



 --Second, what are the cases where assemblies will be banned multiple times
 that you mentioned in the above text?


It's one of the reasons it's a lower bound: in reality, some of the
assemblies are banned multiple times, which leaves more free
assemblies that could be added to the working set later.



 --Third --- as mentioned in my last group of comments --- why doesn't A =
 C(N,S) – T(N,S,O), since C(N,S) is the total number of combinations of
 length S that can be formed from N nodes, and T(N,S,O) appears to enumerate
 all the combinations that occur with each possible overlap value greater O.


It's only the overlap with one given assembly, blind to any other
interactions; it says nothing about an ideal combination of assemblies
that manages to keep the overlap between each pair in check.



 --Fifth, is possible that even though T(N,S,O) appears to enumerate all
 possible combinations in which all sets overlap by more than O, that it
 fails to take into account possible combinations of sets of size S in which
 some sets overlap by more than O and others do not?

 --in which case T(N,S,O) would be smaller than the number of all prohibited
 combinations of sets of length S.  Or would all the possible sets of length
 S which overlap be have been properly taken into account in the above
 formula for T?


T doesn't reason about combinations of sets, it's a filter on the
individual sets from the total of C(N,S).



 --Sixth, if C(S,X)*C(N-S,S-X) enumerates all possible combinations having an
 overlap of X, why can't one calculate A as follows?

 A = SUM FROM X = 0 TO O OF C(S,X)*C(N-S,S-X)


Because some of these sets intersect with each other, you can't
include them all.


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: Meaning, communication and understanding

2008-10-20 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 11:50 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 But in any case there is a complete distinction between D and L. The brain
 never sends entities of D to its output region but it sends entities of L.
 Therefore there must be a strict separation between language model and D.


"In any case" isn't good enough. Why does it even make sense to say
that the brain "sends entities"? From L? So far, all of this is
completely unjustified, and probably not even wrong.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Re: Value of philosophy

2008-10-20 Thread Vladimir Nesov
On Mon, Oct 20, 2008 at 2:33 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
 Hmm.  After the recent discussion it seems this list has turned into the
 philosophical musings related to AGI list.   Where is the AGI engineering
 list?


The problem isn't philosophy, but bad philosophy (the prevalent
variety). Good philosophy is necessary for AI, and philosophy has in
some sense always focused on the questions of AI. Even if most of the
existing philosophy is bunk, we need to build our own philosophy.
Frankly, I don't remember any engineering discussions on this list
that didn't fall on the deaf ears of most of the people, who didn't
believe the direction was worthwhile, and for good reasons (barring
occasional discussions of this or that logic, which might be
interesting, but then again).

We need to work more on the foundations, to understand whether we are
going in the right direction, at least at a level good enough to
persuade other people (which is NOT good enough in itself, but barring
that, who are we kidding).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
On Mon, Oct 20, 2008 at 6:37 PM, Ed Porter [EMAIL PROTECTED] wrote:

 The tables at http://www.research.att.com/~njas/codes/Andw/index.html#dist16
  indicates the number of cell assemblies would, in fact be much larger than
 the number of nodes, WHERE THE OVERLAP WAS RELATIVELY LARGE, which would be
 equivalent to node assemblies with undesirably high cross talk.

Ed, find my reply where I derive a lower bound. Even if the overlap must
be no more than 1 node, you can still have a number of assemblies that is
as many times larger than N as you like, if N is big enough, given fixed S.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-20 Thread Vladimir Nesov
On Tue, Oct 21, 2008 at 12:07 AM, Ed Porter [EMAIL PROTECTED] wrote:

 I built an excel spread sheet to calculate this for various values of N,S,
 and O.  But when O = zero, the value of C(N,S)/T(N,S,O) doesn't make sense
 for most values of N and S.  For example if N = 100 and S = 10, and O =
 zero, then A should equal 10, not one as it does on the spread sheet.


It's a lower bound.


 I have attached the excel spreadsheet I made to play around with your
 formulas, and a PDF of one page of it, in case you don't have access to
 Excel.


Your spreadsheet doesn't catch it for S=100 and O=1; it explodes when
you try to increase N.
But at S=10, O=2, you can see how the lower bound increases as you
increase N. At N=5000, the lower bound is 6000; at N=10^6, it's 2.5*10^8;
and at N=10^9 it's 2.5*10^14.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 The process of outwardly expressing meaning may be fundamental to any social
 intelligence but the process itself needs not much intelligence.

 Every email program can receive meaning, store meaning and it can express it
 outwardly in order to send it to another computer. It even can do it without
 loss of any information. Regarding this point, it even outperforms humans
 already who have no conscious access to the full meaning (information) in
 their brains.

 The only thing which needs much intelligence from the nowadays point of view
 is the learning of the process of outwardly expressing meaning, i.e. the
 learning of language. The understanding of language itself is simple.


Meaning is tricky business. As far as I can tell, the meaning Y of a
system X is assigned by an external model that relates the system X to
Y (where Y may be a physical object, or a class of objects, with each
individual object figuring into the model). Formal semantics works
this way (see http://en.wikipedia.org/wiki/Denotational_semantics ).
When you are thinking about an object, the train of thought depends on
your experience with that object, and will influence your behavior in
situations depending on information about that object. Meaning
propagates through the system according to the rules of the model; it
propagates inferentially in the model and not in the system, and so it
can reach places and states of the system not at all obviously
concerned with what this semantic model relates them to. And
conversely, meaning doesn't magically appear where the model doesn't
say it does: if the system is broken, meaning is lost, at least until
you come up with another model and relate it to the previous one.
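
As a toy version of what a model assigning meaning looks like (in the
spirit of the denotational semantics link, nothing more), take a tiny
expression language and a meaning function mapping each piece of syntax
to a mathematical object:

    # Syntax: numbers, variable names, and ('+', a, b). The meaning of an
    # expression is a function from environments to numbers; it lives in
    # the model, not in the strings themselves.
    def meaning(expr):
        if isinstance(expr, int):
            return lambda env: expr
        if isinstance(expr, str):
            return lambda env: env[expr]
        if expr[0] == '+':
            a, b = meaning(expr[1]), meaning(expr[2])
            return lambda env: a(env) + b(env)
        raise ValueError(expr)

    print(meaning(('+', 'x', 3))({'x': 4}))   # 7

Cut the expression off from the meaning function and the environment,
and the tuple ('+', 'x', 3) is just a data structure again.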

When you say that e-mail contains meaning and the network transfers
meaning, it is an assertion about a model of the content of e-mail that
relates meaning in the mind of the writer to bits in the memory of
machines. From this point of view, we can legitimately say that
meaning is transferred, and is expressed. But the same meaning doesn't
exist in e-mails if you cut them off from the mind that expressed the
meaning in the form of e-mails, or from the experience that transferred
that meaning into the mind.

Understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. It is the ability to translate pieces of information that have
nontrivial structure into your own basis. The normal use of
"understanding" applies only to humans; everything else generalizes
this concept in sometimes very strange ways. When we say that a person
understood something, in this language it's equivalent to the person
having successfully integrated that piece into his mind, with our model
of that person starting to attribute properties of that piece of
information to his thought and behavior.

So, you are cutting this knot at a trivial point. The difficulty is in
the translation, but you point at one side of the translation process
and say that this side is simple, then point at the other and say that
this side is hard. The problem is that it's hard to put a finger on
the point just after translation, but it's easy to see how our
technology, as a physical medium, transfers information ready for
translation. This outward appearance has little bearing on semantic
models.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 3:09 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 regarding denotational semantics:
 I prefer to think of the meaning of X as the fuzzy set of patterns
 associated with X.  (In fact, I recall giving a talk on this topic at a
 meeting of the American Math Society in 1990 ;-)


I like denotational semantics as an example (even though it doesn't
suggest uncertainty), because it's a well-understood semantic model
with meaning assigned to deep intermediate steps, in nontrivial ways.
It's easier to see by analogy to this how an abstract thought that
relates to misremembered experience from 20 years ago and that never
gets outwardly expressed still has meaning, and which meaning to
assign to it.

What form meaning takes depends on the model that assigns meaning to
the system, which, when we cross the line into the realm of human-level
understanding, becomes a mind; and so meaning, in a technical sense,
becomes a functional aspect of AGI. If an AGI works on something called
a "fuzzy set of patterns", then that is the meaning of what it models.
There is of course a second step where you yourself, as an engineer,
assign meaning to aspects of the operation of the AGI, and to relations
between the AGI and what it models, in your own head, but this
perspective loses technical precision, although to some extent it's
necessary.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Vladimir Nesov
On Sun, Oct 19, 2008 at 5:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I agree that understanding is the process of integrating different models,
 different meanings, different pieces of information as seen by your
 model. But this integrating just matching and not extending the own model
 with new entities. You only match linguistic entities of received
 linguistically represented information with existing entities of your model
 (i.e. with some of your existing patterns). If you could manage the matching
 process successfully then you have understood the linguistic message.

 Natural communication and language understanding is completely comparable
 with common processes in computer science. There is an internal data
 representation. A subset of this data is translated into a linguistic string
 and transferred to another agent which retranslates the message before it
 possibly but not necessarily changes its database.

 The only reason why natural language understanding is so difficult is
 because it needs a lot of knowledge to resolve ambiguities which humans
 usually gain via own experience.

 But alone from being able to resolve the ambiguities and being able to do
 the matching process successfully you will know nothing about the creation
 of patterns and the way how to work intelligently with these patterns.
 Therefore communication is separated from these main problems of AGI in the
 same way as communication is completely separated from the structure and
 algorithms of the database of computers.

 Only the process of *learning* such a communication would be  AI (I am not
 sure if it is AGI). But you cannot learn to communicate if there is nothing
 to communicate. So every approach towards AGI via *learning* language
 understanding will need at least a further domain for the content of
 communication. Probably you need even more domains because the linguistic
 ambiguities can resolved only with broad knowledge .

 And this is my point why I say that language understanding would yield costs
 which are not necessary. We can build AGI just by concentrating all efforts
 to a *single* domain with very useful properties (i.e. domain of
 mathematics).
 This would reduce the immense costs of simulating real worlds and
 additionally concentrating on *at least two* domains at the same time.


I think I see what you are trying to communicate. Correct me if I got
something wrong here.
You assume a certain architectural decision for the AIs in question when
you talk about this interpretation of the process of communication.
Basically, AI1 communicates with AI2, and they both work with two
domains: D and L, D being the internal domain and L being the
communication domain, the stuff that gets sent via e-mail. AI1
translates meaning D1 into message L1, which is transferred as L2 to
AI2, which then translates it to D2. You call the step L2-D2
"understanding" or "matching", also assuming that this process doesn't
need to change AI2, to make it change its model, to learn. You then
suggest that L doesn't need to be natural language, since the D for
natural language is the most difficult one, the real world, and that
instead we should pick an easier L and D and work on their interplay.

If AI1 can already translate between D and L, AI2 might need to learn
to translate between L and D on its own, knowing only D at the start,
and you suggest this ability as the central challenge of intelligence.

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within the AI's cognition (L and D),
and externalizing the communication domain from the core of the AI. Both
the world model and the language model support interaction with the
environment; there is no clear cognitive distinction between them. As a
given, interaction happens at the narrow I/O interface, and anything
else is a design decision for a specific AI (even the invariability of
I/O is one, a simplifying assumption that complicates the semantics of
time and more radical self-improvement). A sufficiently flexible
cognitive algorithm should be able to integrate facts about any domain,
becoming able to generate appropriate behavior in the corresponding
contexts.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Reasoning by analogy recommendations

2008-10-17 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 7:09 PM, Harry Chesley [EMAIL PROTECTED] wrote:
 I find myself needing to more thoroughly understand reasoning by analogy.
 (I've read/thought about it to a degree, but would like more.) Anyone have
 any recommendation for books and/or papers on the subject?


The classics: Hofstadter's "Fluid Concepts and Creative Analogies" and
the Structure-Mapping Engine.
See Peter Turney's reading list:
http://apperceptual.wordpress.com/2007/12/20/readings-in-analogy-making/
There is much literature on analogy-making, playing with words,
puzzles and situations.

I'd like to see a good analysis of why analogy makes sense, why it's
expected to work, and why it works when it does. Basically, analogy is
a way to discover a certain kind of concept, specified not by an
arrangement of properties, but by relations between the properties.
Analogy establishes relational similarity, and by extension allows one
to perform relational classification. Classification works because our
actual world supplies a limited number of substantially different
patterns, so you can identify a myriad of properties by recognizing
only a few ( http://causalityrelay.wordpress.com/2008/07/06/rules-of-thumb/
). Peter Turney recently made an interesting point about why analogy
works better than classification by collections of properties (
http://apperceptual.wordpress.com/2008/10/13/context/ ):

Context is about the relation between foreground and background. The
way we use contextual information to make predictions is that we
represent the relations between the foreground and the background
(e.g., the relation between weather and engine temperature), instead
of representing only the properties of the foreground (e.g., the
engine temperature out of context). Thus reasoning with contextual
information is an instance of analogical reasoning. Looking at it the
other way around, relational similarity is superior to attributional
similarity because the former is more robust than the latter when
there is contextual variation.
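
A toy way to see the difference (my own illustration, not Turney's):

    # Attributional similarity compares raw property values; relational
    # similarity compares a property against its background. Here the same
    # engine temperature means different things in different weather.
    readings = [
        {"weather": 30.0, "engine": 95.0},   # hot day, warm engine: normal
        {"weather": -5.0, "engine": 95.0},   # cold day, equally warm engine: suspicious
    ]

    def attributional(r):
        return r["engine"]                   # the property out of context

    def relational(r):
        return r["engine"] - r["weather"]    # the property related to its background

    for r in readings:
        print(attributional(r), relational(r))

Attributionally the two readings are identical; relationally they are
far apart, which is what lets the second one stand out.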



-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 7:01 PM, Ed Porter [EMAIL PROTECTED] wrote:

 The answer to this question would provide a rough indication of the
 representational capacity of using node assemblies to represent concepts vs
 using separate individual node, for a given number of nodes.  Some people
 claim the number of cell assemblies that can be created with say a billion
 nodes that can distinctly represent different concepts far exceeds the
 number of nodes.  Clearly the number of possible subsets of say size 10K out
 of 1G nodes is a combinatorial number much larger than the number of
 particles in the observable universe, but I have never seen any showing of
 how many such subsets can be created that would have a sufficiently low
 degree of overlap with all other subsets as to support reliable
 representation for separate individual concepts.

 If the number such node subsets that can be created with sufficiently low
 overlap with any other node to clearly and reliably represent individual
 concepts is much, much larger than the number of nodes, it means cell
 assemblies might be extremely valuable to creating powerful AGI's.  If not,
 not.


Ed,

Clearly, the answer is huge (you can just play with a simple special
case to get a feel for the lower bound), and it hardly matters how
huge. It's much more important what you are going to do with it and
what it means to you. How do these assemblies form, what do they do,
how do they learn, how do they react to input, how do you make them
implement a control algorithm, how do you direct them to do what you
want? And what does it matter how many of them are potentially there,
if you are calculating this estimate based on constraints divorced
from the algorithms, which should be the source of any constraints in
the first place, and which would probably make the whole concept of
separate assemblies meaningless.

I found the cell assembly language not very helpful, although the idea
of representing many patterns by a few nodes is important. For example,
the state of a set of nodes (cells) can be regarded as a point in
Hamming space (the space of binary vectors), and the dynamics of this
set of nodes as an operation on this space, taking the state to a new
point depending on the previous point. The operation works in such a
way that many points are mapped to one point, thus the trajectory of
the state is stable (so much for redundancy). The length of this
trajectory before it loops is the number of different states, which
could be on the order of the powerset. Some of the nodes may be
controlled by external input, shifting the state and interfering with
the trajectory. Since points in the neighborhoods are attracted to the
trajectory, this reduces the volume that the trajectory can span
accordingly. States along a trajectory enumerate a temporal code that
can be used to learn temporal codes (by changing the direction of the
trajectory, or the attracted neighborhoods), and by extension any other
codes. Multiple separate trajectories are separate states, which can
mix with each other and establish transitions conditional on external
input, thus creating combined trajectories. And so on. I'll work my way
up to this in maybe a couple of months on the blog, after sequences on
fluid representation, information and goals, and inference surface.
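
A crude sketch of the picture I have in mind (a random many-to-one map
standing in for whatever dynamics the nodes actually implement):

    import random

    N = 16                                   # number of binary nodes
    random.seed(0)
    # a random update rule on the 2^N states of the node set
    step = [random.randrange(2 ** N) for _ in range(2 ** N)]

    def trajectory(state):
        # follow the dynamics until the trajectory revisits a state
        seen = {}
        t = 0
        while state not in seen:
            seen[state] = t
            state = step[state]
            t += 1
        return t, t - seen[state]            # steps until a repeat, loop length

    print(trajectory(0))

Because the map is many-to-one, neighboring states get funneled onto
the same trajectory, which is the redundancy, and the trajectory itself
is the code.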

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 12:46 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 They could be used much like normal nodes, except that a given set of basic
 nodes that form a conceptual node would be auto-associative within their own
 population, and they would have some of the benefits of redundancy,
 robustness, resistance to noise, and gradual forgetting, that I mentioned
 earlier.

 Robert Hecht-Nielsens Mechanization of Cognition particularly in Appendix
 section 3.A.3 and 3.A.4 gives a good description of how a particular type of
 neural assemblies can be used for semantic representation and imagination.
 This article text used to be available on the web, but I don't see it
 anymore.  It was published as chapter 3 in Bar-Cohen, Y. [Ed.] Biomimetics:
 Biologically Inspired Technologies, CRC Press, Boca Raton, FL (2006).


They can't work quite like normal nodes, because there are too many of
them and you can't learn so many mappings (edges between these
nodes) independently of each other. Associations will be heavily
interdependent, which is one example of a thing that makes the concept
of cell assemblies bad, unless clearly shown otherwise. When
assemblies are heavily dependent on each other, the assembly-as-node
metaphor just breaks down. Besides, you can't have a dynamic that
separately implements association and redundancy between different
configurations of nodes: redundancy within assemblies, and association
between assemblies. These pressures will also be entangled, blurring
the boundaries of the assemblies. The main theme is that assemblies
don't float in the aether, but are properties of a cognitive dynamic
implemented in terms of nodes, and should be shown to make sense in
this context.

Based on the confabulation papers, I find Hecht-Nielsen deeply confused.
If his results even mean anything, he does a poor job of explaining
what they are, why they matter, and what makes what he says new. Another
slim possibility is that his theory is way beyond my background, but all
the cues point the other way.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 5:04 AM, charles griffiths
[EMAIL PROTECTED] wrote:
 I think A = floor((N-O)/(S-O)) * C(N,O) / (O+1).


Doesn't work for O=2 and S=2 where A=C(N,2).

P.S. Is it a normal order to write arguments of C(,) this way? I used
the opposite.

P.P.S. In the original problem, O-1 is the maximum allowed overlap; in
my last reply I used O incorrectly in the first paragraph, but checked
and used it correctly in the second, with the lower bound.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Brain-muscle interface helps paralysed monkeys move

2008-10-16 Thread Vladimir Nesov
See this article:
http://scienceblogs.com/neurophilosophy/2008/10/brain_muscle_interface_helps_paralysed_monkeys_move.php

paper:
http://www.nature.com/nature/journal/vaop/ncurrent/full/nature07418.html
Direct control of paralysed muscles by cortical neurons
Chet T. Moritz, Steve I. Perlmutter & Eberhard E. Fetz
From abstract:
Here we show that Macaca nemestrina monkeys can directly control
stimulation of muscles using the activity of neurons in the motor
cortex, thereby restoring goal-directed movements to a transiently
paralysed arm. Moreover, neurons could control functional stimulation
equally well regardless of any previous association to movement, a
finding that considerably expands the source of control signals for
brain-machine interfaces. Monkeys learned to use these artificial
connections from cortical cells to muscles to generate bidirectional
wrist torques, and controlled multiple neuron–muscle pairs
simultaneously.


What is remarkable is that readout from a single cell made it adapt to
perform a specific action. It looks like the only indication the brain had
that this cell was now controlling the wrist was the sensory
feedback from all the usual sensors. I'd say it's a challenge for
models of knowledge representation to be able to learn such
dependencies. The output is collected not in some sort of prewired
attractor that gathers input from many cells, routed from pretrained
processing stages, but just from a cell in the middle of nowhere.
The brain is able to capture the feedback loop through the environment
starting from a single cell, and to include the activity of that cell
in a goal-directed control process, based on its effect on the
environment.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 5:31 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 I still think this combinatorics problem is identical to the problem of
 calculating the  efficiency of bounded-weight binary codes, as I explained
 in a prior email...


Yes, it seems to be a well-known problem.
http://en.wikipedia.org/wiki/Constant-weight_code

(2 Charles: Apart from some trivial observations, it is generally
impossible to compute these numbers in a straightforward way.)

A(N, 2*(S-O+1), S) is the answer to Ed's problem (it's the maximum size of
a constant-weight binary code, not a bounded-weight one).

My lower bound is trivial, and answers the question. It's likely
somewhere in the references there.
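For small parameters this is easy to poke at numerically. A rough sketch (my
own illustration, not from the thread; it builds a code greedily rather than
exactly, so it only gives a lower bound on A(n, d, w), and the toy numbers at
the bottom are made up):

from itertools import combinations

def constant_weight_code_greedy(n, d, w):
    # Greedily collect length-n words of Hamming weight w with pairwise
    # Hamming distance >= d; the result's size lower-bounds A(n, d, w).
    code = []
    for support in combinations(range(n), w):
        word = set(support)
        # two weight-w words with overlap t are at Hamming distance 2*(w - t)
        if all(2 * (w - len(word & other)) >= d for other in code):
            code.append(word)
    return code

N, S, O = 8, 3, 2                  # made-up toy numbers
d = 2 * (S - O + 1)                # enforces overlap of at most O - 1 cells
print(len(constant_weight_code_greedy(N, d, S)), "-- a lower bound on A(N, d, S)")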

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 6:05 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Right, but his problem is equivalent to bounded-weight, not constant-weight
 codes...


Why? Bounded-weight codes only put an upper bound on the Hamming weight,
which corresponds to cell assemblies of size S or less, whereas in
Ed's problem assemblies have a fixed size of S, which corresponds to
constant Hamming weight.

From the article, http://www.jucs.org/jucs_5_12/a_note_on_bounded/Bent_R.html

The weight, w, of a binary word, x, is equal to the number of 1s in
x. For a constant-weight (w) code, every word in the code has the same
weight, w. In a bounded-weight (w) code, every word has at most w
ones.
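To make the distinction concrete, a tiny sketch (my own toy illustration,
not from the article):

def weight(word):
    # number of 1s in a binary word given as a string like "0110"
    return word.count("1")

def is_constant_weight(code, w):
    return all(weight(x) == w for x in code)

def is_bounded_weight(code, w):
    return all(weight(x) <= w for x in code)

code = ["0110", "1010", "0011"]
print(is_constant_weight(code, 2))  # True: every word has exactly two 1s
print(is_bounded_weight(code, 2))   # True: constant weight implies bounded weight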

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Who is smart enough to answer this question?

2008-10-16 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 6:26 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Oh, you're right...

 I was mentally translating his problem into one that made more sense to me
 biologically, as I see no reason why one would assume all cell assemblies to
 have a fixed size ... but it makes slightly more sense to assume an upper
 bound on their size...


Which is why I don't like this whole fuss about cell assemblies in the
first place, and prefer free exploration of Hamming space. ;-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Networks, memory capacity, grid cells...

2008-10-16 Thread Vladimir Nesov
On Fri, Oct 17, 2008 at 6:38 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Well, coding theory does let you derive upper bounds on the memory capacity
 of Hopfield-net type memory  models...

 But, the real issue for Hopfield nets is not theoretical memory capacity,
 it's tractable incremental learning algorithms

 Along those lines, this work is really nice...

 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.817

 I wonder how closely that method lets you achieve the theoretical upper
 bound.  Unfortunately, current math seems inadequate to discover this, but
 empirics could tell us.  If anyone wants to explore it, we have a Java
 implementation of Storkey's palimpsest learning scheme for Hopfield nets,
 specialized for simple experiments with character arrays.


I'm currently experimenting with a kind of oscillator-based network
that contains binary oscillators of different periods (to collectively
enumerate a big portion of Hamming space) and an adaptive phase reset
that navigates the state space and allows patterns to be captured. The
application is different though: I'm trying to capture continuous
features by tracks of trajectories in Hamming space, both in time
(sequences of inputs) and in input feature space (among different
instances of input that can be separated in time). I can't find relevant
literature for this one (the idea derives from neuroscience, and their
models look like they try to actively prevent theoretical understanding
of what's really going on).


[1] Microstructure of a spatial map in the entorhinal cortex
by: Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser,
Edvard I Moser
Nature (19 June 2005)

[2] Reset of human neocortical oscillations during a working memory task.
by: DS Rizzuto, JR Madsen, EB Bromfield, A Schulze-Bonhage, D Seelig,
R Aschenbrenner-Scheibe, MJ Kahana
Proceedings of the National Academy of Sciences of the United States
of America, Vol. 100, No. 13. (24 June 2003), pp. 7931-7936.
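For reference, here is a toy sketch of the kind of Hopfield-style memory
discussed above, using the classical Hebbian outer-product rule (Storkey's
palimpsest scheme swaps in a different incremental weight update, see the
linked paper; this is neither that nor the Java implementation mentioned):

import numpy as np

def train_hebbian(patterns):
    # patterns: array of shape (p, n) with entries +1/-1
    p, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, state, steps=20):
    state = state.copy()
    for _ in range(steps):             # synchronous updates, enough for a toy demo
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # 3 random patterns, 64 units
W = train_hebbian(patterns)
noisy = patterns[0].copy()
noisy[:8] *= -1                                # corrupt 8 of the 64 bits
restored = recall(W, noisy)
print("bits recovered:", int((restored == patterns[0]).sum()), "/ 64")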

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-15 Thread Vladimir Nesov
On Wed, Oct 15, 2008 at 5:38 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Tue, 10/14/08, Terren Suydam [EMAIL PROTECTED] wrote:

 Matt,

 Your measure of intelligence seems to be based on not much
 more than storage capacity, processing power, I/O, and
 accumulated knowledge. This has the advantage of being
 easily formalizable, but has the disadvantage of missing a
 necessary aspect of intelligence.

 Usually when I say intelligence I mean amount of knowledge, which can
 be measured in bits. (Well not really, since Kolmogorov complexity is not
 computable). The other measures reduce to it. Increasing memory allows more
 knowledge to be stored. Increasing processing power and I/O bandwidth allows
 faster learning, or more knowledge accumulation over the same time period.

 Actually, amount of knowledge is just an upper bound. A random string has
 high algorithmic complexity but is not intelligent in any meaningful sense. My
 justification for this measure is based on the AIXI model. In order for an 
 agent
 to guess an environment with algorithmic complexity K, the agent must be able
 to simulate the environment, so it must also have algorithmic complexity K. An
 agent with higher complexity can guess a superset of environments that a lower
 complexity agent could, and therefore cannot do worse in accumulated reward.


The interstellar void must be astronomically intelligent, with all its
incompressible noise...
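The quip can be made concrete with compressed size as a crude (and only
upper-bounding) stand-in for algorithmic complexity -- my own throwaway
illustration, not from the thread:

import os
import zlib

noise = os.urandom(10_000)             # stand-in for incompressible noise
structured = b"abc" * (10_000 // 3)    # highly regular data

print(len(zlib.compress(noise)))       # close to 10000 bytes: barely compressible
print(len(zlib.compress(structured)))  # a few dozen bytes

Random noise scores as high as anything by this measure without being
intelligent in any useful sense.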

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: COMP = false? (was Re: [agi] Advocacy Is no Excuse for Exaggeration)

2008-10-15 Thread Vladimir Nesov
On Wed, Oct 15, 2008 at 2:18 PM, David Hart [EMAIL PROTECTED] wrote:
 On Wed, Oct 15, 2008 at 5:52 PM, Colin Hales [EMAIL PROTECTED]
 wrote:

 So you'll just have to wait. Sorry. I also have patent/IP issues.

 Exactly what qualia am I expected to feel when you say the words
 'Intellectual Property'? (that's a rhetorical question, just in case there
 was any doubt!)

 I'd like to suggest that the COMP=false thread be considered a completely
 mis-placed, undebatable and dead topic on the AGI list.

That'd be great.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 12:30 AM, Derek Zahn [EMAIL PROTECTED] wrote:
 How about this:

 Those who *do* think it's worthwhile to move to the forum:  Instead of
 posting email responses to the mailing list, post them to the forum and then
 post a link to the response to the email list, thus encouraging threads to
 continue in the more advanced venue.

 I shall do this myself from now on.  I have not participated much on this
 list lately due to my current work schedule but will make an effort to do
 so.  If used, I do think the forum could help solve some of these META
 issues.


I prefer the mailing list, because it has a convenient mechanism for
receiving and managing messages. Messages are grouped by threads
through the magic of gmail, I see every update and know which threads
are boring and which are not, and I have filters set up to mark the
potentially more interesting threads. Forums are more difficult, and I
don't want another workflow to worry about. Using notifications
complicates access, and transparent notifications that post all the
content to e-mail make a forum equivalent to a mailing list anyway.
A mailing list also forces better coherence in the discussion.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: RSI without input (was Re: [agi] Updated AGI proposal (CMR v2.1))

2008-10-15 Thread Vladimir Nesov
On Thu, Oct 16, 2008 at 12:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Among other reasons: Because, in the real world, the scientist with an IQ of
 200 is **not** a brain in a vat with the inability to learn from the
 external world.

 Rather, he is able to run experiments in the external world (which has a far
 higher algorithmic information than him, by the way), which give him **new
 information** about how to go about making the scientist with an IQ of 220.

 Limitations on the rate of self-improvement of scientists who are brains in
 vats, are not really that interesting

 (And this is separate from the other critique I made, which is that using
 algorithmic information as a proxy for IQ is a very poor choice, given the
 critical importance of runtime complexity in intelligence.  As an aside,
 note there are correlations between human intelligence and speed of neural
 processing!)


Brain-in-a-vat self-improvement is also an interesting and worthwhile
endeavor. One problem to tackle, for example, is to develop more
efficient optimization algorithms that find better plans faster
according to the goals (and then naturally apply these
algorithms to decision-making during further self-improvement).
Advances in algorithms can bring great efficiency, and looking at what
modern computer science has come up with, this efficiency rarely requires
an algorithm of any significant complexity. There is plenty of ground
to cover in the space of simple things; limitations on complexity are
pragmatically void.
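A standard illustration of that last point (my example, not from the
thread): an exponential-time computation becomes linear-time with a few
extra lines, and no deep complexity is added.

from functools import lru_cache

def fib_naive(n):
    # exponential time: recomputes the same subproblems over and over
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # same recursion plus a cache: only a linear number of distinct calls
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(200))    # instant; fib_naive(200) would effectively never finish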

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Updated AGI proposal (CMR v2.1)

2008-10-14 Thread Vladimir Nesov
On Tue, Oct 14, 2008 at 8:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Ben,
 If you want to argue that recursive self improvement is a special case of
 learning, then I have no disagreement with the rest of your argument.

 But is this really a useful approach to solving AGI? A group of humans can
 generally make better decisions (more accurate predictions) by voting than any
 member of the group can. Did these humans improve themselves?

 My point is that a single person can't create much of anything, much less an
 AI smarter than himself. If it happens, it will be created by an organization 
 of
 billions of humans. Without this organization, you would probably not think to
 create spears out of sticks and rocks.

 That is my problem with the seed AI approach. The seed AI depends on the
 knowledge and resources of the economy to do anything. An AI twice as smart
 as a human could not do any more than 2 people could. You need to create an
 AI that is billions of times smarter to get anywhere.

 We are already doing that. Human culture is improving itself by accumulating
 knowledge, by becoming better organized through communication and
 specialization, and by adding more babies and computers.



You are slipping from a strained interpretation of the technical
argument to the informal point that the argument was intended to
rationalize. If the interpretation of the technical argument is weaker
than the original informal argument it was invented to support, there
is no point in the technical argument. Using the fact that 2+2=4 won't
give technical support to, e.g., the philosophy of solipsism.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] a mathematical explanation of AI algorithms?

2008-10-08 Thread Vladimir Nesov
On Thu, Oct 9, 2008 at 1:36 AM, Valentina Poletti [EMAIL PROTECTED] wrote:
 And here is your first question on AGI.. actually rather on AI. It's not so
 trivial though.
 Some researchers are telling me that no-one has actually figured out how AI
 algorithms, such as ANNs and genetic algorithms work.. in other words there
 is no mathematical explanation to prove their behavior. I am simply not
 happy with this answer. I always look for mathematical explanations.
 Particularly, I am not talking about something as complex as AGI, but
 something as simple as a multi-layer perceptron. Anybody knows anything that
 contradicts this?
 Thanks

Read an introductory text on machine learning to get up to speed --
it's the math of AI, and there's lots of it: statistics, information
theory. It's an important perspective from which to look at the less
well understood hacks, to feel the underlying structure.
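To the original question: even a single-layer perceptron is perfectly well
characterized mathematically (the perceptron convergence theorem, or
gradient descent on a convex loss). A minimal sketch of the latter, with
made-up toy data, assuming numpy is available:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)       # a linearly separable toy problem
X = np.hstack([X, np.ones((200, 1))])           # bias column
w = np.zeros(3)

for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # gradient of the mean logistic loss
    w -= 0.5 * grad

print("training accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())  # close to 1.0

The update rule is a direct consequence of the chain rule; nothing about the
behavior is unexplained, it's just covered in the statistics and optimization
literature rather than under the AI label.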

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] COMP = false

2008-10-06 Thread Vladimir Nesov
On Mon, Oct 6, 2008 at 3:19 AM, Colin Hales
[EMAIL PROTECTED] wrote:
 Hi Vladimir,
 I did not say the physics was unknown. I said that it must exist. The
 physics is already known.Empirically and theoretically. It's just not
 recognised in-situ and by the appropriate people. It's an implication of the
 quantum non-locality underpinning electrodynamics. Extensions of the physics
 model to include the necessary effects are not part of the discussion and
 change nothing. This does not alter the argument, which is empirical. Please
 accept and critique it on this basis. I am planning an experiment as a
 post-doc to validate the basic principle as it applies in a neural context.
 It's under development now. It involves electronics and lasers and all the
 usual experimental dross.

 BTW I don't do non-science. Otherwise I'd just be able to sit back and
 declare my world view complete and authoritative, regardless of the
 evidence, wouldn't I? That is so not me. I am an engineer If I can't
 build it then I know I don't understand it. Nothing is sacred. At no point
 ever will I entertain any fictional/untestable/magical solutions. Like
 assuming an unproven conjecture is true. Nor will I idolise the 'received
 view' as having all the answers and force the natural world to fit my
 prejudices in respect of what 'explanation' entails. Especially when major
 mysteries persist in the face of all explanatory attempts. That's the worst
 non-science you can have... so I'm rather more radically empirical and dry,
 evidenced based but realistic in expectations of our skills as explorers of
 the natural world ...than it might appear. In being this way I hope to be
 part of the solution, not part of the problem.


You can understand a scene when you watch an animated movie on TV, for
pete's sake! There is no physics in a reductionist universe that would
know how to patch in information about a scene that is thousands of miles
away, years in the past, and only ever existed virtually. You can't adapt
known physics to do THAT. You'd need an intelligent meddler. And you
can't escape flaws in your reasoning by wearing a lab coat.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] COMP = false

2008-10-06 Thread Vladimir Nesov
On Mon, Oct 6, 2008 at 9:14 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

  And you
 can't escape flaws in your reasoning by wearing a lab coat.

 Maybe not a lab coat... but how about my trusty wizard's hat???  ;-)

 http://i34.tinypic.com/14lmqg0.jpg


Don't you know that only a clown suit interacts with probability theory
in the true Bayesian way? ;-)
http://www.overcomingbias.com/2007/12/cult-koans.html

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] COMP = false

2008-10-04 Thread Vladimir Nesov
Basically, you are saying that there is some unknown physics mojo
going on. The mystery of mind looks as mysterious as the mystery of
physics, therefore it requires the mystery of physics and can derive
further mysteriousness from it, becoming inherently mysterious. It's
bad, bad non-science.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Improve generators, not products.

2008-09-22 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 10:37 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 An organization is most efficient when its members specialize. It is true we
 don't need to build millions of schools to train child AI's. But every person 
 has
 some knowledge that is unique to their job. For example, they know their
 customers, vendors, and co-workers, who to go to for information. It costs a
 company a couple year's salary to replace a white collar employee, including
 the hidden costs of the new employee repeating all the mistakes made by the
 previous employee in learning the job. I estimate that about 5% of the 
 knowledge
 useful to your job cannot be copied from someone else. This fraction will 
 increase
 as the economy becomes more efficient and there is more specialization of job
 functions.


But where did this unique knowledge, all the details of our culture
and economy, come from? We figured it out by applying our intelligence
to solving human needs. Our solutions might even be vastly
suboptimal, even if good enough for the time being. (Also see
Yudkowsky's essay The Power of Intelligence
http://www.singinst.org/blog/2007/07/10/the-power-of-intelligence/ )

You can construct a mechanical hammer, transferring the form of
action. You can design a mechanical hammer factory, introducing mass
production of cheap hammers, applied in a suboptimal way by armies of
construction workers to every problem. You can open an institution and
teach engineers who design appropriate tools for every problem and
instruct the machines of the factory to cheaply produce what you need.
You can open a university that trains engineers of different
specialities. You can set up a market that figures out which
specialties and organizations are needed.

Introducing AIs at the construction-worker stage, where you give them
a standard hammer and teach them to strike nails, misses the point.
You need to apply AI at the highest level, where it starts to solve
the problems from the root, once and for all: deciding how to solve
the problems, designing appropriate tools, learning the required facts,
deploying the solutions.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Perceptrons Movie

2008-09-22 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 11:40 PM, Eric Burton [EMAIL PROTECTED] wrote:
 CREATIVITY MACHINE

 http://video.google.ca/videoplay?docid=4007105149032380914ei=PvTXSJONKI_8-gHFhOi-Agq=artificial+lifevt=lf


From the video:
The creativity machine is a model of cognition and intelligence. You
basically need these two pieces: you need perception, in a form of
perceptron, you need imagination in a form of an imagitron; they need
to get together into a brainstorming session. So, they have the
essential features required to create a conscious, transhuman level
intelligence.

Hilarious -- in a sad, dull way. See the picture of a 6-layer neural
network at the link below.

Stephen Thaler
Creativity machine: http://www.imagination-engines.com/cm.htm

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Perceptrons Movie

2008-09-22 Thread Vladimir Nesov
On Tue, Sep 23, 2008 at 12:23 AM, Eric Burton [EMAIL PROTECTED] wrote:
 Creativity machine: http://www.imagination-engines.com/cm.htm

 Six layers, though? Perhaps the result is magic!


Yes, and magic only works in la-la land.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 2:14 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 I'm not building AGI. (That is a $1 quadrillion problem).


How do you estimate your confidence in this assertion that developing
AGI (singularity-capable) requires this insane effort (what odds of a
bet would you take on it)? It is an easily falsifiable statement: if a
small group implements AGI, you'll be proven wrong.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 3:37 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 Another possibility is that we will discover some low cost shortcut to AGI.
 Recursive self improvement is one example, but I showed that this won't work.
 (See http://www.mattmahoney.net/rsi.pdf ). So far no small group (or even a
 large group like Google) has produced AGI, in spite of efforts in this 
 direction
 since the 1950's. In fact, there has been very little theoretical or practical
 progress since 1965. It seems like if there was a simple way to do it, we 
 would
 have figured it out by now.


Hence the question: you are making a very strong assertion by
effectively saying that there is no shortcut, period (in the
short-term perspective, anyway). How sure are you of this assertion?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 3:56 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Hence the question: you are making a very strong assertion by
 effectively saying that there is no shortcut, period (in the
 short-term perspective, anyway). How sure are you in this
 assertion?

 I can't prove it, but the fact that thousands of smart people have worked on
 AI for decades without results suggests that an undiscovered shortcut is about
 as likely as proving P = NP. Not that I expect people to stop trying to solve
 either of these...


So, do you think that there is at least, say, 99% probability that AGI
won't be developed by a reasonably small group in the next 30 years?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 5:40 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- On Sun, 9/21/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 So, do you think that there is at least, say, 99%
 probability that AGI
 won't be developed by a reasonably small group in the
 next 30 years?

 Yes, but in the way that the internet was not developed by a small
 group. A small number of people designed the basic architecture
 (TCP/IP, HTTP, HTML, etc), but it took a huge number of people to
 develop it.


Sure. But this confidence is too high: you possess no technical
argument, only some trends and sketchy descriptions; you basically
rely on your intuition to integrate the facts into a prediction. That
is known not to work. In much more empirically grounded domains, experts
give 80% confidence in their predictions and are right only 40% of the
time. You can't trust yourself in cases like this. It only gets worse
when you add nontrivial details to your prediction.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Vladimir Nesov
On Mon, Sep 22, 2008 at 5:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

 As I explained, my cost estimate is based on the value of the
 global economy and the assumption that AGI would automate it
 by replacing human labor.


The cost of automating agriculture (in the modern sense) isn't equal to
the cost of the manual labor of nearly 100% of the population that was
required before it was automated. Nature isn't fair; there is no
law that says a project provides benefit close to its cost. You
don't need groups of farmers all over the Earth teaching every grain
harvester combine about harvesting for 30 years.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Vladimir Nesov
On Fri, Sep 19, 2008 at 1:31 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Lets distinguish between the two major goals of AGI. The first is to automate
 the economy. The second is to become immortal through uploading.

 Umm, who's goals are these?  Who said they are the [..] goals of
 AGI?  I'm pretty sure that what I want AGI for is going to be
 different to what you want AGI for as to what anyone else wants AGI
 for.. and any similarities are just superficial.


And to boot, neither of you really knows what you want. You may try
to present plans as points designating a certain level of utility you
want to achieve through AI, by showing feasible plans that are quite
good in themselves. But these are neither the best scenarios
available, nor what will actually come to pass.

See this note by Yudkowsky:

http://www.sl4.org/archive/0212/5957.html

So if you're thinking that what you want involves chrome and steel,
lasers and shiny buttons to press, neural interfaces, nanotechnology,
or whatever great groaning steam engine has a place in your heart, you
need to stop writing a science fiction novel with yourself as the main
character, and ask yourself who you want to be. 

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] self organization

2008-09-16 Thread Vladimir Nesov
Well, I didn't write in this thread about Friendliness (apart from the
last two sentences of the last message, which present a hypothetical so
impossible I had no right to draw it, really). It is bad terminology
to call evolution intelligence; it is a completely different
optimization process, even if it can eventually lead to intelligence. I
didn't merely imply that self-organization is far from being a key
to Friendliness (which is kind of obvious), but that it is far from being
a key to intelligence in general. If you are trying to create an open-ended
evolutionary process, it might be an important stepping stone, but again
for reasons different from its aesthetic properties. At least with
evolution, you have known regularities to start from.

Don't approach debate as combat; don't fight the arguments. Improve
on them, and help your opponent destroy your own mistakes.


On Tue, Sep 16, 2008 at 5:05 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Vlad,

 At this point, we ought to acknowledge that we just have different 
 approaches. You're trying to hit a very small target accurately and 
 precisely. I'm not. It's not important to me the precise details of how a 
 self-organizing system would actually self-organize, what form that would 
 take or what goals would emerge for that system beyond 
 persistence/replication. We've already gone over the Friendliness debate so I 
 won't go any further with that here.

 My approach is to try and recreate the processes that led to the emergence of 
 life, and of intelligence. I see life and intelligence as strongly 
 interrelated, yet I don't see either as dependent on our particular 
 biological substrate. Life I define as a self-organizing process that does 
 work (expends energy) to maintain its self-organization (which is to say it 
 maintains a boundary between itself and the environment, in which the entropy 
 inside is lower than the entropy outside). Life at the simplest possible 
 level is therefore a kind of hard-coded intelligence. My hunch is that 
 anything sufficiently advanced to be considered generally intelligent needs 
 to be alive in the above sense. But suffice it to say, pursuit of AGI is not 
 in my short term plans.

 Just as an aside, because sometimes this feels combative, or overly 
 defensive: I have not come on to this list to try and persuade anyone to 
 adopt my approach, or to dissuade others from theirs. Rather, I came here to 
 gather feedback and criticism of my thoughts, to defend them when challenged, 
 and to change my mind when it seems like my current ideas are inadequate. And 
 of course, to provide the same kind of feedback for others when I have 
 something to contribute. In that spirit, I'm grateful for your feedback. I'm 
 also very curious to see the results of your approach, and those of others 
 here... I may be critical of what you're trying to do, but that doesn't mean 
 I think you shouldn't do it (in most cases anyway :-] ).


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] self organization

2008-09-15 Thread Vladimir Nesov
On Mon, Sep 15, 2008 at 7:23 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Hi all,

 Came across this article called Pencils and Politics. Though a bit of a
 tangent, it's the clearest explanation of self-organization in economics I've
 encountered.

 http://www.newsweek.com/id/158752

 I send this along because it's a great example of how systems that
 self-organize can result in structures and dynamics that are more complex
 and efficient than anything we can purposefully design. The applicability to
 the realm of designed intelligence is obvious.


We do design these systems. Even if there is no top manager of the
design and production process, even if nobody holds the whole process
in one mind, it is the result of the optimization pressure applied by
individual people. I don't see how the ability to create economically
driven processes fundamentally differs from complicated engineering
projects like putting a man on the moon or building a Boeing. People can
organize processes more powerful than individual humans, and there are
limitations to how far you can improve individual performance, whereas
an organization can run things in parallel and iterate over accumulated
results.

Applicability to designing intelligence is nontrivial (do you mean the
process of designing an intelligence, or the workings of the designed
intelligence?). We don't have a reliable process that takes us closer
and closer to having an AI design, apart from ordinary science. You can't
create an intelligent process by using known economic processes on
stupid agents -- it might be a good research direction to ask which
economy-like processes result in powerful optimization, but it's
not as if currently known economic processes obviously lead to that
(maybe they obviously don't, to someone more familiar with the field).
You can't take an algorithm currently fueled by intelligence (the human
economy), take the intelligence out of it, and hope that enough traces
of intelligence essence will be left to do the work regardless.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] self organization

2008-09-15 Thread Vladimir Nesov
I guess that, intuitively, the argument goes like this:
1) economy is more powerful than individual agents; it allows the
power of intelligence in individual agents to be increased;
2) therefore, economy has an intelligence-increasing potency;
3) so, we can take stupid agents, apply the economy potion to them, and
get powerful intelligence as a result.

But it's easy to see how this kind of argument may be invalid. Adding
gasoline to a fire makes the fire stronger, more fiery; therefore it
contains fire-potency; therefore applying a sufficient amount of
gasoline to water, which is originally much less fiery, will create as
strong a fire as necessary. Doesn't work? You didn't add enough
gasoline, is all.

When you consider a system as complex as the human economy, you can't
just take one aspect apart from all the others and declare it the
essence of the process. There are too many alternatives; you can't win
this lottery blindfolded. Some small number of aspects may in fact be
the essence, but you can't find those elements before you have factored
out the other functional parts of the process and shown that your model
works without them. You can't ignore the spark, this *obviously*
insignificant tiny fluke in the blazing river of fire, and accept only
the gasoline into your model. Why are you *obviously* allowed to
ignore human intelligence, the most powerful force in the known
universe, in your model of what makes the human economy intelligent? This
argument is void; it must not move you; you must not rationalize your
thinking with it. If you are to know the conclusion to be valid, there
needs to be a valid force to convince you.

Now, consider evolution. Evolution is understood, and technically so.
It has *no* mind. It has no agents, no goals, no desires. It doesn't
think up its designs; it is a regularity in the way designs develop, a
property of physics that explains why complicated functional systems
such as the eye are *likely* to develop. Its efficiency comes from
incremental improvement and massively parallel exploration. It is a
society of simple mechanisms, with no purposeful design. The
evolutionary process is woven from the threads of individual
replicators, an algorithm steadily converting these threads into the
new forms. This process is blind to the structure of the threads, it
sees not beauty or suffering, speed or strength, it remains the same
irrespective of the vehicles weaving the evolutionary regularity,
unless the rules of the game fundamentally change. It doesn't matter
for evolution whether a rat is smarter than the butterfly.
Intelligence is irrelevant for evolution, you can safely take it out
of the picture as just another aspect of phenotype contributing to the
rates of propagation of the genes.
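The regularity itself can be written down in a dozen lines -- a caricature,
of course, but it shows how little mind the process needs (my own sketch,
with an arbitrary made-up fitness criterion standing in for rates of
propagation):

import random

def fitness(genome):
    return sum(genome)                 # arbitrary criterion, no goals implied

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(100)] for _ in range(200)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:100]       # differential replication, nothing more
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]

print(max(fitness(g) for g in population))   # climbs toward 100 with no design step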

What about economy? Is it able to ignore intelligence the way evolution
does? Can you invent a dinosaur in a billion years with it, or is it
faster? Why? Does it invent a dinosaur or a pencil? If the theory of
economics doesn't give you a technical answer -- not a description
that fits human society, but a separate, self-contained algorithm
that has the required property -- who is to say that the theory found the
target? You know that the password to the safe is more than zero but
less than a million, and you have an experimentally confirmed theory
that it's also less than 500 thousand. This theory doesn't allow you
to find the key, even if it correctly describes the properties of the
key. You can't throw away the search for the key; you've merely made the
first step, and 19 more remain. You made impressive progress, you were
able to show that 500 thousand keys are incorrect! This is a big discovery,
therefore this first bit of information must be really important.
Nope.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] self organization

2008-09-15 Thread Vladimir Nesov
On Tue, Sep 16, 2008 at 2:50 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Once again, I'm not saying that modeling an economy is all that's necessary
 to explain intelligence. I'm not even saying it's a necessary condition of 
 it. What
 I am saying is that it looks very likely that the brain/mind is 
 self-organized, and
 for those of us looking to biological intelligence for inspiration, this may 
 be important.

Fair enough, but what makes this aesthetic property of
self-organizing processes so important in the design of an
optimization process? You can't win this lottery by relying on intuition
pointing to the right bet 50 times in a row; you need to check and
recheck the validity of nearly every one of your design decisions. It
might be a good hunch for the next step, but unless you succeed at
cashing it out as a step that helps, the hunch doesn't count for much.


 There are a class of highly complex, unstable (in the thermodynamic sense)
 systems that self-organize in such a way as to most efficiently dissipate the
 imbalances inherent in the environment (hurricanes, tornadoes, watersheds, 
 life
 itself, the economy).  And, perhaps, the brain/mind is such a system. If so,
 that description is obviously not enough to guess the password to the safe.
 But that doesn't mean that self-organization has no value at all. The value 
 of it
 is to show that efficient design can emerge spontaneously, and perhaps we
 can take advantage of that.

If this property is common to brains and hurricanes, why is it
more relevant than the property of, say, being made out of living
cells (which at least excludes hurricanes)? I'm not asserting
that it's an irrelevant property, and I'm not inverting your assertion,
but to assert either way you need a valid reason.


 By your argumentation, it would seem you won't find any argument about
 intelligence of worth unless it explains everything. I've never understood the
 strong resistance of many in the AI community to the concepts involved with
 complexity theory, particularly as applied to intelligence. It would seem to 
 me
 to be a promising frontier for exploration and gathering insight.

I will find an argument worth something if it explains something, or
if it serves as a tool that I expect to be useful in explaining
something down the road. As for complexity, it looks like a
consequence of efficient coding of representation in my current model,
so it's not out of the question, but in my book it is a side effect
rather than a guiding principle. It is hard to decipher an intelligent
algorithm in motion, even if it can be initiated to have known
consequences. Just as you may be unable to guess the individual moves
of a grandmaster or a chess computer, but can guess the outcome (you
lose), you may be completely at a loss measuring the firing rates of
the transistors of a CPU running an optimized chess program and trying to
bridge the gap to its design, goals and thoughts (which will turn out
to be anthropomorphic concepts not describing the studied phenomenon).
But you can write the program initially and know in advance that it
will win, or you can observe it in motion and notice that it wins.
Neither helps you bridge the gap between the reliable outcome and the
low-level dynamics, but there is in fact a tractable connection, hidden
in the causal history of the development of the optimization process,
where you did design the low-level dynamics to implement the goal. If a
self-organizing process hits a narrow target without guidance, it
doesn't hit *your* target, it hits an arbitrary target. The universe
turns to cold iron before you win this lottery blindly.


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-09-04 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:02 PM, Valentina Poletti [EMAIL PROTECTED] wrote:

 Vlad, this was my point in the control e-mail, I didn't express it quite as
 clearly, partly because coming from a different background I use a slightly
 different language.

 Also, Steve made another good point here: loads of people at any moment do
 whatever they can to block the advancement and progress of human beings as
 it is now. How will those people react to a progress as advanced as AGI?
 That's why I keep stressing the social factor in intelligence as very
 important part to consider.


No, it's not important, unless these people start to pose a serious
threat to the project. You need to care about what the correct
answer is, not what the popular one is, in a case where the popular
answer is dictated by ignorance.

P.S. AGI? I'm again not sure what we are talking about here.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 12:46 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Hi Vlad,

 Thanks for the response. It seems that you're advocating an incremental
 approach *towards* FAI, the ultimate goal being full attainment of 
 Friendliness...
 something you express as fraught with difficulty but not insurmountable.
 As you know, I disagree that it is attainable, because it is not possible in
 principle to know whether something that considers itself Friendly actually
 is. You have to break a few eggs to make an omelet, as the saying goes,
 and Friendliness depends on whether you're the egg or the cook.


Sorry Terren, I don't understand what you are trying to say in the
last two sentences. What does considering itself Friendly mean, and
how does it figure into FAI, as you use the phrase? What kind of
experiment or arbitrary decision (I assume) are you talking about?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] What is Friendly AI?

2008-09-03 Thread Vladimir Nesov
On Thu, Sep 4, 2008 at 1:34 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 I'm asserting that if you had an FAI in the sense you've described, it 
 wouldn't
 be possible in principle to distinguish it with 100% confidence from a rogue 
 AI.
 There's no Turing Test for Friendliness.


You design it to be Friendly, you don't generate an arbitrary AI and
then test it. The latter, if not outright fatal, might indeed prove
impossible as you suggest, which is why there is little to be gained
from AI-boxes.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Vladimir Nesov
On Thu, Aug 28, 2008 at 8:29 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Vladimir Nesov [EMAIL PROTECTED] wrote:

 AGI doesn't do anything with the question, you do. You
 answer the
 question by implementing Friendly AI. FAI is the answer to
 the
 question.

 The question is: how could one specify Friendliness in such a way that an
 AI will be guaranteed-Friendly? Is your answer to that really just you build
 a Friendly AI?  Why do I feel like a dog chasing my own tail?

You start with what is right? and end with Friendly AI; you don't
start with Friendly AI and close the circular argument. This doesn't
answer the question, but it defines Friendliness and thus Friendly AI
(in terms of right).


 I've been saying that Friendliness is impossible to implement because
 1) it's a moving target (as in, changes through time),

All things change through time, which doesn't make them cease to exist.


 since 2) its definition
 is dependent on context (situational context, cultural context, etc).  In 
 other
 words, Friendliness is not something that can be hardwired. It can't be
 formalized, coded, designed, implemented, or proved. It is an invention of
 the collective psychology of humankind, and every bit as fuzzy as that
 sounds. At best, it can be approximated.

Definition is part of the context. Your actions depend on the context,
are determined by the context, and determine the outcome. You can't use
this as a generally valid argument. If in situation A pressing button 1 is
the right thing to do, and in situation B pressing button 2 is the right
thing to do, does that make the procedure for choosing the right button
fuzzy, undefinable and impossible to implement? How do you
know which button to press? Every decision needs to come from
somewhere; there are no causal miracles. Maybe it complicates the
procedure a little, making the decision procedure conditional -- if(A)
press 1, else press 2 -- or maybe it complicates it much more, but it
doesn't make the challenge ill-defined.
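Spelled out as code, the point is almost embarrassingly simple (placeholder
names, obviously not a proposal for anything):

def right_button(situation):
    # context-dependent, yet perfectly well-defined
    if situation == "A":
        return 1
    if situation == "B":
        return 2
    raise NotImplementedError("harder cases need a richer procedure, not no procedure")

print(right_button("A"), right_button("B"))   # 1 2

Context-dependence complicates the procedure; it doesn't make it fuzzy.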


  If you can't guarantee Friendliness, then
 self-modifying approaches to
  AGI should just be abandoned. Do we agree on that?

 More or less, but keeping in mind that
 guarantee doesn't need to be
 a formal proof of absolute certainty. If you can't show
 that a design
 implements Friendliness, you shouldn't implement it.

 What does guarantee mean if not absolute certainty?


There is no absolute certainty (see
http://www.overcomingbias.com/2008/01/infinite-certai.html ). When you
normally say I guarantee that I'll deliver X, you don't mean to
imply that it's impossible for you to die in a car accident in the
meantime; you just can't provide, and by extension don't care about,
that kind of distinction. Yet you don't say that if you can't provide
a *mathematical proof* of your delivering X (including a mathematical
proof that there will be no fatal car accidents), you should abandon
any attempt to implement X and do Y instead, and just hope that X
will emerge from big enough chaotic computers or whatever.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Vladimir Nesov
On Thu, Aug 28, 2008 at 9:08 PM, Terren Suydam [EMAIL PROTECTED] wrote:
 --- On Wed, 8/27/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 One of the main motivations for the fast development of
 Friendly AI is
 that it can be allowed to develop superintelligence to
 police the
 human space from global catastrophes like Unfriendly AI,
 which
 includes as a special case a hacked design of Friendly AI
 made
 Unfriendly.

 That is certainly the most compelling reason to do this kind of research.
 And I wish I had something more than disallow self-modifying approaches,
 as if that would be enforcible. But I just don't see Friendliness as 
 attainable,
 in principle, so I think we treat this like nuclear weaponry - we do our best 
 to
 prevent it.

Won't work; Moore's law is ticking, and one day a morally arbitrary
self-improving optimization process will go FOOM. We have to try.


 If we can understand it and know that it does what we want,
 we don't
 need to limit its power, because it becomes our power.

 Whose power?  Who is referred to by our?  More importantly, whose
 agenda is served by this power? Power corrupts. One culture's good is
 another's evil. What we call Friendly, our political enemies might call
 Unfriendly. If you think no agenda would be served, you're naive. And if
 you think the AGI would somehow know to not serve its masters in service
 to Friendliness to humanity, then you believe in an objective morality...
 in a universally compelling argument.

Given the psychological unity of humankind, giving the focus of
right to George W. Bush personally would be enormously better for
everyone than going in any direction assumed by an AI without the part of
the Friendliness structure that makes it absorb its goals from humanity.
CEV is an attempt to describe how to focus the AI on humanity as a whole,
rather than on a specific human.


 With simulated
 intelligence, understanding might prove as difficult as in
 neuroscience, studying resulting design that is unstable
 and thus in
 long term Unfriendly. Hacking it to a point of Friendliness
 would be
 equivalent to solving the original question of
 Friendliness,
 understanding what you want, and would in fact involve
 something close
 to hands-on design, so it's unclear how much help
 experiments can
 provide in this regard relative to default approach.

 Agreed, although I would not advocate hacking Friendliness. I'd advocate
 limiting the simulated environment in which the agent exists. The point of
 this line of reasoning is to avoid the Singularity, period. Perhaps that's 
 every
 bit as unrealistic as I believe Friendliness to be.

And you are assembling the H-bomb (err, evolved intelligence) in the
garage just out of curiosity, and occasionally to use it as a tea
table, all the while advocating global disarmament.


 It's self-improvement, not self-retardation. If
 modification is
 expected to make you unstable and crazy, don't do that
 modification,
 add some redundancy instead and think again.

 The question is whether its possible to know in advance that an modification
 won't be unstable, within the finite computational resources available to an 
 AGI.

If you write something redundantly 10^6 times, it won't all just
spontaneously *change*, in the lifetime of the universe. In the worst
case, it'll all just be destroyed by some catastrophe or another, but
it won't change in any interesting way.
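A back-of-the-envelope version of that claim (my numbers, chosen to be
generous): with 10^6 independent copies and a per-copy corruption
probability as high as 10^-6, a Chernoff bound on "at least half the copies
are corrupted" gives

import math

n, p, q = 10**6, 1e-6, 0.5
kl = q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))   # KL(1/2 || p)
print(f"P(majority corrupted) <= 10^{-n * kl / math.log(10):.0f}")

which is on the order of 10^-2700000 -- nothing remotely like that ever
happens by accident.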


 With the kind of recursive scenarios we're talking about, simulation is the 
 only
 way to guarantee that a modification is an improvement, and an AGI simulating
 its own modified operation requires exponentially increasing resources, 
 particularly
 as it simulates itself simulating itself simulating itself, and so on for N 
 future
 modifications.

Again, you are imagining an impossible or faulty strategy, pointing to
this image, and saying "don't do that!". That doesn't mean there is no
good strategy.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




[agi] What is Friendly AI?

2008-08-30 Thread Vladimir Nesov
On Sat, Aug 30, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:
 --- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 You start with "what is right?" and end with Friendly AI, you don't
 start with Friendly AI and close the circular argument. This doesn't
 answer the question, but it defines "Friendly AI" and thus Friendly AI
 (in terms of "right").

 In your view, then, the AI never answers the question "What is right?".
 The question has already been answered in terms of the algorithmic process
 that determines its subgoals in terms of Friendliness.

There is the symbolic string "what is right?" and there is what it
refers to, the thing that we are trying to instantiate in the world.
The whole process of answering the question is the meaning of life; it
is what we want to do for the rest of eternity (it is roughly a
definition of "right", rather than an over-the-top extrapolation from
it). It is an immensely huge object, and we know very little about it,
just as we know very little about the shape of the Mandelbrot set from
the formula that defines it, even though the set unfolds entirely from
that little formula. What's worse, we don't know how to safely
establish the dynamics for answering this question: we don't know the
formula, we only know the symbolic string "formula", to which we
assign some fuzzy meaning.
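
To make the Mandelbrot analogy concrete: the defining formula really is tiny
(iterate z -> z*z + c and ask whether the orbit of 0 stays bounded), yet the
object that unfolds from it is inexhaustibly rich. A minimal membership test,
with an arbitrarily chosen iteration cap, is just a few lines:

    def in_mandelbrot(c, max_iter=100):
        """Crude test: does the orbit of 0 under z -> z*z + c stay bounded?"""
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:     # the orbit provably escapes to infinity
                return False
        return True            # did not escape within max_iter iterations

    print(in_mandelbrot(-1 + 0j))   # True: -1 is in the set
    print(in_mandelbrot(1 + 0j))    # False: the orbit 0, 1, 2, 5, ... escapes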

There is no final answer, and no formal question, so I use
question-answer pairs to describe the dynamics of the process, which
flows from question to answer; the answer becomes the next question,
which then leads to the next answer, and so on.

With Friendly AI, the process begins with the question a human asks
himself, "what is right?". From this question follows a technical
solution, the initial dynamics of Friendly AI, which is a device for
making the next step: initiating the transfer of the dynamics of
"right" from humans into a more reliable and powerful form. In this
sense, Friendly AI answers the question of "right" by being the next
step in the process. But initial FAI doesn't embody the whole
dynamics; it only references it in the humans and learns to gradually
transfer it, to embody it. Initial FAI doesn't contain the content of
"right", only the structure needed to absorb it from humans.

Of course, this is a simplification; there are all kinds of
difficulties. For example, the whole endeavor needs to be safeguarded
against mistakes made along the way, including mistakes made before
the idea of implementing FAI appeared, mistakes in the everyday design
work that went into FAI, mistakes in the initial stages of training,
and mistakes in moral decisions about what "right" means. Initial FAI,
when it grows up sufficiently, needs to be able to look back and see
why it turned out the way it did: was it because it was intended to
have a property X, or because of some arbitrary coincidence? Was
property X intended for valid reasons, or because programmer Z was in
a bad mood that morning? And so on. Unfortunately, there is no
objective morality, so FAI needs to be made good enough from the start
to eventually be able to recognize what is valid and what is not,
reflectively looking back at its origin, with all the depth of factual
information and optimization power needed to run whatever factual
queries it requires.

I (vainly) hope this answered (at least some of the) other questions as well.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Vladimir Nesov
On Sat, Aug 30, 2008 at 9:18 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 --- On Sat, 8/30/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Given the psychological unity of humankind, giving the focus of
 "right" to George W. Bush personally would be enormously better for
 everyone than letting an AI go in whatever direction it would assume
 without the part of the Friendliness structure that makes it absorb
 its goals from humanity. CEV is an attempt to describe how to focus AI
 on humanity as a whole, rather than on a specific human.

 Psychological unity of humankind?!  What of suicide bombers and biological
 weapons and all the other charming ways we humans have of killing one another?
 If giving an FAI to George Bush, or Barack Obama, or any other political
 leader, is your idea of Friendliness, then I have to wonder about your grasp
 of human nature. It is impossible to see how that technology would not be
 used as a weapon.

(Assuming you read my reply in the "What is Friendly AI?" post.)

Did you read the part about al-Qaeda programmers in CEV? The design of
the initial dynamics of FAI needs to be good enough to be bootstrapped
from a rather deviant group of people and still turn out right. You
don't need to include the best achievements of the last thousands of
years in the core dynamics that will define us for the next billions
of years. Those achievements are factual information, and they won't
be lost anyway. The only thing you need to get right the first time is
the reference to the "right" concept, which will be able to unfold
from there, and I believe there is little to add to these core
dynamics by specifying a particular human with particular qualities or
knowledge. It exists at the panhuman level, in evolutionarily
programmed complexity that is pretty much the same in every one of us.
What *is* important is getting the initial dynamics right, which might
require much knowledge and understanding of the bootstrapping process
and of the concept of "right" itself.


  The question is whether it's possible to know in advance that a
  modification won't be unstable, within the finite computational
  resources available to an AGI.

 If you write something redundantly 10^6 times, it won't all just
 spontaneously *change*, in the lifetime of the universe. In the worst
 case, it'll all just be destroyed by some catastrophe or another, but
 it won't change in any interesting way.

 You lost me there - not sure how that relates to whether it's possible to
 know in advance that a modification won't be unstable, within the finite
 computational resources available to an AGI.

You may be unable to know whether an alien artifact X will explode in
the next billion years, but you can build your own artifact that
pretty definitely won't.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

