[agi] Inching closer on the Singularity Clock.

2010-10-19 Thread A. T. Murray
Greetings to all Singularitarians. 
The Singularity, an event brought to you 
free-of-charge and open-source by 
Project Mentifex (mindmaker), has today 
updated the free open-source AI Mind in 
JavaScript for Microsoft Internet Explorer at

http://www.scn.org/~mentifex/AiMind.html

where the input box now invites users to 
 Enter subject + verb + object; 
 query knowledge base with subject + verb + [ENTER].

and the Tutorial display mode shows you 
what the AI Mind is thinking.

http://www.scn.org/~mentifex/mindforth.txt 
was updated in similar fashion yesterday, 
but MindForth cannot be run by clicking 
on a single link (as AiMind.html can), so 
here is a sample interaction with MindForth:


First we type in five statements.
 tom writes jokes
 ben writes books
 jerry writes rants
 ben writes articles
 will writes poems

We then query the AI in Tutorial mode with the input 
 ben writes [ENTER] 
and the AI Mind shows us how it thinks about the query:

VerbAct calls SpreadAct with activation 80 for Psi #0
VerbAct calls SpreadAct with activation 76 for Psi #117 POEMS
VerbAct calls SpreadAct with activation 76 for Psi #117 POEMS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 80 for Psi #58 BE
VerbAct calls SpreadAct with activation 76 for Psi #115 RANTS
VerbAct calls SpreadAct with activation 76 for Psi #115 RANTS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 76 for Psi #111 JOKES
VerbAct calls SpreadAct with activation 76 for Psi #111 JOKES

Robot:  BEN  WRITES BOOKS

The AI selects a valid answer to the query by 
combining the activation on BEN and WRITES so as 
to spread a _cumulative_ activation to the word BOOKS. 
Other potential answers are not sufficiently activated, 
because they belong to other subjects of WRITES.
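
How this cumulative selection works can be shown with a toy model 
(a hypothetical Python sketch, not the actual Forth or JavaScript 
code; the activation values are invented):

facts = [
    ("tom", "writes", "jokes"),
    ("ben", "writes", "books"),
    ("jerry", "writes", "rants"),
    ("ben", "writes", "articles"),
    ("will", "writes", "poems"),
]

def answer(subject, verb):
    # Each matching query word spreads activation onto a fact's
    # object; matches on both words accumulate.
    activation = {}
    for subj, vrb, obj in facts:
        if subj == subject:
            activation[obj] = activation.get(obj, 0) + 80
        if vrb == verb:
            activation[obj] = activation.get(obj, 0) + 76
    best = max(activation, key=activation.get)
    return subject.upper() + " " + verb.upper() + " " + best.upper()

print(answer("ben", "writes"))  # BEN WRITES BOOKS

Only BOOKS and ARTICLES receive activation from both query words; 
in this toy the tie goes to the fact entered first, matching the 
reply shown above.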

In Singularity solidarity,

Arthur
-- 
http://AiMind-i.com
http://cyborg.blogspot.com
http://code.google.com/p/mindforth
http://www.scn.org/~mentifex/AiMind.html




Re: Re: [agi] What would it take to move rapidly toward advanced AGI?

2010-10-18 Thread Andrii Zvorygin
On Sat, Oct 16, 2010 at 5:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 However, programming
  languages are fundamentally different from natural language in that
 (1) they have a precise grammar and semantics,

That's an advantage, 
since you can precisely define what you wish to express.


 Only when talking to computers. Humans don't learn language that way. Humans
 learn incrementally yet are still able to use partially learned languages.

Ya, well, I don't see any difference.
Computers also learn incrementally:
they start with the BIOS, then the kernel, then system services...
The same goes for compilers:
they have a base vocabulary (assembly)
that's extended by the standard library,
which can be further extended by third-party libraries.

Humans also learn the meaning of words
through a complex set of sensory cues
associated with each word,
just as a computer can learn the meaning of a word
by reading in a library or dictionary what it means.

Partially learned languages can also be used,
since functional correlations between one word and another can be identified.

 and (3) the complexity is on the order of 10^5 to 10^6 bits vs. 10^9 bits
 for natural language.

Again, it's an advantage,
as it would require fewer resources.

 You can't express much with such a small language.


? I'd say it's difficult to express things precisely with the gooey
inconsistency found in natural language,
simply because people don't share a common dictionary,
or many people don't use dictionaries at all.

It's far easier to express yourself precisely,
when there are standard words,
and a consistent grammar.

Another advantage of HSPL
is that it's easy to learn,
due to its simplicity.

So it could be used as an intermediary language,
especially for international negotiations,
where clarity and precision
may be valued.




[agi] Re: [GI] Digest for general-intellige...@googlegroups.com - 10 Messages in 2 Topics

2010-10-18 Thread Linas Vepstas
On 17 October 2010 18:20, Ben Goertzel b...@goertzel.org wrote:
 In other words, using formal grammar actually makes it harder to establish
 the connection at the NL-logic interface. I.e., it is harder to translate NL
 sentences to formal grammar than to formal logic.

 KY

 Quite the opposite, actually.

 Translating an NL sentence to a *set* of grammatical trees,
 representing syntactically possible parses, is almost a solved
 problem.  E.g. the Stanford parser or the link parser do that.

 Then, translating each of these grammatical trees into a *set* of
 formal logic expressions, each representing a possible semantic
 interpretation of the tree, is a partially-solved problem.  E.g.
 OpenCog's RelEx and RelEx2Frame components and Cyc's NL subsystem both
 do that (in different ways), though not perfectly.

 So based on the current state of the art, it seems that turning NL
 into a formal grammar (e.g. a dependency grammar) is significantly
 less problematic than turning NL into logic, because forming the logic
 representation requires resolving additional ambiguity, beyond that
 which must be resolved to form the formal-grammar representation

Agree; but would like to add several remarks:

-- part of the difficulty of applying logic to NL is the need to handle
spatial reasoning (A is next to B and B is next to C, therefore ...? C is
not far from A)

-- part of the difficulty of applying logic to NL is the need to handle
more abstract reasoning (A is the mayor of B and mayors are people,
therefore A is a person -- see the toy sketch after these remarks)
(OpenCyc does this ... not badly)

-- Some philosophers of mathematics, e.g. Carlo Cellucci (see "18
Unconventional Essays on the Nature of Mathematics"), will stridently
point out that, while classical logic is the format in which proofs are
stated, it is not at all the method by which mathematicians generate
new ideas -- they use reasoning by analogy, by allegory, by induction,
and many others, to generate hypotheses which might be possible
solutions to problems.
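
As a toy illustration of the abstract-reasoning step in the second 
remark, here is a hypothetical Python sketch (not OpenCyc's actual 
machinery) of forward-chaining a single rule over a single fact:

# Toy forward chaining: mayor_of(A, B) plus the rule "mayors are
# people" yields person(A). Fact and rule are invented examples.
facts = {("mayor_of", "Alice", "Springfield")}

def forward_chain(facts):
    new = set()
    for fact in facts:
        if fact[0] == "mayor_of":       # rule: mayors are people
            new.add(("person", fact[1]))
    return facts | new

facts = forward_chain(facts)
print(("person", "Alice") in facts)     # True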

I think that we should realize that the same techniques should be
applied in AGI: we use reasoning by analogy not because it gives
formally correct answers, but because it generates reasonable
hypotheses which may or may not be true, but which can be
examined in greater detail to see if they are true.  These other,
non-rigorous reasoning methods are all parts of what we might
call intuition -- a set of hard-to-explain reasons why we think
something might be true -- which must then be subjected to more
rigorous analysis to see if yet more evidence can be found.

In short, real life, just like mathematics, is all about problem-solving
and not theorem-proving (which is the last step of creating math,
not the first).

--linas




[agi] Re: [GI] Digest for general-intellige...@googlegroups.com - 10 Messages in 2 Topics

2010-10-18 Thread Abram Demski
Linas,

It seems to me that analogy falls rather simply out of relational
probabilistic reasoning. Say we want to make an analogy between two entities
A and B. We essentially look for predicates that hold for both A and B; i.e.,
we look for a way to fill in the blank in "A is like B, because _". Then, if
we want to predict something about B, we know A belongs to the same
reference class and can provide one piece of evidence concerning the
behavior of entities in that reference class.
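
A minimal sketch of that idea (hypothetical Python; plain predicate 
sets stand in for the relational probabilistic machinery):

# Hypothetical sketch: analogy as shared predicates. Find what
# holds for both A and B ("A is like B, because _"), then let A's
# extra known properties serve as evidence about B.
preds = {
    "A": {"swims", "breathes_air", "social", "gives_live_birth"},
    "B": {"swims", "breathes_air", "social"},
}

shared = preds["A"] & preds["B"]
print("A is like B, because both:", shared)

# A falls in the reference class of things sharing these
# predicates, so each of its extra properties is one observation
# bearing on B.
for p in preds["A"] - preds["B"]:
    print("one piece of evidence that B:", p)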

--A

On Mon, Oct 18, 2010 at 5:10 PM, Linas Vepstas linasveps...@gmail.com wrote:

 Agree; but would like to add several remarks:
...



-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic





[agi] Where study to help the singularity

2010-09-25 Thread dlugie_konto
Hello, 
This mail is mostly for Ben Goertzel, because I can't reach him by email. But 
anybody can answer it and give their opinion.

My name is Mikolaj Jaroszewicz and I'm a student of mathematics at Warsaw
University. I will be entering my third year 
and obtaining my bachelor's degree. I truly admire your work (Ben Goertzel's work). 
I have read almost all of
your books and now I'm reading
the books referenced in The Hidden Pattern. Recently I read your
Cosmist Manifesto and watched your talk about it.
   
I'm very interested in how the mind works and how to build one. I would
like to know how the brain works and then implement smart algorithms based
on that knowledge. I'm also interested in AGI theory and the approach you
have taken to build a mind. Here's my question: given all your knowledge,
could you please give advice on where and what to study (I'm thinking
now about a master's) to work on these subjects? 

I really appreciate your response and value your opinion highly.
Best regards, 
Mikolaj Jaroszewicz




[agi] MindForth Programming Journal (MFPJ) 2010 September 24

2010-09-24 Thread A. T. Murray
Fri.24.SEP.2010 -- Clamping Down on Stray Activations 

Yesterday we made sure to upload our 21sep10A.F MindForth AI 
code so that we could start fresh today with 24sep10A.F code. 
In the previous code we made some progress in the answering 
of "what are you" queries, but we noticed that the AI was not 
responding properly to "what am i" queries. There is probably 
some very simple hang-up in one of the pertinent mind-modules, 
so today we would like to hunt down the offending bug. 

Aw, gee, the AI is actually losing track of the predicate 
nominatives that go with the "I" self-concept and the "you" 
concept of the non-self other. In the case of MindForth AI, 
the relationships between the "I" concept and a predicate 
nominative (such as the very name Andru by which the AI 
is known) are external to the "I" concept itself and are 
immanent in the assertions made about self and in the 
self-knowledge of the AI. We count on the associative 
mechanisms of the AI to keep the identity relationships 
straight. Accordingly we need to troubleshoot the associative 
mechanisms. 

We may have to start getting away from the idea that the 
final concept mentioned in an output sentence should retain 
a high activation. If we want the AI to be able to answer 
abrupt queries, such as "What are you?" followed immediately 
by "What am I?", we may want to psi-damp the final concept 
in each utterance so as to prevent interference with the 
generation of a new thought. 

We were typing in "you are software" and getting as a reply, 
"SOFTWARE IS A SOFTWARE". When we typed in "what am i", 
we still got "SOFTWARE IS A SOFTWARE", because the 
56=YOU concept was not high enough in activation to start 
a new sentence. So we went into the ReActivate module and 
added some code to enhance the 56=YOU concept. 

   I  0 psi{ @  50 = IF \ emphasize I; 23sep2010
32 I  1 psi{ !  \ arbitrary from AI coder; 23sep2010
   THEN   \ test; 23sep2010

   I  0 psi{ @  56 = IF \ emphasize YOU; 24sep2010
32 I  1 psi{ !  \ arbitrary from AI coder; 24sep2010
   THEN   \ test; 24sep2010

However, we then got YOU ARE A SOFTWARE as a reply, 
apparently because the you are magic tidbit in the KB 
could not override the residual activation on SOFTWARE. 

We obtained at least a partial solution by psi-damping 
the old subject-psi SOFTWARE at the start of the 
NounPhrase module by inserting the following code. 


subjpsi @ urpsi !  \ test; 24sep2010
CR ." NPhr. calls PsiDamp for urpsi " urpsi @ . \ 24sep2010
PsiDamp ( for sake of SubConscious 26nov2009 )
0 urpsi !  \ reset for safety; 24sep2010

We had noticed that the subjpsi SOFTWARE from the previous 
sentence was still the subject-psi at the start of a new sentence, 
so it seemed convenient to psi-damp the old subject as a way 
of keeping it from interfering in a new thought. It worked, 
and we obtained the following initial conversation. 


Transcript of AI Mind interview at
6 1 12 o'clock on 24 September 2010.

Human: you are software
Robot:  SOFTWARE  IS A  SOFTWARE

Human: what am i
Robot:  YOU  ARE  MAGIC

Further attempts at conversation did not work perfectly, 
but we could tell that we were on the right track, 
because the concepts that we were looking for were tending 
to surface eventually, even if other concepts interfered 
for a brief period. We are making progress. 




[agi] Re: [singularity] MindForth Programming Journal (MFPJ) 2010 September 24

2010-09-24 Thread Chuck Esterbrook
On Fri, Sep 24, 2010 at 7:05 AM, A. T. Murray menti...@scn.org wrote:
 Fri.24.SEP.2010 -- Clamping Down on Stray Activations

 Yesterday we made sure to upload our 21sep10A.F MindForth AI
 code so that we could start fresh today with 24sep10A.F code.
 In the previous code we made some progress in the answering
 of "what are you" queries, but we noticed that the AI was not
 responding properly to "what am i" queries. There is probably
 some very simple hang-up in one of the pertinent mind-modules,
 so today we would like to hunt down the offending bug.
...

I think it's time to move these ongoing posts to a MindForth Google
group (or Yahoo group or whatever you like) and discontinue them here.
Maybe you could post once in a great while when you think you have a
breakthrough and you want to remind people of the existence of the
other group.

OpenCog and Genifer have their own mailing lists for example.
MindForth should do the same.

-- 
http://charles-esterbrook.com/




[agi] Mother of all Singularities

2010-09-22 Thread A. T. Murray
MindForth Programming Journal (MFPJ)

Wed.22.SEP.2010 -- Solving the Missing seq 

Yesterday we solved the problem of the missing seq tags 
rather quickly, when we noticed that each time point with 
a missing seq was just outside the search-range of ten 
time-points as specified in the InStantiate mind-module. 
When we increased the search-range by one time-point, 
from ten to eleven as seen in the code below, the problem 
disappeared. 


lackseq @ 1 = IF  \ if set one loop ago
\ t @ 10 -  t @ 4 -  DO  \ go back about ten engrams
\ t @ 11 -  t @ 4 -  DO  \ go further back; 21sep2010
  t @ 12 -  t @ 4 -  DO  \ go further back; 21sep2010

We added one more time-point to the search-range for the 
sake of safety. We realize that we may eventually need 
to declare something like a limitless search-range, 
which should serve quite well, since the search is 
abandoned after the first successful hit. 
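
In spirit, the widened search does something like the following 
(a hypothetical Python rendering of the Forth DO-loop above; the 
names and exact loop bounds are illustrative):

# Hypothetical backward scan over recent time points: return the
# first engram with a seq tag, newest first, stopping immediately.
def find_seq(engrams, t, back=12, front=4):
    for tp in range(t - front, t - back - 1, -1):
        seq = engrams.get(tp, {}).get("seq")
        if seq:
            return seq   # search abandoned after the first hit
    return None          # seq still missing: widen the range

A limitless range would simply scan all the way back, and the 
first-hit exit would keep even that cheap in practice.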

Wed.22.SEP.2010 -- Mother of all Singularities

Yesterday's missing-seq bug was not showing up in the 
behavior of the AI, but its very presence was alarming 
and unsettling to us Singularitarian AI coders. Now we 
turn our relieved attention to the new bug du jour, 
the problem which we have already Web-published on 
20 September 2010 by posting the following exchange. 

Human: you are software
Robot: SOFTWARE  ARE  THE  SOFTWARE

The response of the robot AI is in violation of our 
long-standing mandate that the introduction of a previously 
unknown noun should cause the AI to ask a WhatIs question 
about the new concept. Apparently the conceptual activations 
are so out of whack that the WhatIs module is not being 
triggered by the input of the noun "software" above. 
We merely note this problem while passing on to a more serious 
problem: the fact that the word SOFTWARE is undergoing 
unwarranted neural inhibition during the clumsy AI response. 
We will ignore, but not fail to notice, the wrong be-verb form. 

As we troubleshoot the weak but world's most powerful AI Mind, 
we have a dozen or more windows open on our screen so that we may call 
up a wide range of helpful files while we are coding off-line -- 
not currently connected to the 'Net. The first window is our 
current MFPJ page, which we are composing by typing into the 
second window. Window seven is a text file of our penultimate 
AI source code, in which we may examine the whole MindForth AI 
program as it stood in its most recent release to the Web, 
while we alternate among running Win32Forth in a twelfth window 
for MS-DOS, viewing the output in a thirteenth window, and now 
and then editing the newest source code in a fourteenth window. 
La forza del destino has placed on our non-Atlantic shoulders 
the task of coding the mother of all Singularities with extreme 
caution and with due diligence. Until it turns out that the 
Daughters of the American Revolution have been coding in secret 
a colossal Forbin-esque AI that will swamp all our puny efforts, 
we operate on the assumption that the future of AI evolution 
will not be in safe hands until so many AI labs are at work 
that we can no longer single-handedly ruin the AI emergence by 
taking a wrong turn into a blind AI alley. Therefore we now 
inspect the code in window seven and we look for a reason why 
our recently published output is unwarrantedly being subjected 
to neural inhibition. 

-- 
http://robots.net/person/AI4U/diary/45.html





[agi] Russel: If you can figure out another way to do it, I'm all ears!

2010-09-22 Thread David Jones
Russel said:
 Oh, I can figure out how to solve most specific problems. From an AGI
 point of view, however, that leaves the question of how those individual
 solutions are going to serve as sources of knowledge for a system, rather
 than separate specific programs. My answer is to build something that can
 reason about code, for which formal logic is a necessary ingredient. If you
 can figure out another way to do it, I'm all ears!

Well, there are at least two problems here: 1) how to gain initial
knowledge, and 2) how to use knowledge to achieve goals once we have it.

1) How to gain initial knowledge

Ah, this is something very cool that I've been working on lately. Pick a
particular example of initial knowledge from the example below and we can
trace how it is learned and how such learning mechanisms can be implemented.
There are many, so I'm not going to try to list them. I thought it would
also be more fun for you all to pick one and surprise me.


Let's start with a simple example of 2) (using knowledge we already have and
learning more): creating a Hello World program

Note that many of the details in how the reasoning is done are left out
because 1) they are yet to be determined in detail and 2) the email is long
enough without them.

Initial Assumptions:
The agent has some initial knowledge about programs, where one might find
information about programming. The agent might have a text book on it. The
agent understands what a hello world program is supposed to do.

So, what are we solving for if the agent has so many initial capabilities?
We're trying to show how the agent reasons about what it already knows to
achieve a goal.

The goal is to create a program that says "hello world". The agent
understands this by reasoning about statements made in a textbook about the
hello world example program.

The agent has to plan its actions to achieve the intention "write a hello
world program". The plan is not a complete step-by-step plan. It just tells
the general direction to go. This is the rough-to-fine heuristic that human
beings often use. From there, the agent does means-ends analysis, searches for and
finds information that might be relevant to the situation at hand, and
reasons about what it has done in the past that has helped achieve parts of
such a goal.
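
One way to picture that loop is as means-ends analysis over a tiny 
action set (a hypothetical Python sketch; the actions and state 
flags are invented for illustration, and the chain is linear):

# Hypothetical means-ends analysis: pick an action whose effects
# reduce the difference between state and goal, then recursively
# satisfy that action's preconditions first.
actions = {
    "open_ide":       ({"at_computer"}, {"ide_open"}),
    "create_project": ({"ide_open"}, {"project_exists"}),
    "write_code":     ({"project_exists"}, {"program_written"}),
}

def plan(state, goal):
    if goal <= state:
        return []
    for name, (pre, effects) in actions.items():
        if effects & (goal - state):
            return plan(state, pre) + [name]
    raise ValueError("no action reduces the difference")

print(plan({"at_computer"}, {"program_written"}))
# ['open_ide', 'create_project', 'write_code']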

The AGI knows that programs can be created through the Visual Studio IDE,
based on reading about programming in C# (the book it has). So, it
realizes that it needs to achieve a subgoal of finding the Visual Studio IDE
to use it. It knows it can do this by getting to the computer and clicking
on the icon that it knows is associated with Visual Studio.
The program comes up. So, then we ask ourselves, "what's the next step?" Our
brain has marked memories associated with creating programs. It has recorded
the fact that we clicked on the file menu to create a new program and that
this was part of the process in achieving the goal. So, our memory pulls up
this fact and executes the action, because we have no reason not to pursue
the action in memory. So, to do this we go to the file menu and click "create
a new project". We also pull in relevant information, which says we have to
do this, that and the other as well if we want to create a program. We pull in
relevant info from what we read in the textbook about what to be careful of
and what has to be done, etc.

What's next? We want to make the program print out "hello world". We recall
that we can do this by using the command Console.WriteLine(), and we
recall that the thing printed out was in between the parentheses, like so:
Console.WriteLine("something to print out");
So, we hypothesize that if we replace what was printed out with "hello world",
it will work.
So we try Console.WriteLine("hello world"). It works! Hurray. Ta-da. Done.

Yeah, I know. It's oversimplified. But you can see the types of reasoning
that are required to achieve such a task. Do this thought experiment on
enough problems and generalize what it takes to achieve them (don't try to
overgeneralize though!).

DO NOT THROW OUT the requirements. You cannot throw out computer vision
because you don't know how to implement it. Sensory perception is a
requirement for AGI for many reasons. So, just make it an assumption in your
design until you can work out the details. We'll do the same thought
experiment on computer vision as well to see how it can be integrated with
the whole system. For now though, we're just focusing on this simple
programming task.








[agi] Technological Singularity -- a work in progress

2010-09-21 Thread A. T. Murray
MindForth Programming Journal (MFPJ)

Tues.21.SEP.2010 -- (work in progress)

We are now in a strange situation as AI Mind coders. 
We have created an extremely powerful AI Mind at 
http://www.scn.org/~mentifex/mindforth.txt
but we have been so relentlessly in pursuit of basic 
AI functionality that many facets of our AI creation 
remain totally unexplored. Our most recent achievement -- 
yesterday -- was KB-exhaustive search of the AI Mind 
through input queries put to the knowledge base (KB) 
of the emerging artificial person. The MindForth AI can 
now discuss its own existence with human users, who 
may tell the AI about itself and question the AI about 
its own self-knowledge.

The AI Forthmind still exhibits quirky behaviour, but we 
have the opportunity now to track down each instance of 
quirkiness and to fix it on the most fundamental level. 
Simply put, the conceptual activations are out of whack. 
While the AI exhaustively searches its knowledge base (KB) 
for answers to questions, stray activations build up on 
peripheral concepts (not involved in the discussion) until 
suddenly the accumulating activations override the valid 
chain of thought and engender a mental aberration, 
a statement of nonsense. 

Let us try to solve one particular bug that looks serious. 
As we ask the newly KB-exhaustive AI "What are you?" 
and it answers with "I am (this)" and "I am (that)", 
we notice that, at some point, the verb AM in the 
responses starts to have zero (0) as its seq tag 
instead of the psi concept number for the noun at the 
end of the "I am..." idea. Such a situation is intolerable 
for a true AI, because every thought of the AI needs to 
lay down associative tracks for future retrieval and 
re-assertion of the same idea in its current formulation. 

-- 
http://robots.net/person/AI4U/diary/45.html




[agi] How long until human-level AI?

2010-09-19 Thread Ben Goertzel
Our paper "How long until human-level AI? Results from an expert
assessment" (based on a survey done at AGI-09) was finally accepted
for publication, in the journal Technological Forecasting & Social
Change ...

See the preprint at

http://sethbaum.com/ac/fc_AI-Experts.html

-- Ben Goertzel

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
Adjunct Professor of Cognitive Science, Xiamen University, China
b...@goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche




[agi] Re: David Jone's Design and Pseudo Tests Methodology: Was David Jone's Design and Psuedo Tests Methodology

2010-09-18 Thread Jim Bromer
On Fri, Sep 17, 2010 at 10:09 AM, Jim Bromer jimbro...@gmail.com wrote:

  Oh, by the way, it's "description", not "decription".  (I am only having some
 fun!  But talk about a psychological slip.  What was on your mind when you
 wrote "decription", I wonder?)


It wasn't death, so it must have had something to do with interpreting
women.





[agi] David Jone's Design and Psuedo Tests Methodology

2010-09-15 Thread Jim Bromer
So give us an example of something that you are going to test, and how your
experimental methodology would make it clear that the dots can be connected.


On Tue, Sep 14, 2010 at 3:36 PM, David Jones davidher...@gmail.com wrote:

 Jim,
  I was trying to backup my claim that the current approach is wrong by
 suggesting the right approach and making a prediction about its prospects.
 Then, when I do make progress or don't, you can evaluate my claims and
 decide whether I was right or wrong.

  On Tue, Sep 14, 2010 at 2:21 PM, David Jones davidher...@gmail.comwrote:

 But, I also claim that while bottom-up testing of algorithms will give us
 experience, it also takes a very long time to generate AGI solutions. If it
 takes a group of researchers 5 years to test out a design idea based on
 neural nets, we may learn something, but I can see from the nature and
 structure of the problem that it is unlikely to give us enough info to find
 a solution very quickly.

 So, my proposal is to adapt to the complexity of the problem and create
 pseudo designs that can be tested without having to fully implement the
 idea. I believe that there are design ideas that can be tested without
 implementing them and that it can be clear from the design and the pseudo
 tests that the design will work.

 The reason that many AGI designs must be implemented and can't be proven
 on paper very easily is that they are not very good designs. So, it is
 even clear to the designer that there is a problem, and they can't see how to
 connect the dots and make it work. Instead of realizing that it has flaws,
 they just think they can get it to work after some of it is implemented. I think
 there are better designs that do not have this problem. There are designs
 where you can see why they will work without having to resort to obscure
 mathematics and emergence.

 Those are the sorts of designs that I am confident I can create using my
 methodology.

 Dave







[agi] MindForth Programming Journal (MFPJ) 2010 September 13

2010-09-14 Thread A. T. Murray
Mon.13.SEP.2010 -- Inhibition Revisited 

We need to go into the MindForth free AI source code and 
start removing inhibition code from verbs of being, because 
in clarifying our rules of activation, we have concluded that 
predicate nominatives should endure inhibition, but not the 
be-verbs that link to them. 

Tues.14.SEP.2010 -- Code, Test, Debug 

In our coding yesterday we switched many times back and forth 
between debug and test, debug and test. We were aided by our 
initial search for, and culling out of, current instances of 
neural inhibition implemented in the NounPhrase, BeVerb, 
VerbPhrase and EnCog mind-modules. Having formulated yesterday 
two rules -- that inhibition should occur in pairs, and that 
reasons for query-response variance should drive inhibition -- 
we are now able to focus more carefully on where and why 
neural inhibition should occur in our open source code for 
artificial intelligence (AI). In the light of the new rules, 
we comment out the inhibition code in the EnCog thinking module, 
because we fear that it may interfere with our troubleshooting. 
Then we fire up the AI and put the same question to the AI 
seven times, with the following results. 

Human: what are you
Robot:  I  AM  ANDRU

Human: what are you
Robot:  I  AM  ME

Human: what are you
Robot:  I  AM  ME

Human: what are you
Robot:  I  AM  ME

Human: what are you
Robot:  I  AM  ME

Human: what are you
Robot:  I  AM  ME

Human: what are you
Robot:   I  AM  ANDRU

We interpret the above exchange as showing that the 
response-idea "I AM ANDRU" was initially inhibited as a 
pair of two identical thoughts, one in the innate knowledge 
of the EnBoot English bootstrap, and one in the response 
made by the AI when asked, "What are you?" The inhibition 
on the predicate nominative ANDRU lasts so long that 
the "I AM ANDRU" KB-response is suppressed during five 
subsequent queries, until finally the inhibition of 
ANDRU has worn off and the AI can once again answer, 
"I AM ANDRU". It is up to the mind-designer and/or AI coder 
to decide how severely to inhibit ideas in the AI, under 
considerations such as just how much mind-control we 
wish to engage in; what is the proper trade-off between 
time-duration of inhibited concepts and exhaustive KB-search; 
and so forth. Right now we are eager to have inhibition 
strong enough to aid our code-test-debug cycle, and we 
especially want to verify that each instance of inhibition 
is happening on the horns of an idea pair (old and new), so 
that no stray inhibitions are ruining a chain of AI thought. 
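
The observed cycle can be reproduced with a simple decay scheme 
(a hypothetical Python sketch; all numbers are invented, and only 
the pair-inhibited ANDRU is suppressed in this toy):

# ANDRU is inhibited twice at once (the idea pair: bootstrap copy
# plus the AI's own response); the inhibition then decays a little
# on every query until ANDRU resurfaces.
activation = {"ANDRU": 40, "ME": 30}
inhibition = {"ANDRU": 0, "ME": 0}

def what_are_you():
    best = max(activation, key=lambda c: activation[c] + inhibition[c])
    if best == "ANDRU":
        inhibition[best] -= 64       # pair inhibition: two thoughts
    for c in inhibition:             # inhibition wears off per query
        inhibition[c] = min(0, inhibition[c] + 10)
    return "I AM " + best

for _ in range(7):
    print(what_are_you())

With these invented numbers the toy prints ANDRU once, ME five 
times, and then ANDRU again, just as in the transcript above.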

Uh-oh. We've got some good results. Let's upload the code. 

http://www.scn.org/~mentifex/mindforth.txt

Meanwhile, here is what happened. In our hovering, looming 
quest for self-referential thought, we were testing the AI 
not only by asking "what are you" but by first telling the 
AI "you are a program". Then we were getting garbage out 
in response to our "what are you" queries. But instead of 
dealing with the central issue (properly implementing 
inhibition), we tried removing the annoyance of seeing 
"I ARE ME" and "PROGRAM ARES ME" as some actual responses 
that the AI had issued today. So we went into the VerbPhrase 
module and inserted the following code. 

  motjuste @ 58 = IF BeVerb EXIT THEN  \ BE;  14sep2010
  motjuste @ 57 = IF BeVerb EXIT THEN  \ AM;  14sep2010
  motjuste @ 66 = IF BeVerb EXIT THEN  \ IS;  14sep2010  
  motjuste @ 67 = IF BeVerb EXIT THEN  \ ARE; 14sep2010

The above code forces the AI to treat AM, IS and ARE 
not as normal, regular verb-forms, but instead lets the 
BeVerb module enforce strict rules for the usage of forms 
of the verb BE. Immediately we obtained the following 
dialog, which still has its problems but is on track. 

Transcript of AI Mind interview at 
6 31 38 o'clock on 14 September 2010.

Human: you are a program
Robot: PROGRAM  IS PROGRAM

Human: what are you
Robot:  PROGRAM

Human: what are you
Robot:  I  AM  PROGRAM

Further inspection of the activation-levels indicates that 
inhibition does not kick in until the AI states the idea, 
I AM PROGRAM, leaving out the article A because the 
EnArticle module is not being called. 

It is an eerie feeling to see the above results, as if 
the AI Mind is awakening before our very eyes. 




[agi] Video of talk I gave yesterday about Cosmism

2010-09-13 Thread Ben Goertzel
Hi all,

I gave a talk in Teleplace yesterday, about Cosmist philosophy and future
technology.  A video of the talk is here:

http://telexlr8.wordpress.com/2010/09/12/ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12/

I also put my practice version of the talk, that I did before the real
talk, online here:

http://www.vimeo.com/14930325

(The practice version is slower-paced than the Teleplace version, and lacks
the Q&A at the end, but it goes through
some points in a little more depth.)

Of course, the Cosmist Manifesto book says it all in more detail ... links
to the book are given along with the first
video linked above.

thx
Ben Goertzel





[agi] I'm giving a talk on Cosmist philosophy (and related advanced technology) in the Teleplace virtual world...

2010-09-09 Thread Ben Goertzel
It's 10AM Pacific time, Sunday September 12 2010

Be there or don't ;-)

If you're interested to join the conversation, but haven't used Teleplace
before, be sure to download it perhaps 15-30 minutes before the talk, so you
can get used to the software.  [It's much like Second Life but simpler and
more focused on presentation/collaboration...]

Thanks much to the great Giulio Prisco for setting it up ;)


Ben Goertzel on The Cosmist Manifesto in Teleplace,
September 12, 10am PST
http://telexlr8.wordpress.com/2010/09/09/reminder-ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12-10am-pst/


thx
Ben





[agi] Attn: Ben Goertzel -- SINGULARITY ALERT!!!

2010-09-07 Thread A. T. Murray
Hey, Ben. Arthur here. Remember the prediction of 
the 2012 Singularity? Well, guess what happened.
I just had the following conversation with my 
http://www.scn.org/~mentifex/mindforth.txt AI:

Human: boys make cars
Robot: CARS  WHAT ARE CARS

Human: boys make guns
Robot: GUNS  WHAT ARE GUNS

Human: boys
Robot: THE BOYS MAKE THE CARS

Human: boys
Robot: THE BOYS MAKE THE GUNS

Chief AGI guru Dr. Goertzel! The above is not 
a cherry-picked, post-mucho experimentation 
routine test result put out for PR purposes.
It just happened during hard-core AI coding.

Now, before everybody jumps in and snipes, 
"Aw, there's that crackpot Mentifex again," 
let me explain the significance of the I/O.

I have been coding MindForth AI for twelve 
years, since 1998 and maybe even earlier. 
Today on Mon.6.SEP.2010 for the first time 
I introduced quasi-neuronal inhibition into 
the free open-source AI source code. Why?

A year ago, around August or September of 
2009, you and I had our set-to (rumble?) 
concerning the AGI Roadmap and my posts 
there which were deleted (rolled back)
by Itamar Arel. No biggy. I did not fix 
Itamar's wagon last Halloween, so I won't 
fix it this Halloween, either. You see, I 
was maintaining my own AI Roadmap at 
http://code.google.com/p/mindforth/wiki/RoadMap
concurrently with my contributions to 
you guys' Roadmap. 

The main thing is, I was entering into 
the Roadmap Milestone of trying to achieve 
self-referential thought with my AI. 
That particular achievement requires 
covering a lot of ground, not just 
"you and I" interactions between the 
human user and the artificial AI Mind. 
The AI needs to acquire a general knowledge 
of the surrounding world, so that man and 
machine may discuss the AI as a participant 
in its world.

So at the end of 2009 I was coding the 
ability of the AI to respond to who-queries 
and what-queries, so that the AI can deal 
with questions like "Who are you?" and 
"What are you?" 

Recently I have perceived the need to 
get the AI to respond with multiple answers 
to queries about topics where the AI knows 
not a single fact but multiple facts, 
such as, "What do robots make?" I want 
the AI to be able to say such things as:

Robots make cars.
Robots make tools.
Robots make parts.
Robots make robots.

It dawned on me a few days ago that the 
AI software would have to suppress each 
given answer in order to move on to the 
next answer available in the knowledge 
base (KB). In other words, for the first 
time ever, I had to code _inhibition_ 
into the AI Mind. Tonight I have done 
so, and that simple conversation near the 
top of this message shows the results.

The same query, of just the word boys...,
elicits two different answers from the KB 
because each response from the AI goes 
immediately into inhibition in such a way 
as to allow access to the next fact 
queued up in the recesses of the AI KB.
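
The mechanism in miniature (a hypothetical Python sketch; MindForth 
itself does this with activation levels in its Psi concept arrays):

# Hypothetical response inhibition: the fact just spoken is knocked
# far below its rivals, so the same query surfaces the next fact
# queued up in the knowledge base.
kb = [
    {"fact": "THE BOYS MAKE THE CARS", "act": 50},
    {"fact": "THE BOYS MAKE THE GUNS", "act": 48},
]

def query_boys():
    best = max(kb, key=lambda e: e["act"])
    best["act"] -= 100     # inhibit the just-spoken fact
    return best["fact"]

print(query_boys())        # THE BOYS MAKE THE CARS
print(query_boys())        # THE BOYS MAKE THE GUNS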

This Singularity Alert from Mentifex 
may generate a collective "Huh?" from 
the list readership, but here it is.

Bye for now (and back to the salt mines :-)

Arthur
-- 
http://AiMind-i.com
http://code.google.com/p/mindforth 
http://doi.acm.org/10.1145/307824.307853
http://robots.net/person/AI4U/diary/40.html




[agi] Use Combinations of Constrained Methods

2010-09-05 Thread Jim Bromer
Various programs in the past have been very successful with problems that
were tightly constrained.  Sorry, I don't have examples at hand; however, I do
not think this is a controversial assertion.  The reason why these programs
worked was that they could assess any number of possibilities that the
problem space offered in a very short period of time.  I think that this
kind of problem model could be used in a test of a program to see if the
basic idea of the program was worthwhile.  Under these circumstances, where
the possibilities are limited, the computer program can use an exhaustive
search of the possibilities to test how good possible solutions look.  Although
many problems do not have a way to immediately rate the value of the best
candidates for a solution, the majority of the candidates typically can be
eliminated.
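
A sketch of the shape of such a program (hypothetical Python; the 
constraint and scoring functions are placeholders):

from itertools import product

# Hypothetical sketch: in a tightly constrained space, enumerate
# every candidate, eliminate the majority that violate a
# constraint, and rate whatever survives.
def solve(num_vars, domain, violates, score):
    best, best_score = None, float("-inf")
    for candidate in product(domain, repeat=num_vars):
        if violates(candidate):          # most candidates die here
            continue
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

# Toy instance: three strictly increasing digits, scored by sum.
print(solve(3, range(10),
            violates=lambda c: any(a >= b for a, b in zip(c, c[1:])),
            score=sum))                  # (7, 8, 9)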



An awareness of these limited successes is important because it gives us
some sense of what the solution might look like.  On the other hand, the
application of these limited successes to the real world is so limited that
it may seem that they offer little that is worthwhile.  I believe that is
wrong.



Suppose that you had a variety of methods of image analysis that were useful
in different circumstances, but these circumstances were very limited.  By
developing programs that combine these different methods you could have a
set of methods that might produce insight about an image that no one
particular method could produce, but the extended circumstances where
insight could be produced would be unusual, and it would still not produce
the kinds of results that you would want.  Now suppose that you just kept
working on this project, finding other methods that produced some kinds of
useful information for particular circumstances.  As you continued working
in this way, two things would likely occur.  Each new method would tend to be
a little less useful, and the complexity that would result from combining
these methods would make it a little more overwhelming for the computer to
examine the possibilities.  At some point, no matter how productive you
were, your sense of progress would come to an end.



At that point some genuine advancements would be necessary, but if you got
to that point you might find that some of those advancements were waiting
for you (if you had enough insight to realize it).



Jim Bromer





[agi] Very Cool Object Name Intent Test

2010-09-03 Thread David Jones
I just came up with an awesome test. Ask someone, anyone you know to name
something really big and obvious around them that they already know the
position of. Tell them to point to it and name it. Practically *every* time,
they will look at it just before or as they are naming it! And it feels
incredibly uncomfortable not to look at what you are naming as you are
trying to communicate that.

These are the sorts of built-in cues that children require to learn
language. Children know when they are being addressed, and they know how
to narrow the possible things that you intend to refer to when talking to
them. Pointing gestures, eye movements, etc. They are all very strong
*tells* (like in poker) regarding the intent of your speech.

We are constantly analyzing the actual intent of speakers and then
interpreting what they say. This is how children and adults learn language
and gain experience :)

I'm working on a rough-to-fine model of this in my Pseudo AGI design.





[agi] Pseudo Design as a Solution to AGI Design

2010-09-01 Thread David Jones
I've come to think lately that the solution to creating a realistic AGI
design is pseudo design. What do I mean? Not simulation... not practical
applications... not extremely detailed implementations. The design would
start at a high level and go into deeper detail as far as possible.

So, why would this be a solution? Well, before I mention the cons to this
approach, consider the following:

Problems it would solve:
1) There is no money and little interest for AGI. Even if you could get
money, I am 99.99% sure it would be spent wrong. I know, I know... I'm
supposed to be trying to get us money, not dissuade it. But, I really think
we are repeating the mistakes of earlier researchers that promised too much
on unjustified ideas. Then when they failed, it created AI winters, over and
over and over again. History repeats itself.

So, getting us more money would likely do harm in addition to too little
good, given the way it would be spent, for me to care. Extremely few people are
interested in AGI, and among those that are, their ideas about it are very,
very flawed. We tend to approach the problem using our typical heuristics
and problem solving techniques, but the problem is no longer amenable to
these techniques. For example, take the idea that pattern finding is sufficient
for intelligence. It has not been proven beyond my reasonable arguments
against it. Yet, people are getting funding and pursuing entire
architectures based on it. Does that really make sense? Nope. We must pseudo
test and pseudo design our algorithms first. Why? Because after spending
several years on these designs, which I can reasonably predict will fail with
a high likelihood, we'll be back at the same place we were. Wouldn't we be
much better off figuring that out earlier rather than later through fast
prototyping techniques, such as the one I mentioned (pseudo design and
testing)?


2) Implementations tend to get overwhelmed by the desire to show immediate
results or achieve practical short-term goals. This completely throws off
AGI implementations, because these other constraints are not compatible with
more important AGI constraints.


3) We could find a solution much faster... AGI is a massively constrained
CSP (Constraint Satisfaction Problem). The eternity puzzle is a great
example of such a problem. If you approach the eternity puzzle using
heuristics alone to generate a likely solution, such as how pretty the
pattern is, or how plausible it is that the designers created this design,
it is guaranteed to fail. This is especially true if it takes you even a few
minutes to reject the design. The puzzle has so many possibilities that if
you were to try to look at each one to see if it was a solution, it would
literally take an eternity.

So, how do you solve such problems? You start with the most constrained
parts of the puzzle first, and you use heuristics to guide your search toward
solution paths that are likely to contain a solution and away from solution
paths that are less likely to contain one. Most importantly, you have
to try a lot of solutions and reject the bad ones quickly, so that you can
get to the right one. How does this apply to AGI? It's almost exactly the
same. Current researchers are spending a lot of time on solutions that were
generated using bad heuristics (unjustifiable human reasoning heuristics).
Then they take forever to test them out (years) before they inevitably fail.
A better way is to test solutions with as little effort and time as
possible, such as by using pseudo design and testing techniques. This way
you can settle onto the right solution path much, much faster and not waste
time on a solution that clearly wouldn't work if you simply spent a bit more
time analyzing it. Yes, such an approach has problems also, such as
dishonesty or delusion in how the algorithms would actually work. I'll
mention these more below. But, we have those delusions and problems already
:) So, overall, this approach seems to be significantly better.
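
As a concrete rendering of the most-constrained-first idea on a toy 
CSP (a hypothetical Python sketch, nothing like the real eternity 
puzzle in scale):

# Most-constrained-variable-first backtracking: always branch on
# the variable with the fewest remaining options, so bad branches
# are rejected as early as possible.
def backtrack(assign, domains, consistent):
    if not domains:
        return assign
    var = min(domains, key=lambda v: len(domains[v]))
    for value in domains[var]:
        trial = {**assign, var: value}
        if consistent(trial):
            rest = {v: d for v, d in domains.items() if v != var}
            result = backtrack(trial, rest, consistent)
            if result is not None:
                return result
    return None   # reject this branch quickly and back up

# Toy instance: three variables that must all take different values.
domains = {"a": [1], "b": [1, 2], "c": [1, 2, 3]}
ok = lambda t: len(set(t.values())) == len(t)
print(backtrack({}, domains, ok))   # {'a': 1, 'b': 2, 'c': 3}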


4) if we could show that a pseudo AGI design works in sufficient detail and
with sufficient plausibility, it would likely change the minds of:
-many people that don't think AGI is possible,
-those that think it isn't possible in their lifetimes, and
-those that think it isn't worth investing in.
In other words... we would get the money, help and interest needed to make
it happen. Demos are great at generating interest in things that are very
complicated. This would be a fantastic demonstration.


Pros:
1) Fast design testing and rejection
2) Rough-to-fine design... would arrive at a solution faster because it uses
the Most-Constrained-Variable-First heuristic (such as has been used
to solve the eternity puzzle... you solve the most constrained portion first
to avoid having to try out many possibilities that will fail at the most
constrained part).
3) Less pressure for practical applications and more focus on important AGI
issues... this removes extra constraints

RE: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-29 Thread John G. Rose
Mike,

 

To put it into your own words here, mathematics is a delineation out of the
infinitely diversifiable, the same zone where design comes from. And
design needs a medium; the medium can be the symbolic expressions and
language of mathematics. And so, conveniently, here the mathematics is
expressible in a software language, computer system and database.

 

Don't forget, the designer in all of us needs a medium to express and
communicate; if not, it remains in a void. A designer emits design, and in
this case, AGI, the design is the/a designer. Sounds kind of hokey but true.
There are other narrow cases where this is true, but not in the grand way
AGI is. IOW, in a way, AGI will design itself; it's coming out of the
infinitely diversifiable and maintaining a communication with it as a
delineation within itself. It's self-organizingly injecting itself into this
chaotic world via our intended or unintended manifestations.

 

John  

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 



 

JAR: Define infinitely diversifiable.

 

I just did more or less.  A form/shape can be said to be delineated
(although I'm open to alternative terms, because delineation needn't
consist of using lines as such - as in my examples, it could involve using
amorphous masses, or pseudo-lines). 

 

Diversification - in this case creating new kinds of font - therefore
involves using 1) new principles of delineation - the kinds of
lines/visual elements used are radically changed, and 2) new principles of
**arrangement** of the visual elements - for example, various fonts there
can be said to conform to an "A" arrangement, but one or more shifted that
to a new "triangle" arrangement without any cross-bar in the middle; using
double/triple lines could be classified as either 1) or 2) I guess. An
innovative (although possibly PITA) arrangement would be to have elements that
move/are mobile. And delineation involves 3) introducing new kinds of
elements *in addition* to those already there, or deleting existing kinds of
elements.

 

"Diversifiable" merely recognizes the realities of the fields of art and
design, which is that they will - and a creative algorithm therefore would
have to be able to - infinitely/endlessly transform the constitution and
principles of delineation and depiction of any and all forms.

 

I think part of the problem here is that you guys think like mathematicians
and not designers - you see the world in terms of more or less rigidly
structured abstract forms (that allows for all geometric morphisms) - but
a designer has to think, consciously or unconsciously, much more fluidly in
terms of kaleidomorphic, freely structured and fluidly morphable abstract
forms. He sees abstract forms as infinitely diversifiable. You don't.

 

To do AGI, I'm suggesting - in fact, I'm absolutely sure - you will have to
start thinking in addition like designers. If you have contempt for design,
as most people here seem to do, it is actually you who deserve contempt.
God was a designer long before He took up maths.

 

 

From: J. Andrew Rogers jar.mail...@gmail.com

Sent: Wednesday, August 25, 2010 5:23 PM

To: AGI a...@listbox.com

Subject: Re: [agi] Re: Compressed Cross-Indexed Concepts

 

 

On Wed, Aug 25, 2010 at 9:09 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:

 

You do understand BTW that your creative algorithm must be able to produce
not just a limited collection of shapes [either squares or A's] but an
infinitely diversifiable collection.

 

 

Define infinitely diversifiable.

 

There are whole fields of computer science dedicated to small applications
that routinely generate effectively unbounded diversity in the strongest
possible sense. 


-- 

J. Andrew Rogers

 



 




---
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com

[agi] Wow.... just wow. (Adaptive AI)

2010-08-25 Thread David Jones
I accidentally stumbled upon the website of Adaptive AI. I must say, it is
by FAR the best AGI approach and design I have ever seen. As I read it
today and yesterday (haven't quite finished it all), I agreed with so much
of what he wrote that I could almost swear that I wrote it myself. He even
uses the key phrase I've begun to use myself: "explicit AGI design".
This dude is awesome. If you haven't read about it yet, please do:

http://www.adaptiveai.com/research/index.htm

Dave

PS: I don't agree with absolutely everything per se, such as the fuzzy
pattern matching stuff... because I just don't understand its specifics,
pros and cons well enough to agree or disagree. But, damn, this guy got enough of
it right that I have to applaud him regardless of the other details.



---
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Human Reasoning Examples

2010-08-23 Thread David Jones
Does anyone know of a list, book or links about human reasoning examples?
I'm having such a hard time finding info on this. I don't want to have to
create all the examples myself, but I don't know where to look.



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Alternative way to reverse engineer the brain

2010-08-20 Thread David Jones
Has anyone thought about sort of self-assembling nano electrodes or other
nano detectors that could probe the vast majority of neurons and important
structures in a very small brain (such as a gnat brain or a C. elegans worm,
or even a larger animal)?

It seems to me that this would be a hell of a lot easier than simulating a
brain, since there are way too many factors and dynamics involved to
get the simulation to be accurate. Maybe we could just invent a way to probe
every part of the brain in vivo.

Dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Natural Hijacked Behavioral Control

2010-08-19 Thread John G. Rose
I thought this was interesting when looked at in relation to evolution and a
parasitic intelligence - 

 

http://www.guardian.co.uk/science/2010/aug/18/zombie-carpenter-ant-fungus




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Language Acquisition TV Special

2010-08-19 Thread David Jones
I've become extremely fascinated with language acquisition. I am convinced
that we can tease out the algorithms that children use to learn language
from observations like the ones seen in the video link below. I'm about to
start watching the second video, but thought you guys might like watching
this too :) Check it out! Also, if you haven't done so yet, check out
William O'Grady's book "How Children Learn Language". I love that book.

http://www.youtube.com/watch?v=PZatrvNDOiE&NR=1

Dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Compressed Cross-Indexed Concepts

2010-08-19 Thread John G. Rose
An agent can only flip so many bits per second. If it gets stuck in a
computational conundrum it will waste energy that should be used for
survival purposes and the likelihood for agent death increases. 

 

Avoidance behavior for impossible computation is enforced.

 

Mathematics is a type of database for computational energy storage. All of
us multi-agent intelligences, mainly mathematicians, contribute to it over
time.

 

How long did it take to invent the wheel? Yet once the pattern is known, it
takes just a few bits to store.

 

That's one obvious method of the leveraging, but this could be, and is, used
all over the place. 
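
A toy sketch of that leveraging in Python (memoization with stand-in names;
an illustration of the principle, not a claim about how mathematics actually
stores anything):

def discover(n):
    # stand-in for an expensive derivation: naive exponential-time Fibonacci
    return n if n < 2 else discover(n - 1) + discover(n - 2)

library = {}                        # the "database" of known patterns
def lookup(n):
    if n not in library:
        library[n] = discover(n)    # pay the computational energy cost once
    return library[n]               # thereafter the pattern costs a few bits

lookup(30)   # slow: the wheel is being invented
lookup(30)   # fast: the wheel is merely remembered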

 

John

 

From: Jim Bromer [mailto:jimbro...@gmail.com] 



John

How exactly would a mathematical system that is able to compensate for unnecessary or
impossible computation work?  What do you mean by this?  And how
would this work to produce better integration of concepts and better
interpretation of concepts? 

 

On Fri, Aug 13, 2010 at 4:25 PM, John G. Rose johnr...@polyplexic.com
wrote:



 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]


 On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 The ideological would still need to be expressed mathematically.

 I don't understand this.  Computers can represent related data objects
that may
 be best considered without using mathematical terms (or with only
incidental
 mathematical functions related to things like the numbers of objects.)


The difference between data and code, or math and data, sometimes need not
be so dichotomous.



 I said:  I think the more important question is how a
general concept
 can be interpreted across a range of different kinds of ideas.  Actually this
is not so
 difficult, but what I am getting at is how are sophisticated
 conceptual  interrelations integrated and resolved?

 John said: Depends on the structure. We would want to build it such that
this
 happens at various levels or the various multidimensional densities. But
at the
 same time complex state is preserved until proven benefits show
themselves.

 Your use of the term 'densities' suggests that you are thinking about the
kinds of
 statistical relations that have been talked about a number of times in
this
 group.   The whole problem I have with statistical models is that they
don't
 typically represent the modelling variations that could be and would need
to be
 encoded into the ideas that are being represented.  For example a Bayesian
 Network does imply that a resulting evaluation would subsequently be
encoded
 into the network evaluation process, but only in a limited manner.  It
doesn't for
 example show how an idea could change the model, even though that would be
 easy to imagine.
 Jim Bromer


I also have some issues with models based heavily on statistics. When I was
referring to densities I really meant an interconnectional
multidimensionality in the multigraph/hypergraph intelligence network, IOW a
partly combinatorial edge of chaos. There is a combination of state and
computational potential energy, such that an incoming idea, represented as a
data/math combo, results in various partly self-organizational (SOM)
changes depending on how the key - the idea - affects computational energy
potential. And this is balanced against K-complexity related local extrema.

The statistical mechanisms I would use more for the narrow AI stuff
that is needed, and for situations where you can't come up with something
more concrete/discrete.


John




 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Who's on first?

2010-08-18 Thread Jim Bromer
http://search.yahoo.com/search?p=who%27s+on+first&ei=utf-8&fr=ie8
YouTube - Who's on first?



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Students' Understanding of the Equal Sign Not Equal, Professor Says

2010-08-17 Thread Jim Bromer
Students' Understanding of the Equal Sign Not Equal, Professor Says
http://www.sciencedaily.com/releases/2010/08/100810122200.htm


"The equal sign is pervasive and fundamentally linked to mathematics from
kindergarten through upper-level calculus," Robert M. Capraro says. "The
idea of symbols that convey relative meaning, such as the equal sign and
'less than' and 'greater than' signs, is complex and they serve as a
precursor to ideas of variables, which also require the same level of
abstract thinking."

The problem is students memorize procedures without fully understanding the
mathematics, he notes.

"Students who have learned to memorize symbols and who have a limited
understanding of the equal sign will tend to solve problems such as 4+3+2=(
)+2 by adding the numbers on the left, and placing it in the parentheses,
then add those terms and create another equal sign with the new answer," he
explains. "So the work would look like 4+3+2=(9)+2=11."

This response has been called a "running equal sign" -- similar to how a
calculator might work when the numbers and equal sign are entered as they
appear in the sentence, he explains. However, this understanding is
incorrect. The correct solution makes both sides equal. So the understanding
should be 4+3+2=(7)+2. Now both sides of the equal sign equal 9.
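
The two readings are easy to state operationally; a Python sketch (function
names are ours, not Capraro's):

def running_equals(terms, rhs_extra):
    # the incorrect "calculator" reading: 4+3+2=(9)+2=11
    return sum(terms) + rhs_extra

def balanced_blank(terms, rhs_extra):
    # the correct relational reading: pick the blank so both sides are equal
    return sum(terms) - rhs_extra

print(running_equals([4, 3, 2], 2))   # 11, the misconception
print(balanced_blank([4, 3, 2], 2))   # 7, so 4+3+2=(7)+2 and both sides are 9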



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Probabilty Processor

2010-08-17 Thread Jan Klauck
--- quotes

The US Defense Advanced Research Projects Agency financed the basic
research necessary to create a processor that thinks in terms of
probabilities instead of the certainties of ones and zeros.
(...)
So we have been rebuilding probability computing from the gate level
all the way up to the processor.
(...)
The probability processing that Lyric has invented doesn't do the
on/off processing of a normal logic circuit, but rather makes
transistors function more like tiny dimmer switches, letting electron
flow rates represent the probability of something happening.
(...)
Reynolds says that a data center filled with servers that are
calculating probabilities for, say, a financial model, will be able
to consolidate from thousands of servers down to a single GP5 appliance
to calculate probabilities.
(...)
Digital logic that takes 500 transistors to do a probability multiply
operation, for instance, can be done with just a few transistors on
the Lyric chips. With an expected factor of 1,000 improvement over
general purpose CPUs running probability algorithms, the energy
savings of using GP5s instead of, say, x64 chips will be immense.
(...)
programming language, which is called Probability Synthesis to
Bayesian Logic, or PSBL for short.
---

Hm. Wow?

(DARPA funds Mr Spock on a Chip)
http://www.theregister.co.uk/2010/08/17/lyric_probability_processor/
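
The operation itself is trivial in software; the chip's point is doing it in
a few transistors instead of ~500. A toy numerical sketch (our own, nothing
to do with Lyric's actual design):

import math

p_a, p_b = 0.62, 0.35

# one "probability multiply", the primitive a probability gate computes natively
p_and = p_a * p_b

# long chains of such multiplies underflow in fixed precision, which is why
# digital implementations of probability algorithms often work in the log domain
log_p = math.log(p_a) + math.log(p_b)
assert abs(math.exp(log_p) - p_and) < 1e-12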


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Re: [agi] P≠NP

2010-08-16 Thread Matt Mahoney
Does anyone have any comments on this proof? I don't have the mathematical 
background to tell if it is correct. But it seems related to the idea from 
algorithmic information theory that the worst case complexity for any algorithm 
is equal to the average case for compressed inputs. Then to show that P != NP 
you would show that SAT (specifically 3-SAT) with compressed inputs has 
exponential average case complexity. That is not quite the approach the paper 
takes, probably because compression is not computable.
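
One toy way to poke at the average case empirically (a brute-force Python
sketch with arbitrary constants, nothing like a proof): generate random
3-SAT instances near the hard clause-to-variable ratio and watch the
exhaustive-search cost grow with the number of variables.

import itertools, random

def random_3sat(n_vars, n_clauses, rng):
    # a clause is 3 distinct variables, each possibly negated
    return [[v * rng.choice([-1, 1]) for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(clauses, n_vars):
    # brute force: 2^n assignments, so runtime roughly doubles per added variable
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in c) for c in clauses):
            return True
    return False

rng = random.Random(0)
for n in (10, 13, 16):
    inst = random_3sat(n, int(4.26 * n), rng)   # ~4.26 clauses/var: the hard region
    print(n, satisfiable(inst, n))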

 -- Matt Mahoney, matmaho...@yahoo.com





From: Kaj Sotala xue...@gmail.com
To: agi agi@v2.listbox.com
Sent: Thu, August 12, 2010 2:18:13 AM
Subject: [agi] Re: [agi] P≠NP

2010/8/12 John G. Rose johnr...@polyplexic.com

 BTW here is the latest one:

 http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf

See also:

http://www.ugcs.caltech.edu/~stansife/pnp.html - brief summary of the proof

Discussion about whether it's correct:

http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-not-equal-to-np/
http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p≠np/
http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p≠np/
http://rjlipton.wordpress.com/2010/08/11/deolalikar-responds-to-issues-about-his-p≠np-proof/

http://news.ycombinator.com/item?id=1585850

Wiki page summarizing a lot of the discussion, as well as collecting
many of the links above:

http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper#Does_the_argument_prove_too_much.3F



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Digital incremental transmissions

2010-08-14 Thread Steve Richfield
Long ago I figured out how to build digital incremental transmissions. What
are they? Imagine a sausage-shaped structure with the outside being many
narrow reels of piano wire, with electrical and computer connections on the
end. Under computer control, each of the rings can be independently
controlled to rotate a specific distance playing one strand out while
reeling another strand in, pull a specific amount, or execute a long
coordinated sequence of moves. Further, this is a true infinitely-variable
transmission, so that if you command a ring to turn REALLY slowly, you can
exert nearly limitless force, or at least enough to destroy the structure.
Hence, obvious software safeguards are needed. Lowering a weight recovers
the energy to use elsewhere, or returns it out the supply lines. In short, a
complete android musculature could be built this way, and take only a tiny
amount of space - MUCH less than in our bodies, or with motors as is now the
case. Little heat would be generated because this system is fundamentally
efficient.
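
The software safeguard might be as simple as this Python sketch (names and
limits invented for illustration, not taken from the actual design):

MAX_TORQUE_NM = 40.0   # hypothetical limit below what would destroy the structure

class Ring:
    # one wire reel of the transmission, under computer control
    def __init__(self):
        self.angle = 0.0   # radians

    def rotate(self, delta, torque):
        # turning REALLY slowly can exert nearly limitless force,
        # so refuse any commanded torque beyond the structural limit
        if abs(torque) > MAX_TORQUE_NM:
            raise ValueError("commanded torque exceeds structural limit")
        self.angle += delta   # playing one strand out reels its pair in

Ring().rotate(delta=0.05, torque=12.0)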

Nearly all of the components are cut from flat metal stock, akin to
mechanical clock parts, only with much beefier shapes. Hence, it is both
cheap and strong. Think horsepower, available from any strand. The strand
pairs would be hooked up to be flexor and extensor muscles for the many
joints, etc.

I haven't actually built it because I haven't (yet) found a customer who
wanted it badly enough to pay the development costs and then wait a year
for it. However, this would sure be an enabling system for people who
want to build REAL robots.

Does anyone here have ANY idea what to do with this, other than putting it
back on the shelf and waiting another decade?

Steve



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Neuroplasticity Explanation Hypothesis

2010-08-14 Thread David Jones
I just had this really interesting idea about neuroplasticity as I'm sitting
here listening to speeches at the Singularity Summit.

I was trying to figure out how neuroplasticity works and why the hell is it
that the brain can find the same patterns in input from completely different
senses. For example, if born without eyes, we can see with touch. If born
without hearing and vision, we can also see and hear with touch! (an example
of this is a blind and deaf person putting their hand on your mouth and neck
to detect and understand your speech. this is a real example).

How the hell does the brain do that?!

The brain knows how to process certain inputs just the right way. For
example, it knows to group things by color or that faces have certain
special meanings. How does it know to process this sensory input the right
way? I don't think it's purely pattern recognition. Actually, it cannot be
just pattern recognition alone.

So, I realized that it would make sense that cells don't create a network
and wait for input. The cells are not specialized *before* they get sensory
inputs or other types of input (such as input from nearby cells). These
cells specialize AFTER receiving input! That means that our DNA defines what
patterns we should look for and how to process those patterns. Guess what
that means! That means that if these patterns come from completely different
sensory organs, the brain can still recognize the patterns and the cells
that receive these patterns can specialize just right to process them a
certain way! That would perfectly (so I believe) explain neuroplasticity.

Basically, it is a side-effect of the specific design of our brains. But, it
means that the brain is not just a pattern recognizer. It has built-in
knowledge which is absolutely essential to process inputs correctly. This
supports my hypothesis that artificial neural nets are not correctly designed
to be able to achieve AGI the way the brain does.

This would also explain my beliefs that the brain knows how to process in
ways that correctly represent true real-world relationships. It would also
explain why this processing can self assemble correctly. The knowledge for
how to process inputs is built in (my hypothesis), but it self assembles only
when inputs that have certain patterns and chemical signals are presented to
the cells.
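
A toy version of specialize-after-input in Python (a sketch under invented
assumptions, not a model of real cells): identical units end up tuned to
whichever input stream they happen to win, competitive-learning style.

import random

rng = random.Random(1)

def specialize(units, inputs, rate=0.3):
    # every unit starts identical; its final tuning depends only on the
    # inputs it receives, not on any pre-wired modality label
    for x in inputs:
        winner = min(units, key=lambda u: abs(u["pref"] - x))
        winner["pref"] += rate * (x - winner["pref"])
    return units

units = [{"pref": 0.5} for _ in range(4)]             # identical "cells"
stream = [rng.gauss(0.2, 0.05) for _ in range(200)]   # one sense's statistics
stream += [rng.gauss(0.8, 0.05) for _ in range(200)]  # another sense's statistics
rng.shuffle(stream)
specialize(units, stream)
print(sorted(round(u["pref"], 2) for u in units))     # units split across the streams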

This would explain the confusion between purely self-assembling models
and built-in knowledge of how certain patterns or input should be processed.
Clearly, the brain does not evolve to process world input correctly every
single time a person is born. We solved this problem already through our DNA
and billions of years of evolution. So, the solutions to the problems are
built into our DNA.

This would also explain how the brain is able to handle other important
functions such as: memory, hierarchical relationships, etc. When the brain
detects the need and the right patterns of specialized cells, it can then
create even more specialized cells or cellular changes to perform: memory
and other important brain functions.

I also came up with an interesting idea to explain why people go into comas.
I could be completely off. It's just an uneducated guess. The cause of comas
could be that the brain circuit that controls attention has been damaged.
The attention part of the brain probably drives everything by deciding what
circuits to activate and why! Without that circuit creating activity, the
brain's neurons have no reason to fire normally and the brain's normal
activity does not occur.


Dave



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Nao Nao

2010-08-13 Thread Ian Parker
There is one further point which is absolutely fundamental
in operating system/compiler theory. The user should be unaware of how the
work is divided up. A robot may simply have a WiFi router and very little
else, or it might have considerable on board processing. The user should not
be aware of this.


  = Ian Parker



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Grand Cooperative Projects

2010-08-13 Thread Mike Tintner
like this ( the Genome Project):

http://www.nytimes.com/2010/08/13/health/research/13alzheimer.html?_r=1&th&emc=th

should become an ever bigger part of sci. & tech. Of course, with Alzheimer's 
there is a great deal of commonly recognized ground. Not so with AGI. It might 
be interesting to speculate on what could be common ground in AGI & associated 
robotics.  Common technological approaches, like the common protocols for 
robots suggested here, seem to me vulnerable to the probability that the chosen 
technologies may be simply wrong for AGI.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread Jim Bromer
On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com wrote:

 The ideological would still need to be expressed mathematically.


I don't understand this.  Computers can represent related data objects that
may be best considered without using mathematical terms (or with only
incidental mathematical functions related to things like the numbers of
objects.)



 I said:  I think the more important question is how a general concept
 can be interpreted across a range of different kinds of ideas.  Actually this is
 not so difficult, but what I am getting at is how are sophisticated
 conceptual  interrelations integrated and resolved?

 John said: Depends on the structure. We would want to build it such that
 this happens at various levels or the various multidimensional densities.
 But at the same time complex state is preserved until proven benefits show
 themselves.


Your use of the term 'densities' suggests that you are thinking about the
kinds of statistical relations that have been talked about a number of times
in this group.   The whole problem I have with statistical models is that
they don't typically represent the modelling variations that could be and
would need to be encoded into the ideas that are being represented.  For
example a Bayesian Network does imply that a resulting evaluation would
subsequently be encoded into the network evaluation process, but only in a
limited manner.  It doesn't for example show how an idea could change the
model, even though that would be easy to imagine.
Jim Bromer


On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 
  Well, if it was a mathematical structure then we could start developing
  prototypes using familiar mathematical structures.  I think the structure
 has
  to involve more ideological relationships than mathematical.

 The ideological would still need to be expressed mathematically.

  For instance
  you can apply an idea to your own thinking in such a way that you are
  capable of (gradually) changing how you think about something.  This
 means
  that an idea can be a compression of some greater change in your own
  programming.

 Mmm yes or like a key.

  While the idea in this example would be associated with a
  fairly strong notion of meaning, since you cannot accurately understand
 the
  full consequences of the change it would be somewhat vague at first.  (It
  could be a very precise idea capable of having strong effect, but the
 details of
  those effects would not be known until the change had progressed.)
 

 Yes. It would need to have receptors, an affinity something like that, or
 somehow enable an efficiency change.

  I think the more important question is how a general concept can be
  interpreted across a range of different kinds of ideas.  Actually this is
 not so
  difficult, but what I am getting at is how are sophisticated conceptual
  interrelations integrated and resolved?
  Jim

 Depends on the structure. We would want to build it such that this happens
 at various levels or the various multidimensional densities. But at the
 same
 time complex state is preserved until proven benefits show themselves.

 John





 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread Jim Bromer
It would be easy to relativize a weighted network so that it could be used
to include ideas that can effectively reshape the network (or at least
reshape the virtual network) but it is not easy to see how this could be
done intelligently enough to produce actual intelligence.  But maybe I
should try it sometime just to get some idea of what it would do.
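
A toy sketch of the easy half in Python (construction and numbers invented
for illustration): one distinguished "idea" activation gates the weights and
so reshapes the virtual network.

def propagate(weights, gates, idea, x):
    # effective weight = base weight scaled by how strongly the current
    # idea activates that row's gate
    out = []
    for row, g in zip(weights, gates):
        out.append(sum(w * (1.0 + g * idea) * xi for w, xi in zip(row, x)))
    return out

weights = [[0.2, -0.5], [0.7, 0.1]]
gates = [0.0, 1.0]   # only the second row is idea-sensitive
print(propagate(weights, gates, idea=0.0, x=[1.0, 1.0]))   # baseline network
print(propagate(weights, gates, idea=2.0, x=[1.0, 1.0]))   # reshaped network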
Jim Bromer



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Single Neurons Can Detect Sequences

2010-08-13 Thread Jim Bromer
Single Neurons Can Detect Sequences
http://www.sciencedaily.com/releases/2010/08/100812151632.htm



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Nao Nao

2010-08-13 Thread John G. Rose
I suppose that part of the work that it does is making people feel good
and being a neat conversation piece.

 

Interoperability and communications protocols can facilitate the path to
AGI. Just like the many protocols used on the internet. I haven't looked at
any for robotics specifically though there definitely are some. But having
worked with many myself I am familiar with limitations, shortcomings and
issues. Protocols is where it's at when making diverse systems work together
and having good protocols initially can save vast amounts of engineering
work. It's bang for the buck in a big way.
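
As a sketch of what the first message of such a protocol might look like
(Python/JSON, with field names invented for illustration; no such standard
exists):

import json

# hypothetical capability announcement a Nao, a vacuum cleaner, or a video
# cam would broadcast on joining the network
hello = {
    "protocol": "robo-net/0.1",
    "device_id": "nao-7f3a",
    "capabilities": ["speech", "grasp:small-objects", "locomotion:biped"],
    "accepts": ["move_to", "say", "pick_up"],
}
print(json.dumps(hello))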

 

John


From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 9:02 AM
To: agi
Subject: Re: [agi] Nao Nao

 

By "not made to perform work", you mean that it is not sturdy enough? Are
any half-way AGI robots made to perform work, vs production line robots? (I
think the idea of performing useful work should be a goal).

 

The protocol is obviously a good idea, but you're not suggesting it per se
will lead to AGI?

 

From: John G. Rose [mailto:johnr...@polyplexic.com]

Sent: Thursday, August 12, 2010 3:17 PM

To: agi [mailto:agi@v2.listbox.com]

Subject: RE: [agi] Nao Nao

 

Typically the demo is some of the best that it can do. It looks like the
robot is a mass produced model that has some really basic handling
capabilities, not that it is made to perform work. It could still have
relatively advanced microprocessor and networking system, IOW parts of the
brain could run on centralized servers. I don't think they did that BUT it
could.

 

But it looks like one Nao can talk to another Nao. What's needed here is a
standardized robot communication protocol. So a Nao could talk to a vacuum
cleaner or a video cam or any other device that supports the protocol.
Companies may resist this at first as they want to grab market share and
don't understand the benefit.

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 4:56 AM
To: agi
Subject: Re: [agi] Nao Nao

 

John,

 

Any more detailed thoughts about its precise handling capabilities? Did it,
first, not pick up the duck independently,  (without human assistance)? If
it did,  what do you think would be the range of its object handling?  (I
had an immediate question about all this - have asked the site for further
clarification - but nothing yet).

 

From: John G. Rose [mailto:johnr...@polyplexic.com]

Sent: Thursday, August 12, 2010 5:46 AM

To: agi [mailto:agi@v2.listbox.com]

Subject: RE: [agi] Nao Nao

 

I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 

Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John



 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread John G. Rose


 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com
 wrote:
 The ideological would still need to be expressed mathematically.
 
 I don't understand this.  Computers can represent related data objects
that may
 be best considered without using mathematical terms (or with only
incidental
 mathematical functions related to things like the numbers of objects.)
 

The difference between data and code, or math and data, sometimes need not
be so dichotomous.

 
  I said:  I think the more important question is how a
 general concept
  can be interpreted across a range of different kinds of ideas.  Actually this
is not so
 difficult, but what I am getting at is how are sophisticated
 conceptual  interrelations integrated and resolved?
 
 John said: Depends on the structure. We would want to build it such that
this
 happens at various levels or the various multidimensional densities. But
at the
 same time complex state is preserved until proven benefits show
themselves.
 
 Your use of the term 'densities' suggests that you are thinking about the
kinds of
 statistical relations that have been talked about a number of times in
this
 group.   The whole problem I have with statistical models is that they
don't
 typically represent the modelling variations that could be and would need
to be
 encoded into the ideas that are being represented.  For example a Bayesian
 Network does imply that a resulting evaluation would subsequently be
encoded
 into the network evaluation process, but only in a limited manner.  It
doesn't for
 example show how an idea could change the model, even though that would be
 easy to imagine.
 Jim Bromer
 

I also have some issues with models based heavily on statistics. When I was
referring to densities I really meant an interconnectional
multidimensionality in the multigraph/hypergraph intelligence network, IOW a
partly combinatorial edge of chaos. There is a combination of state and
computational potential energy, such that an incoming idea, represented as a
data/math combo, results in various partly self-organizational (SOM)
changes depending on how the key - the idea - affects computational energy
potential. And this is balanced against K-complexity related local extrema.

The statistical mechanisms I would use more for the narrow AI stuff
that is needed, and for situations where you can't come up with something
more concrete/discrete.

John



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Re: [agi] P≠NP

2010-08-12 Thread Kaj Sotala
2010/8/12 John G. Rose johnr...@polyplexic.com

 BTW here is the latest one:

 http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf

See also:

http://www.ugcs.caltech.edu/~stansife/pnp.html - brief summary of the proof

Discussion about whether it's correct:

http://rjlipton.wordpress.com/2010/08/08/a-proof-that-p-is-not-equal-to-np/
http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p≠np/
http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p≠np/
http://rjlipton.wordpress.com/2010/08/11/deolalikar-responds-to-issues-about-his-p≠np-proof/
http://news.ycombinator.com/item?id=1585850

Wiki page summarizing a lot of the discussion, as well as collecting
many of the links above:

http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper#Does_the_argument_prove_too_much.3F


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Nao Nao

2010-08-12 Thread Ian Parker
Just two quick comments. CCTV is already networked; the Police can track
smoothly from one camera to another. Second comment is that if you are (say)
taking a heavy load upstairs you need 2 robots, one holding each end. A
single PC can control them both. In fact a robot workshop will be a kind of
cloud, in terms of cloud computing.


  - Ian Parker

On 12 August 2010 05:46, John G. Rose johnr...@polyplexic.com wrote:

 I wasn't meaning to portray pessimism.



 And that little sucker probably couldn't pick up a knife yet.



 But this is a paradigm change happening where we will have many networked
 mechanical entities. This opens up a whole new world of security and privacy
 issues...



 John



 *From:* David Jones [mailto:davidher...@gmail.com]

 Way too pessimistic in my opinion.

 On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
 wrote:

 Aww, so cute.



 I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
 sensory information back to the main servers with all the other Nao's all
 collecting personal data in a massive multi-agent geo-distributed
 robo-network.



 So cuddly!



 And I wonder if it receives and executes commands, commands that come in
 over the network from whatever interested corporation or government pays the
 most for access.



 Such a sweet little friendly Nao. Everyone should get one :)



 John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-12 Thread Ian Parker
Someone who really believes that P=NP should go to Saudi Arabia or the
Emirates and crack the Blackberry code.


  - Ian Parker

On 12 August 2010 06:10, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 Re: [agi] Re: Compressed Cross-Indexed Concepts
 
  David,
  I am not a mathematician although I do a lot of computer-related
  mathematical work of course.  My remark was directed toward John
  who had suggested that he thought that there is some sophisticated
  mathematical sub system that would (using my words here) provide such a
  substantial benefit to AGI that its lack may be at the core of the
  contemporary problem.  I was saying that unless this required mathemagic
  then a scalable AGI system demonstrating how effective this kind of
  mathematical advancement could probably be simulated using contemporary
  mathematics.  This is not the same as saying that AGI is solvable by
 sanitized
  formal representations any more than saying that your message is a
 sanitized
  formal statement because it was dependent on a lot of computer
  mathematics in order to send it.  In other words I was challenging John
 at
 that
  point to provide some kind of evidence for his view.
 

 I don't know if we need to create some new mathemagics, a breakthrough, or
 whatever. I just think using existing math to engineer it, using the math
 as if it were software, is what should be done. But you may be right - perhaps a
 proof of P=NP or something similar is needed. I don't think so though.

 The main goal would be to leverage existing math to compensate for
 unnecessary and/or impossible computation. We don't need to re-evolve the
 wheel as we already figured that out. And computers are v. slow compared to
 other physical computations that are performed in the natural physical
 world.

 Maybe not - developing a system from scratch that discovers all of the
 discoveries over the millennia of science and civilization? Would that be
 possible?

  I then went on to say, that for example, I think that fast SAT solutions
 would
  make scalable AGI possible (that is, scalable up to a point that is way
 beyond
  where we are now), and therefore I believe that I could create a
 simulation
  of an AGI program to demonstrate what I am talking about.  (A simulation
 is
  not the same as the actual thing.)
 
  I didn't say, nor did I imply, that the mathematics would be all there is
 to it.  I
  have spent a long time thinking about the problems of applying formal and
  informal systems to 'real world' (or other world) problems and the
  application of methods is a major part of my AGI theories.  I don't
 expect
 you
  to know all of my views on the subject but I hope you will keep this in
 mind
  for future discussions.

 Using available skills and tools as best we can. And inventing
 new tools by engineering utilitarian and efficient mathematical structure.
 Math is just like software in all this but way more powerful. And using the
 right math, the most general where it is called for and specific/narrow
 when
 needed. I don't see a problem with the specific most of the time but I
 don't
 know if many people get the general. Though it may be an error or lack of
 understanding on my part...

 John



 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Re: [agi] P≠NP

2010-08-12 Thread Ian Parker
This is a very powerful argument, but it is not quite a rigorous
proof. The thermodynamic argument is like saying that because all zeros below 10^20 have
real part 0.5, there are no non-trivial zeros for which that
is not the case. What I am saying is pedantic, very pedantic, but it will still
affect Clay's view of the matter.

You will *not* be able to decode Blackberry, of course.


  - Ian Parker

2010/8/12 John G. Rose johnr...@polyplexic.com

 BTW here is the latest one:



 http://www.win.tue.nl/~gwoegi/P-versus-NP/Deolalikar.pdf




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Nao Nao

2010-08-12 Thread Mike Tintner
John,

Any more detailed thoughts about its precise handling capabilities? Did it, 
first, not pick up the duck independently,  (without human assistance)? If it 
did,  what do you think would be the range of its object handling?  (I had an 
immediate question about all this - have asked the site for further 
clarification - but nothing yet).


From: John G. Rose 
Sent: Thursday, August 12, 2010 5:46 AM
To: agi 
Subject: RE: [agi] Nao Nao


I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked 
mechanical entities. This opens up a whole new world of security and privacy 
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 



Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays sensory 
information back to the main servers with all the other Nao's all collecting 
personal data in a massive multi-agent geo-distributed robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in over 
the network from whatever interested corporation or government pays the most 
for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


RE: [agi] Nao Nao

2010-08-12 Thread John G. Rose
Typically the demo is some of the best that it can do. It looks like the
robot is a mass produced model that has some really basic handling
capabilities, not that it is made to perform work. It could still have
relatively advanced microprocessor and networking system, IOW parts of the
brain could run on centralized servers. I don't think they did that BUT it
could.

 

But it looks like one Nao can talk to another Nao. What's needed here is a
standardized robot communication protocol. So a Nao could talk to a vacuum
cleaner or a video cam or any other device that supports the protocol.
Companies may resist this at first as they want to grab market share and
don't understand the benefit.

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 4:56 AM
To: agi
Subject: Re: [agi] Nao Nao

 

John,

 

Any more detailed thoughts about its precise handling capabilities? Did it,
first, not pick up the duck independently,  (without human assistance)? If
it did,  what do you think would be the range of its object handling?  (I
had an immediate question about all this - have asked the site for further
clarification - but nothing yet).

 

From: John G. Rose [mailto:johnr...@polyplexic.com]

Sent: Thursday, August 12, 2010 5:46 AM

To: agi [mailto:agi@v2.listbox.com]

Subject: RE: [agi] Nao Nao

 

I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 

Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John



 




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Nao Nao

2010-08-12 Thread Mike Tintner
By "not made to perform work", you mean that it is not sturdy enough? Are any 
half-way AGI robots made to perform work, vs production line robots? (I think 
the idea of performing useful work should be a goal).

The protocol is obviously a good idea, but you're not suggesting it per se will 
lead to AGI?


From: John G. Rose 
Sent: Thursday, August 12, 2010 3:17 PM
To: agi 
Subject: RE: [agi] Nao Nao


Typically the demo is some of the best that it can do. It looks like the robot 
is a mass produced model that has some really basic handling capabilities, not 
that it is made to perform work. It could still have relatively advanced 
microprocessor and networking system, IOW parts of the brain could run on 
centralized servers. I don't think they did that BUT it could.

 

But it looks like one Nao can talk to another Nao. What's needed here is a 
standardized robot communication protocol. So a Nao could talk to a vacuum 
cleaner or a video cam or any other device that supports the protocol. 
Companies may resist this at first as they want to grab market share and don't 
understand the benefit.

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Thursday, August 12, 2010 4:56 AM
To: agi
Subject: Re: [agi] Nao Nao

 

John,

 

Any more detailed thoughts about its precise handling capabilities? Did it, 
first, not pick up the duck independently,  (without human assistance)? If it 
did,  what do you think would be the range of its object handling?  (I had an 
immediate question about all this - have asked the site for further 
clarification - but nothing yet).

 

From: John G. Rose 

Sent: Thursday, August 12, 2010 5:46 AM

To: agi 

Subject: RE: [agi] Nao Nao

 

I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But this is a paradigm change happening where we will have many networked 
mechanical entities. This opens up a whole new world of security and privacy 
issues...  

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 

Way too pessimistic in my opinion. 

On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays sensory 
information back to the main servers with all the other Nao's all collecting 
personal data in a massive multi-agent geo-distributed robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in over 
the network from whatever interested corporation or government pays the most 
for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John




---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Fwd: [singularity] NEWS: Max More is Running for Board of Humanity+

2010-08-12 Thread Ben Goertzel
-- Forwarded message --
From: Natasha Vita-More nata...@natasha.cc
Date: Thu, Aug 12, 2010 at 1:02 PM
Subject: [singularity] NEWS: Max More is Running for Board of Humanity+
To: singularity singular...@v2.listbox.com


 Friends,

It is my pleasure to endorse Max More's candidacy for joining the Board of
Directors of Humanity+.

Today is the last day to become a member of Humanity+ in order to vote for
Max as a new Board member.   Voting opens this weekend!

Please join now!  http://humanityplus.org/join/

Thank you for your support of Max!

Natasha


Natasha Vita-More http://www.natasha.cc/

(If you have any questions, please email me off list.)



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
Adjunct Professor of Cognitive Science, Xiamen University, China
b...@goertzel.org

"I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too." -- Fyodor Dostoevsky



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


Re: [agi] Nao Nao

2010-08-12 Thread Ian Parker
We are getting down to some of the nitty gritty. To a considerable extent
what is holding robotics back is the lack of common standards. We can think
about what we might need. One would instinctively start with a CAD/CAM
package like ProEngineer. We can thus describe a robot in terms of assemblies
and parts. A single joint is a part; a human finger, with its 3 joints, is an
assembly. A hand is an assembly. We get this by using CAD.

A robotic language has to be composed as follows.

class Part {
    double[] position;          // joint coordinates of this part
}

class Assembly {
    Part[] parts;               // e.g. a finger: an assembly of 3 joint Parts
    Assembly[] subAssemblies;   // e.g. a hand: an assembly of finger assemblies
}

An assembly/part will have a position. The simplest command is to move from
one position to another. Note that a position is a multidimensional quantity
and describes the positions of each part.

*Pick up ball* is a complex command. We first have to localise the ball,
determine the position required to grasp the ball, and then put the parts
into a position so that the ball moves into a new position.

Sounds complicated? Yes it is, but a lot of the basic work has already been
done. The first time a task is performed the system would have to compute
from first principles. The second time it would have some stored positions.
The system could *learn*.

A position is a (multidimensional) vector; 2 robots will have twice the
dimensions of a single robot.

*Move bed upstairs* is a twin robot problem, but no different in principle
from a single robot problem. Above all I think we must start off
mathematically and construct a language of maximum generality. It should be
pointed out too that there programs which will evaluate forces in a
multi-limb environment. In fact matrix theory was devised in the 19th
century.


  - Ian Parker

On 12 August 2010 15:17, John G. Rose johnr...@polyplexic.com wrote:

 Typically the demo is some of the best that it can do. It looks like the
 robot is a mass produced model that has some really basic handling
 capabilities, not that it is made to perform work. It could still have
 relatively advanced microprocessor and networking system, IOW parts of the
 brain could run on centralized servers. I don't think they did that BUT it
 could.



 But it looks like one Nao can talk to another Nao. What's needed here is a
 standardized robot communication protocol. So a Nao could talk to a vacuum
 cleaner or a video cam or any other device that supports the protocol.
 Companies may resist this at first as they want to grab market share and
 don't understand the benefit.



 John



 *From:* Mike Tintner [mailto:tint...@blueyonder.co.uk]
 *Sent:* Thursday, August 12, 2010 4:56 AM
 *To:* agi
 *Subject:* Re: [agi] Nao Nao



 John,



 Any more detailed thoughts about its precise handling capabilities? Did it,
 first, not pick up the duck independently,  (without human assistance)? If
 it did,  what do you think would be the range of its object handling?  (I
 had an immediate question about all this - have asked the site for further
clarification - but nothing yet).



 *From:* John G. Rose johnr...@polyplexic.com

 *Sent:* Thursday, August 12, 2010 5:46 AM

 *To:* agi agi@v2.listbox.com

 *Subject:* RE: [agi] Nao Nao



 I wasn't meaning to portray pessimism.



 And that little sucker probably couldn't pick up a knife yet.



 But this is a paradigm change happening where we will have many networked
 mechanical entities. This opens up a whole new world of security and privacy
 issues...



 John



 *From:* David Jones [mailto:davidher...@gmail.com]

 Way too pessimistic in my opinion.

 On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com
 wrote:

 Aww, so cute.



 I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
 sensory information back to the main servers with all the other Nao's all
 collecting personal data in a massive multi-agent geo-distributed
 robo-network.



 So cuddly!



 And I wonder if it receives and executes commands, commands that come in
 over the network from whatever interested corporation or government pays the
 most for access.



 Such a sweet little friendly Nao. Everyone should get one :)



 John







Re: [agi] Anyone going to the Singularity Summit?

2010-08-12 Thread Steve Richfield
Ben,

There is obvious confusion here. MOST mutations harm, but occasionally one
helps. By selecting for a particular difficult-to-achieve thing, like long
lifespan, we can discard the harmful mutations while selecting for the
helpful ones. However, when selecting for something harmful and easy to
achieve, like the presence of genes that shorten lifespan, the selection
process is SO non-specific that it can't tell us much of anything. There are countless
mutations that kill WITHOUT conferring compensatory advantages. I could see
stressing the flies in various ways without controlling for lifespan, but
controlling for short lifespan in the absence of such stresses would seem to
be completely worthless. Of course, once stressed, you would also be seeing
genes to combat those (irrelevant) stresses.

In short, I still haven't heard words that suggest that this can go
anywhere, though it sure would be wonderful (like you and I might live twice
as long) if some workable path could be found.

I still suspect that the best path is in analyzing the DNA of long-living
people, rather than that of fruit flies. Perhaps there is some way to
combine the two approaches?

Steve

On Wed, Aug 11, 2010 at 8:37 PM, Ben Goertzel b...@goertzel.org wrote:



 On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben,

 It seems COMPLETELY obvious (to me) that almost any mutation would shorten
 lifespan, so we shouldn't expect to learn much from it.



 Why then do the Methuselah flies live 5x as long as normal flies?  You're
 conjecturing this is unrelated to the dramatically large number of SNPs with
 very different frequencies in the two classes of populations???

 ben









Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Bryan,

*I'm interested!*

Continuing...

On Tue, Aug 10, 2010 at 11:27 AM, Bryan Bishop kanz...@gmail.com wrote:

 On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote:

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.


 You might be interested in this - I've been putting together an
 adopt-a-lab-rat program that is actually an adoption program for lab mice.


... then it is an adopt-a-mouse program?

I don't know if you are a *Pinky and the Brain* fan, but calling your
project something like *The Pinky Project* would be catchy.

In some cases mice that are used as a control group in experiments are then
 discarded at the end of the program because, honestly, their lifetime is
 over more or less, so the idea is that some people might be interested in
 adopting these mice.


I had several discussions with the folks at the U of W whose job it was to
euthanize those mice. Their worries seemed to center on two areas:
1.  Financial liability, e.g. a mouse bites a kid, whose finger becomes
infected and...
2.  Social liability, e.g. some kids who are torturing them put their videos
on the Internet.

Of course, you can also just pony up the $15 and get one from Jackson Labs.


Not the last time I checked. They are very careful NOT to sell them to
exactly the same population that I intend to supply them to - high-school
kids. I expect that if I became a middleman, they would simply stop
selling to me. Even I would have a hard time purchasing them, because they
only sell to genuine LABS.

I haven't fully launched adopt-a-lab-rat yet because I am still trying to
 figure out how to avoid ending up in a situation where I have hundreds of
 rats and rodents running around my apartment and I get the short end of the
 stick (oops).


*What is your present situation and projections? How big a volume could you
supply? What are their approximate ages? Do they have really good
documentation? Were they used in any way that might compromise anti-aging
experiments, e.g. raised in a nicer-than-usual-laboratory environment? Do
you have any liability concerns as discussed above?
*

Mice in the wild live ~4 years. Lab mice live ~2 years. If you take a young
lab mouse and do everything you can to extend its life, you can approach 4
years. If you take an older lab mouse and do everything you can, you double
the REMAINDER of their life, e.g. starting with a one-year-old mouse, you
could get it to live ~3 years. How much better (or worse) than this you do
is the basis on which the Methuselah Mouse people judge results.
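
That doubling rule is simple arithmetic; a minimal sketch in Python,
assuming the ~2-year lab baseline quoted above:

LAB_LIFESPAN_YEARS = 2.0  # typical lab mouse, per the figures above

def projected_lifespan(age_at_start):
    # Double the REMAINDER of the lab lifespan from the age at which
    # the anti-aging intervention begins.
    remainder = LAB_LIFESPAN_YEARS - age_at_start
    return age_at_start + 2 * remainder

print(projected_lifespan(0.0))  # 4.0 - a young mouse approaches the wild ~4 years
print(projected_lifespan(1.0))  # 3.0 - a one-year-old mouse reaches ~3 years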

Hence, really good documentation is needed to establish when they were born,
and when they left a laboratory environment. Tattoos or tags link the mouse
to the paperwork. If I/you/we are to get kids to compete to develop better
anti-aging methods, the mice need to be documented well enough to PROVE
beyond a shadow of a doubt that the kids did what they claimed they did.
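
A minimal sketch of the kind of record that would do the job (all field
names are invented for illustration; no real registry is implied):

from dataclasses import dataclass

@dataclass
class MouseRecord:
    tag_id: str      # tattoo or ear-tag number linking the mouse to its paperwork
    born: str        # date of birth, recorded by the source lab
    left_lab: str    # date the mouse left the laboratory environment
    source_lab: str
    prior_use: str   # e.g. "control group, untreated"

record = MouseRecord("T-0147", "2010-01-05", "2010-07-20",
                     "U of W", "control group, untreated")
print(record.tag_id, record.born)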

Steve





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
This seems to be an overly simplistic view of AGI from a mathematician. It's
kind of funny how people overemphasize what they know or depend on their
current expertise too much when trying to solve new problems.

I don't think it makes sense to apply sanitized and formal mathematical
solutions to AGI. What reason do we have to believe that the problems we
face when developing AGI are solvable by such formal representations? What
reason do we have to think we can represent the problems as an instance of
such mathematical problems?

We have to start with the specific problems we are trying to solve, analyze
what it takes to solve them, and then look for and design a solution.
Starting with the solution and trying to hack the problem to fit it is not
going to work for AGI, in my opinion. I could be wrong, but I would need
some evidence to think otherwise.

Dave

On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

 You probably could show that a sophisticated mathematical structure would
 produce a scalable AGI program if it is true, using contemporary mathematical
 models to simulate it.  However, if scalability were completely dependent on
 some as yet undiscovered mathemagical principle, then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems with
 contemporary AGI.  So I believe this could be demonstrated on a simulation.
 That means that I could demonstrate effective AGI that works so long as the
 SAT problems are easily solved.  If the program reported that a complicated
 logical problem could not be solved, the user could provide his insight into
 the problem at those times to help with the problem.  This would not work
 exactly as hoped, but by working from there, I believe that I would be able
 to determine better ways to develop such a program so it would work better -
 if my conjecture about the potential efficacy of polynomial time SAT for AGI
 was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.
 It's
 just that it is a *very* sophisticated and dynamic mathematical structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
 instance you can apply an idea to your own thinking in such a way that you
 are capable of (gradually) changing how you think about something.  This
 means that an idea can be a compression of some greater change in your own
 programming.  While the idea in this example would be associated with a
 fairly strong notion of meaning, since you cannot accurately understand the
 full consequences of the change it would be somewhat vague at first.  (It
 could be a very precise idea capable of having strong effect, but the
 details of those effects would not be known until the change had
 progressed.)

 I think the more important question is how a general concept can be
 interpreted across a range of different kinds of ideas.  Actually this is
 not so difficult, but what I am getting at is how are sophisticated
 conceptual interrelations integrated and resolved?
 Jim










Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Ben,

Genescient has NOT paralleled human mating habits that would predictably
shorten life. They have only started from a point well beyond anything
achievable in the human population, and gone on from there. Hence, while
their approach may find some interesting things, it is unlikely to find the
things that are now killing our elderly population.

Continuing...

On Tue, Aug 10, 2010 at 11:59 AM, Ben Goertzel b...@goertzel.org wrote:




 I should dredge up and forward past threads with them. There are some
 flaws in their chain of reasoning, so that it won't be all that simple to
 sort the few relevant from the many irrelevant mutations. There is both a
 huge amount of noise, and irrelevant adaptations to their environment and
 their treatment.


 They have evolved many different populations in parallel, using the same
 fitness criterion.  This provides powerful noise filtering


Multiple measurements improve the S/N ratio by the square root of the number
of measurements. Hence, if they were to develop 100 parallel populations,
they could expect to improve their S/N ratio by 10:1. They haven't done 100
parallel populations, and they need much better than 10:1 improvement to the
S/N ratio.
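
The square-root law is easy to check numerically; a small sketch with
synthetic data, assuming independent Gaussian noise on each population's
measurement:

import math
import random

def snr_gain(n_populations, trials=2000):
    # Averaging n independent noisy measurements of the same signal
    # shrinks the noise standard deviation by ~sqrt(n).
    noise_sd = 1.0
    errors = []
    for _ in range(trials):
        avg = sum(random.gauss(0.0, noise_sd)
                  for _ in range(n_populations)) / n_populations
        errors.append(avg)
    residual_sd = math.sqrt(sum(e * e for e in errors) / trials)
    return noise_sd / residual_sd

print(round(snr_gain(100)))  # ~10: 100 populations buy only a ~10:1 gain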

Of course, this is all aside from the fact that their signal is wrong
because of the different mating habits.


 Even when the relevant mutations are eventually identified, it isn't clear
 how that will map to usable therapies for the existing population.


 yes, that's a complex matter



 Further, most of the things that kill us operate WAY too slowly to affect
 fruit flies, though there are some interesting dual-affecting problems.


 Fruit flies get all the major ailments that kill people frequently, except
 cancer: heart disease, neurodegenerative disease, respiratory problems,
 immune problems, etc.


Curiously, the list of conditions that they DO exhibit appears to be the
SAME list as people with reduced body temperatures exhibit. This suggests
simply correcting elderly people's body temperatures as they crash. Then,
where do we go from there?

Note that as you get older, your risk of contracting cancer rises
dramatically - SO dramatically that the odds of you eventually contracting
it are ~100%. Meanwhile, the risks of the other diseases DECREASE as you get
older past a certain age, so if you haven't contracted them by ~80, then you
probably never will contract them.

Scientific American had an article a while back about people in Israel who
are 100 years old. At ~100, your risk of dying during each following year
DECREASES with further advancing age!!! This strongly suggests some
early-killers, that if you somehow escape them, you can live for quite a
while. Our breeding practices would certainly invite early-killers. Of
course, only a very tiny segment of the population lives to be 100.


 As I have posted in the past, what we have here in the present human
 population is about the equivalent of a fruit fly population that was bred
 for the shortest possible lifespan.


 Certainly not.


??? Not what?


 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


Where? References? The last I looked, all they had in addition to their
long-lived groups were uncontrolled control groups, and no groups bred only
from young flies.

In any case, the sociology of humans is SO much different from that of
fruit flies, and breeding practices interact heavily with sociology, e.g.
the bright colorings of birds, beards (which I have commented on before),
etc. In short, I would expect LOTS of mutations from young-bred groups, but
entirely different mutations in people than in fruit flies.

I suspect that there is LOTS more information in the DNA of healthy people
over 100 than there is in any population of fruit flies. Perhaps data from
fruit flies could then be used to reduce the noise from the limited human
population who lives to be 100?
thing out, I sure haven't seen it. Sure there is probably lots to be learned
from genetic approaches, but Genescient's approach seems flawed by its
simplicity.

The challenge here is as always. The value of such research to us is VERY
high, yet there is no meaningful funding. If/when an early AI becomes
available to help in such efforts, there simply won't be any money available
to divert it away from defense (read that: offense) work.

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


 Where? References? The last I looked, all they had in addition to their
 long-lived groups were uncontrolled control groups, and no groups bred only
 from young flies.



Michael Rose's UCI lab has evolved flies specifically for short lifespan,
but the results may not be published yet...





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
David,
I am not a mathematician although I do a lot
of computer-related mathematical work of course.  My remark was directed
toward John who had suggested that he thought that there is some
sophisticated mathematical subsystem that would (using my words here)
provide such a substantial benefit to AGI that its lack may be at the core
of the contemporary problem.  I was saying that unless this required
mathemagic, a scalable AGI system demonstrating the effectiveness of this
kind of mathematical advancement could probably be simulated using
contemporary mathematics.  This is not the same as saying that AGI is solvable by
sanitized formal representations any more than saying that your message is a
sanitized formal statement because it was dependent on a lot of computer
mathematics in order to send it.  In other words I was challenging John at
that point to provide some kind of evidence for his view.

I then went on to say that, for example, I think that fast SAT solutions
would make scalable AGI possible (that is, scalable up to a point that is
way beyond where we are now), and therefore I believe that I could create a
simulation of an AGI program to demonstrate what I am talking about.  (A
simulation is not the same as the actual thing.)

I didn't say, nor did I imply, that the mathematics would be all there is to
it.  I have spent a long time thinking about the problems of applying formal
and informal systems to 'real world' (or other world) problems and the
application of methods is a major part of my AGI theories.  I don't expect
you to know all of my views on the subject but I hope you will keep this in
mind for future discussions.
Jim Bromer

On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
 It's kind of funny how people over emphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.

 Dave

    On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

    You probably could show that a sophisticated mathematical structure
  would produce a scalable AGI program if it is true, using contemporary
  mathematical models to simulate it.  However, if scalability were
  completely dependent on some as yet undiscovered mathemagical principle,
  then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems
 with contemporary AGI.  So I believe this could be demonstrated on a
  simulation.  That means that I could demonstrate effective AGI that works
 so long as the SAT problems are easily solved.  If the program reported that
 a complicated logical problem could not be solved, the user could provide
 his insight into the problem at those times to help with the problem.  This
 would not work exactly as hoped, but by working from there, I believe that I
 would be able to determine better ways to develop such a program so it would
 work better - if my conjecture about the potential efficacy of polynomial
 time SAT for AGI was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.
 It's
 just that it is a *very* sophisticated and dynamic mathematical
 structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
  instance you can apply an idea to your own thinking in such a way that you
 are capable of (gradually) changing how you think about something.  This
 means that an idea can be a compression of some greater change in your own
 programming.  While the idea in this example would be associated with a
 fairly strong notion of meaning

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
Jim,

Fair enough. My apologies then. I just often see your posts on SAT or other
very formal math problems and got the impression that you thought this was
at the core of AGI's problems and that pursuing a fast solution to
NP-complete problems is the best way to solve it. At least, that was my
impression. So, my thought was that such formal methods don't seem to be a
complete solution at all and other factors, such as uncertainty, could make
such formal solutions ineffective or unusable. Which is why I said it's
important to analyze the requirements of the problem and then apply a
solution.

Dave

On Wed, Aug 11, 2010 at 1:02 PM, Jim Bromer jimbro...@gmail.com wrote:

 David,
 I am not a mathematician although I do a lot
 of computer-related mathematical work of course.  My remark was directed
  toward John who had suggested that he thought that there is some
  sophisticated mathematical subsystem that would (using my words here)
  provide such a substantial benefit to AGI that its lack may be at the core
  of the contemporary problem.  I was saying that unless this required
  mathemagic, a scalable AGI system demonstrating the effectiveness of this
  kind of mathematical advancement could probably be simulated using contemporary
 mathematics.  This is not the same as saying that AGI is solvable by
 sanitized formal representations any more than saying that your message is a
 sanitized formal statement because it was dependent on a lot of computer
 mathematics in order to send it.  In other words I was challenging John at
 that point to provide some kind of evidence for his view.

  I then went on to say that, for example, I think that fast SAT solutions
 would make scalable AGI possible (that is, scalable up to a point that is
 way beyond where we are now), and therefore I believe that I could create a
 simulation of an AGI program to demonstrate what I am talking about.  (A
 simulation is not the same as the actual thing.)

 I didn't say, nor did I imply, that the mathematics would be all there is
 to it.  I have spent a long time thinking about the problems of applying
 formal and informal systems to 'real world' (or other world) problems and
 the application of methods is a major part of my AGI theories.  I don't
 expect you to know all of my views on the subject but I hope you will keep
 this in mind for future discussions.
 Jim Bromer

  On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
  It's kind of funny how people overemphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.

 Dave

    On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

    You probably could show that a sophisticated mathematical structure
  would produce a scalable AGI program if it is true, using contemporary
  mathematical models to simulate it.  However, if scalability were
  completely dependent on some as yet undiscovered mathemagical principle,
  then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems
 with contemporary AGI.  So I believe this could be demonstrated on a
  simulation.  That means that I could demonstrate effective AGI that works
 so long as the SAT problems are easily solved.  If the program reported that
 a complicated logical problem could not be solved, the user could provide
 his insight into the problem at those times to help with the problem.  This
 would not work exactly as hoped, but by working from there, I believe that I
 would be able to determine better ways to develop such a program so it would
 work better - if my conjecture about the potential efficacy of polynomial
 time SAT for AGI was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

 On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose 
  johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't
 think

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.



I agree that disassociated theories have not proved to be very successful at
AGI, but then again what has?

I would use a mathematical method that gave me the number or percentage of
True cases that satisfy a propositional formula as a way to check the
internal logic of different combinations of logic-based conjectures.  Since
methods that can do this with logical variables for any logical system that
goes (a little) past 32 variables are feasible, the potential of this method
should be easy to check (although it would hit a rather low ceiling of
scalability).  So I do think that logic and other mathematical methods would
help in true AGI programs.  However, the other major problem, as I see it,
is one of application. And strangely enough, this application problem is so
pervasive that it means that you cannot even develop artificial opinions!
You can program the computer to jump on things that you expect it to see,
and you can program it to create theories about random combinations of
objects, but how could you have a true opinion without child-level
judgement?
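
For concreteness, the counting method described above can be brute-forced
at small scale; a sketch (exhaustive model counting over all 2**n
assignments, which is exactly why it hits a ceiling a little past 32
variables):

from itertools import product

def count_models(n_vars, formula):
    # Count, and give the fraction of, assignments satisfying `formula`,
    # where `formula` takes a tuple of booleans. Exhaustive: 2**n_vars cases.
    sat = sum(1 for assign in product([False, True], repeat=n_vars)
              if formula(assign))
    return sat, sat / 2 ** n_vars

# Example: (a or b) and (not a or c) over three variables.
f = lambda v: (v[0] or v[1]) and ((not v[0]) or v[2])
print(count_models(3, f))  # (4, 0.5)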

This may sound like frivolous philosophy but I think it really shows that
the starting point isn't totally beyond us.

Jim Bromer


On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
  It's kind of funny how people overemphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve, analyze
 what it takes to solve them, and then look for and design a solution.
 Starting with the solution and trying to hack the problem to fit it is not
 going to work for AGI, in my opinion. I could be wrong, but I would need
 some evidence to think otherwise.

 Dave

    On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

    You probably could show that a sophisticated mathematical structure
  would produce a scalable AGI program if it is true, using contemporary
  mathematical models to simulate it.  However, if scalability were
  completely dependent on some as yet undiscovered mathemagical principle,
  then you couldn't.

 For example, I think polynomial time SAT would solve a lot of problems
 with contemporary AGI.  So I believe this could be demonstrated on a
  simulation.  That means that I could demonstrate effective AGI that works
 so long as the SAT problems are easily solved.  If the program reported that
 a complicated logical problem could not be solved, the user could provide
 his insight into the problem at those times to help with the problem.  This
 would not work exactly as hoped, but by working from there, I believe that I
 would be able to determine better ways to develop such a program so it would
 work better - if my conjecture about the potential efficacy of polynomial
 time SAT for AGI was true.

 Jim Bromer

 On Mon, Aug 9, 2010 at 6:11 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

  -Original Message-
  From: Jim Bromer [mailto:jimbro...@gmail.com]
 
   how would these diverse examples
  be woven into highly compressed and heavily cross-indexed pieces of
  knowledge that could be accessed quickly and reliably, especially for
 the
  most common examples that the person is familiar with.

 This is a big part of it and for me the most exciting. And I don't think
 that this subsystem would take up millions of lines of code either.
 It's
 just that it is a *very* sophisticated and dynamic mathematical
 structure
 IMO.

 John



 Well, if it was a mathematical structure then we could start developing
 prototypes using familiar mathematical structures.  I think the structure
 has to involve more ideological relationships than mathematical.  For
  instance you can apply an idea to your own thinking

[agi] Scalable vs Diversifiable

2010-08-11 Thread Mike Tintner
Isn't it time that people started adopting true AGI criteria?

The universal endlessly repeated criterion here that a system must be capable 
of being scaled up is a narrow AI criterion.

The proper criterion is diversifiable. If your system can, say, navigate a 
DARPA car through a grid of city streets, it's AGI if it's diversifiable - or 
rather can diversify itself - if it can then navigate its way through a forest, 
or a strange maze - without being programmed anew. A system is AGI if it can 
diversify from one kind of task/activity to another different kind - as humans 
and animals do - without being additionally programmed. Scale is irrelevant 
and deflects attention from the real problem.




Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Jim Bromer
I don't feel that a non-programmer can actually define what true AGI
criteria would be.  The problem is not just oriented around a consumer
definition of a goal, because it involves a fundamental comprehension of the
tools available to achieve that goal.  I appreciate your idea that AGI has
to be diversifiable but your inability to understand certain things that are
said about computer programming makes your proclamation look odd.
Jim Bromer

On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't it time that people started adopting true AGI criteria?

 The universal endlessly repeated criterion here that a system must be
 capable of being scaled up is a narrow AI criterion.

  The proper criterion is diversifiable. If your system can, say, navigate a
 DARPA car through a grid of city streets, it's AGI if it's diversifiable -
 or rather can diversify itself - if it can then navigate its way through a
 forest, or a strange maze - without being programmed anew. A system is AGI
 if it can diversify from one kind of task/activity to another different kind
  - as humans and animals do - without being additionally programmed. Scale
 is irrelevant and deflects attention from the real problem.






Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Jim Bromer
I think I may understand where the miscommunication occurred.  When we talk
about scaling up an AGI program we are - of course - referrring to improving
on an AGI program that can work effectively with a very limited amount of
referential knowledge so that it would be able to handle a much greater
diversification of referential knowledge.  You might say that is what
scalability means.
Jim Bromer

On Wed, Aug 11, 2010 at 2:43 PM, Jim Bromer jimbro...@gmail.com wrote:

 I don't feel that a non-programmer can actually define what true AGI
 criteria would be.  The problem is not just oriented around a consumer
 definition of a goal, because it involves a fundamental comprehension of the
 tools available to achieve that goal.  I appreciate your idea that AGI has
 to be diversifiable but your inability to understand certain things that are
 said about computer programming makes your proclamation look odd.
 Jim Bromer

On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't it time that people started adopting true AGI criteria?

 The universal endlessly repeated criterion here that a system must be
 capable of being scaled up is a narrow AI criterion.

  The proper criterion is diversifiable. If your system can, say, navigate a
 DARPA car through a grid of city streets, it's AGI if it's diversifiable -
 or rather can diversify itself - if it can then navigate its way through a
 forest, or a strange maze - without being programmed anew. A system is AGI
 if it can diversify from one kind of task/activity to another different kind
  - as humans and animals do - without being additionally programmed. Scale
 is irrelevant and deflects attention from the real problem.








Re: [agi] Scalable vs Diversifiable

2010-08-11 Thread Mike Tintner
To respond in kind, you along with virtually all AGI-ers show an inability to 
understand or define the problems of AGI - i.e. the end-problems that an AGI 
must face, the problems of creativity vs rationality. You only actually deal 
in standard, narrow AI problems.

If you don't understand what a new machine must do, all your technical 
knowledge of machines to date may be irrelevant. And in your case, I can't 
think of any concerns of yours like complexity that have anything to do with 
AGI problems at all - nor have you ever tried to relate them to any actual AGI 
problems.

So we're well-matched in inability - except that in creative matters, knowledge 
of the problems-to-be-solved always takes priority over knowledge of entirely 
irrelevant solutions.



From: Jim Bromer 
Sent: Wednesday, August 11, 2010 7:43 PM
To: agi 
Subject: Re: [agi] Scalable vs Diversifiable


I don't feel that a non-programmer can actually define what true AGI criteria 
would be.  The problem is not just oriented around a consumer definition of a 
goal, because it involves a fundamental comprehension of the tools available to 
achieve that goal.  I appreciate your idea that AGI has to be diversifiable but 
your inability to understand certain things that are said about computer 
programming makes your proclamation look odd.
Jim Bromer


On Wed, Aug 11, 2010 at 2:26 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Isn't it time that people started adopting true AGI criteria?

  The universal endlessly repeated criterion here that a system must be capable 
of being scaled up is a narrow AI criterion.

  The proper criterion is diversifiable. If your system can, say, navigate a 
DARPA car through a grid of city streets, it's AGI if it's diversifiable - or 
rather can diversify itself - if it can then navigate its way through a forest, 
or a strange maze - without being programmed anew. A system is AGI if it can 
diversify from one kind of task/activity to another different kind - as humans 
and animals do - without being additionally programmed. Scale is irrelevant 
and deflects attention from the real problem.





Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
I've made two ultra-brilliant statements in the past few days.  One is that
a concept can simultaneously be both precise and vague.  And the other is
that without judgement even opinions are impossible.  (Ok, those two
statements may not be ultra-brilliant but they are brilliant right?  Ok,
maybe not truly brilliant,  but highly insightful and
perspicuously intelligent... Or at least interesting to the cognoscenti
maybe?.. Well, they were interesting to me at least.)

Ok, these two interesting-to-me comments made by me are interesting because
they suggest that we do not know how to program a computer even to create
opinions.  Or if we do, there is a big untapped difference between those
programs that show nascent judgement (perhaps only at levels relative to the
domain of their capabilities) and those that don't.

This is AGI programmer's utopia.  (Or at least my utopia).  Because I need
to find something that is simple enough for me to start with and which can
lend itself to develop and test theories of AGI judgement and scalability.
By allowing an AGI program to participate more in the selection of its own
primitive 'interests' we will be able to interact with it, both as
programmer and as user, to guide it toward selecting those interests which
we can understand and seem interesting to us.  By creating an AGI program
that has a faculty for primitive judgement (as we might envision such an
ability), and then testing the capabilities in areas where the program seems
to work more effectively, we might be better able to develop more
powerful AGI theories that show greater scalability, so long as we are able
to understand what interests the program is pursuing.

Jim Bromer

On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.



 I agree that disassociated theories have not proved to be very successful
 at AGI, but then again what has?

 I would use a mathematical method that gave me the number or percentage of
 True cases that satisfy a propositional formula as a way to check the
 internal logic of different combinations of logic-based conjectures.  Since
 methods that can do this with logical variables for any logical system that
  goes (a little) past 32 variables are feasible, the potential of this method
 should be easy to check (although it would hit a rather low ceiling of
 scalability).  So I do think that logic and other mathematical methods would
 help in true AGI programs.  However, the other major problem, as I see it,
 is one of application. And strangely enough, this application problem is so
  pervasive that it means that you cannot even develop artificial opinions!
 You can program the computer to jump on things that you expect it to see,
 and you can program it to create theories about random combinations of
 objects, but how could you have a true opinion without child-level
 judgement?

 This may sound like frivolous philosophy but I think it really shows that
 the starting point isn't totally beyond us.

 Jim Bromer


  On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 This seems to be an overly simplistic view of AGI from a mathematician.
  It's kind of funny how people overemphasize what they know or depend on
 their current expertise too much when trying to solve new problems.

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.

 Dave

    On Wed, Aug 11, 2010 at 10:39 AM, Jim Bromer jimbro...@gmail.com wrote:

    You probably could show that a sophisticated mathematical structure
  would produce a scalable AGI program if it is true, using contemporary
  mathematical models to simulate it.  However, if scalability

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread David Jones
Slightly off the topic of your last email. But all this discussion has made
me realize how to phrase something... That is that solving AGI requires
understanding the constraints that problems impose on a solution. So, it's
sort of an unbelievably complex constraint satisfaction problem. What we've been
talking about is how we come up with solutions to these problems when we
sometimes aren't actually trying to solve any of the real problems. As I've
been trying to articulate lately is that in order to satisfy the constraints
of the problems AGI imposes, we must really understand the problems we want
of the problems AGI imposes, we must really understand the problems we want
us do not do this because the problem is so complex, that we refuse to
attempt to understand all of its constraints. Instead we focus on something
very small and manageable with fewer constraints. But, that's what creates
narrow AI, because the constraints you have developed the solution for only
apply to a narrow set of problems. Once you try to apply it to a different
problem that imposes new, incompatible constraints, the solution fails.
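
For readers unfamiliar with the term, here is a constraint satisfaction
problem in miniature (a generic textbook backtracking sketch in Python,
nothing AGI-specific). Narrow AI, in the framing above, is a solution tuned
to one small constraint set like this one.

def solve_csp(variables, domains, constraints, assignment=None):
    # Plain backtracking: assign variables one at a time and back up
    # whenever a constraint is violated by the partial assignment.
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy instance: three tasks must all get different time slots.
variables = ["t1", "t2", "t3"]
domains = {v: [1, 2, 3] for v in variables}
def all_different(a):
    vals = list(a.values())
    return len(vals) == len(set(vals))
print(solve_csp(variables, domains, [all_different]))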

So, lately I've been pushing for people to truly analyze the problems
involved in AGI, step by step to understand what the constraints are. I
think this is the only way we will develop a solution that is guaranteed to
work without wasting undue time in trial and error. I don't think trial and
error approaches will work. We must know what the constraints are, instead
of guessing at what solutions might approximate the constraints. I think the
problem space is too large to guess.

Of course, I think acquisition of knowledge through automated means is the
first step in understanding these constraints. But, unfortunately, few agree
with me.

Dave

On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer jimbro...@gmail.com wrote:

 I've made two ultra-brilliant statements in the past few days.  One is that
 a concept can simultaneously be both precise and vague.  And the other is
 that without judgement even opinions are impossible.  (Ok, those two
 statements may not be ultra-brilliant but they are brilliant right?  Ok,
 maybe not truly brilliant,  but highly insightful and
 perspicuously intelligent... Or at least interesting to the cognoscenti
 maybe?.. Well, they were interesting to me at least.)

 Ok, these two interesting-to-me comments made by me are interesting because
 they suggest that we do not know how to program a computer even to create
 opinions.  Or if we do, there is a big untapped difference between those
 programs that show nascent judgement (perhaps only at levels relative to the
 domain of their capabilities) and those that don't.

 This is AGI programmer's utopia.  (Or at least my utopia).  Because I need
 to find something that is simple enough for me to start with and which can
 lend itself to develop and test theories of AGI judgement and scalability.
 By allowing an AGI program to participate more in the selection of its own
 primitive 'interests' we will be able to interact with it, both as
 programmer and as user, to guide it toward selecting those interests which
 we can understand and seem interesting to us.  By creating an AGI program
 that has a faculty for primitive judgement (as we might envision such an
 ability), and then testing the capabilities in areas where the program seems
 to work more effectively, we might be better able to develop more
 powerful AGI theories that show greater scalability, so long as we are able
 to understand what interests the program is pursuing.

 Jim Bromer

 On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do we have to think we can represent the problems as an instance of
 such mathematical problems?

 We have to start with the specific problems we are trying to solve,
 analyze what it takes to solve them, and then look for and design a
 solution. Starting with the solution and trying to hack the problem to fit
 it is not going to work for AGI, in my opinion. I could be wrong, but I
 would need some evidence to think otherwise.



 I agree that disassociated theories have not proved to be very successful
 at AGI, but then again what has?

 I would use a mathematical method that gave me the number or percentage of
 True cases that satisfy a propositional formula as a way to check the
 internal logic of different combinations of logic-based conjectures.  Since
 methods that can do this with logical variables for any logical system that
  goes (a little) past 32 variables are feasible, the potential of this method
 should be easy to check (although it would hit a rather low ceiling of
 scalability).  So I do think that logic

Re: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-11 Thread Jim Bromer
I guess what I was saying was that I can test my mathematical theory and my
theories about primitive judgement both at the same time by trying to find
those areas where the program seems to be good at something.  For example, I
found that it was easy to write a program that found outlines where there
was some contrast between a solid object and whatever was in the background
or whatever was in the foreground.  Now I, as an artist could use that to
create interesting abstractions.  However, that does not mean that an AGI
program that was supposed to learn and acquire greater judgement based on my
ideas for a primitive judgement would be able to do that.  Instead, I would
let it do what it seemed good at, so long as I was able to appreciate what
it was doing.  Since this would lead to something - a next step at least - I
could use this to test my theory that a good, more general SAT solution would
be useful as well.
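
A contrast-based outline finder of the kind described takes only a few
lines; a sketch over a plain 2-D brightness grid (pure Python; a fixed
threshold on neighbour differences stands in for real edge detection):

def find_outline(image, threshold=0.5):
    # Mark pixels whose brightness jumps sharply against a neighbour,
    # i.e. where a solid object contrasts with back- or foreground.
    h, w = len(image), len(image[0])
    outline = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(image[y][x] - image[ny][nx]) > threshold:
                    outline[y][x] = True
    return outline

# A dark square on a light background: the marked pixels trace its border.
img = [[0.1] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 0.9
for row in find_outline(img):
    print("".join("#" if p else "." for p in row))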
Jim Bromer

On Wed, Aug 11, 2010 at 3:57 PM, David Jones davidher...@gmail.com wrote:

 Slightly off the topic of your last email. But, all this discussion has
  made me realize how to phrase something... That is that solving AGI requires
  understanding the constraints that problems impose on a solution. So, it's
  sort of an unbelievably complex constraint satisfaction problem. What we've been
 talking about is how we come up with solutions to these problems when we
 sometimes aren't actually trying to solve any of the real problems. As I've
 been trying to articulate lately is that in order to satisfy the constraints
 of the problems AGI imposes, we must really understand the problems we want
  to solve and how they can be solved (their constraints). I think that most of
 us do not do this because the problem is so complex, that we refuse to
 attempt to understand all of its constraints. Instead we focus on something
 very small and manageable with fewer constraints. But, that's what creates
 narrow AI, because the constraints you have developed the solution for only
 apply to a narrow set of problems. Once you try to apply it to a different
 problem that imposes new, incompatible constraints, the solution fails.

 So, lately I've been pushing for people to truly analyze the problems
 involved in AGI, step by step to understand what the constraints are. I
 think this is the only way we will develop a solution that is guaranteed to
  work without wasting undue time in trial and error. I don't think trial and
 error approaches will work. We must know what the constraints are, instead
 of guessing at what solutions might approximate the constraints. I think the
 problem space is too large to guess.

 Of course, I think acquisition of knowledge through automated means is the
 first step in understanding these constraints. But, unfortunately, few agree
 with me.

 Dave

 On Wed, Aug 11, 2010 at 3:44 PM, Jim Bromer jimbro...@gmail.com wrote:

 I've made two ultra-brilliant statements in the past few days.  One is
 that a concept can simultaneously be both precise and vague.  And the other
 is that without judgement even opinions are impossible.  (Ok, those two
 statements may not be ultra-brilliant but they are brilliant right?  Ok,
 maybe not truly brilliant,  but highly insightful and
 perspicuously intelligent... Or at least interesting to the cognoscenti
 maybe?.. Well, they were interesting to me at least.)

 Ok, these two interesting-to-me comments made by me are interesting
 because they suggest that we do not know how to program a computer even to
 create opinions.  Or if we do, there is a big untapped difference between
 those programs that show nascent judgement (perhaps only at levels relative
 to the domain of their capabilities) and those that don't.

 This is AGI programmer's utopia.  (Or at least my utopia).  Because I need
 to find something that is simple enough for me to start with and which can
 lend itself to develop and test theories of AGI judgement and scalability.
 By allowing an AGI program to participate more in the selection of its own
 primitive 'interests' we will be able to interact with it, both as
 programmer and as user, to guide it toward selecting those interests which
 we can understand and seem interesting to us.  By creating an AGI program
 that has a faculty for primitive judgement (as we might envision such an
 ability), and then testing the capabilities in areas where the program seems
 to work more effectively, we might be better able to develop more
 powerful AGI theories that show greater scalability, so long as we are able
 to understand what interests the program is pursuing.

 Jim Bromer

 On Wed, Aug 11, 2010 at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:

 I don't think it makes sense to apply sanitized and formal mathematical
 solutions to AGI. What reason do we have to believe that the problems we
 face when developing AGI are solvable by such formal representations? What
 reason do

Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Steve Richfield
Ben,

It seems COMPLETELY obvious (to me) that almost any mutation would shorten
lifespan, so we shouldn't expect to learn much from it. What particular
lifespan-shortening mutations are in the human genome wouldn't be expected
to be the same, or even similar, across separated human populations. Hmmm, an
interesting thought: I wonder if certain racially mixed people have shorter
lifespans because they have several disjoint sets of such mutations?!!! Any
idea where to find such data?

It has long been noticed that some racial subgroups do NOT have certain
age-related illnesses, e.g. Japanese don't have clogged arteries, but they
DO have lots of cancer. So far everyone has been blindly presuming diet, but
seeking a particular level of genetic disaster could also explain it.

Any thoughts?

Steve

On Wed, Aug 11, 2010 at 8:06 AM, Ben Goertzel b...@goertzel.org wrote:


 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


 Where? References? The last I looked, all they had in addition to their
 long-lived groups were uncontrolled control groups, and no groups bred only
 from young flies.



  Michael Rose's UCI lab has evolved flies specifically for short lifespan,
 but the results may not be published yet...







RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
Well both. Though much of the control could be remote depending on
bandwidth. 

 

Also, one robot could benefit from the eyes of many as they would all be
internetworked to a degree.

 

John

 

From: Ian Parker [mailto:ianpark...@gmail.com] 



Your remarks about WiFi echo my own view. Should a robot rely on an external
connection (WiFi) or should it have complex processing on board itself?

 

In general we try to keep real-time response information local, although
local may be viewed in terms of c, the speed of light. If a PC is 150 m
away from a robot, this is a 300 m round trip which will take a
microsecond. Accessing the Web for a program will, of course, take
considerably longer.
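
The figure is just distance over c; a one-function check in Python (signals
assumed to travel at light speed, protocol overhead ignored):

C = 299_792_458.0  # speed of light in m/s

def round_trip_us(distance_m):
    # Round-trip signal time, in microseconds, to a processor distance_m away.
    return 2 * distance_m / C * 1e6

print(round_trip_us(150))  # ~1.0 - about a microsecond, as stated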

 

A μsec is nothing even when we are considering time-critical functions like
balance. However, for balance it might be a good idea either to have the
robot do its own balancing, or else to have a card inserted into the PC.

 

This is one topic for which I have not been able to get a satisfactory
discussion or answer. People who build robots tend to think in terms of
having the processing power on the robot. This I believe is wrong.

 

 

  - Ian Parker

On 10 August 2010 00:06, John G. Rose johnr...@polyplexic.com wrote:

Aww, so cute.

 

I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Nao's all
collecting personal data in a massive multi-agent geo-distributed
robo-network.

 

So cuddly!

 

And I wonder if it receives and executes commands, commands that come in
over the network from whatever interested corporation or government pays the
most for access.

 

Such a sweet little friendly Nao. Everyone should get one :)

 

John

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 

 

An unusually sophisticated ( somewhat expensive) promotional robot vid:

 

 
http://www.telegraph.co.uk/technology/news/7934318/Nao-the-robot-that-expresses-and-detects-emotions.html




RE: [agi] Compressed Cross-Indexed Concepts

2010-08-11 Thread John G. Rose
 -Original Message-
 From: Jim Bromer [mailto:jimbro...@gmail.com]
 
 
 Well, if it were a mathematical structure then we could start developing
 prototypes using familiar mathematical structures. I think the structure
 has to involve more ideological relationships than mathematical ones.

The ideological would still need to be expressed mathematically.

 For instance
 you can apply an idea to your own thinking in such a way that you are
 capable of (gradually) changing how you think about something.  This means
 that an idea can be a compression of some greater change in your own
 programming.

Mmm yes or like a key.

 While the idea in this example would be associated with a
 fairly strong notion of meaning, since you cannot accurately understand
the
 full consequences of the change it would be somewhat vague at first.  (It
 could be a very precise idea capable of having strong effect, but the
details of
 those effects would not be known until the change had progressed.)
 

Yes. It would need to have receptors, an affinity, something like that, or
somehow enable an efficiency change.

 I think the more important question is how a general concept can be
 interpreted across a range of different kinds of ideas.  Actually this is
 not so difficult, but what I am getting at is: how are sophisticated
 conceptual interrelations integrated and resolved?
 Jim

Depends on the structure. We would want to build it such that this happens
at various levels, or across the various multidimensional densities. But at
the same time, complex state is preserved until proven benefits show themselves.

John







RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
I wasn't meaning to portray pessimism.

 

And that little sucker probably couldn't pick up a knife yet.

 

But a paradigm change is happening in which we will have many networked
mechanical entities. This opens up a whole new world of security and privacy
issues...

 

John

 

From: David Jones [mailto:davidher...@gmail.com] 



Way too pessimistic in my opinion. 







Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Steve Richfield
Ben,

On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel b...@goertzel.org wrote:


 I'm speaking there, on Ai applied to life extension; and participating in a
 panel discussion on narrow vs. general AI...

 Having some interest, expertise, and experience in both areas, I find it
hard to imagine much interplay at all.

The present challenge is wrapped up in a lack of basic information,
resulting from insufficient funds to do the needed experiments.
Extrapolations have already gone WAY beyond the data, and new methods to
push extrapolations even further wouldn't be worth nearly as much as just a
little more hard data.

Just look at Aubrey's long list of aging mechanisms. We don't now even know
which predominate, or which cause others. Further, there are new candidates
arising every year, e.g. Burzynski's theory that most aging is secondary to
methylation of DNA receptor sites, or my theory that Aubrey's entire list
could be explained by people dropping their body temperatures later in life.
There are LOTS of other theories, and without experimental results, there is
absolutely no way, AI or not, to sort the wheat from the chaff.

Note that one of the front runners, the cosmic ray theory, could easily be
tested by simply raising some mice in deep tunnels. This is high-school
level stuff, yet with NO significant funding for aging research, it remains
undone.

Note my prior posting explaining my inability even to find a source of
used mice for kids to use in high-school anti-aging experiments, all while
university labs are now killing their vast numbers of such mice. So long as
things remain THIS broken, anything that isn't part of the solution simply
becomes a part of the very big problem, AIs included.

The best that an AI could seemingly do is to pronounce "Fund and facilitate
basic aging research" and then suspend execution pending an interrupt
indicating that the needed experiments have been done.

Could you provide some hint as to where you are going with this?

Steve





Re: [agi] Nao Nao

2010-08-10 Thread David Jones
Way too pessimistic in my opinion.







Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
I'm writing an article on the topic for H+ Magazine, which will appear in
the next couple weeks ... I'll post a link to it when it appears

I'm not advocating applying AI in the absence of new experiments of course.
I've been working closely with Genescient, applying AI tech to analyze the
genomics of their long-lived superflies, so part of my message is about the
virtuous cycle achievable via synergizing AI data analysis with
carefully-designed experimental evolution of model organisms...

-- Ben





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread David Jones
Steve,

Capable and effective AI systems would be very helpful at every step of the
research process. Basic research is a major area I think that AGI will be
applied to. In fact, that's exactly where I plan to apply it first.

Dave







Re: [agi] Compressed Cross-Indexed Concepts

2010-08-10 Thread Mike Tintner
[from: Concept-Rich Mathematics Instruction]

Teacher: Very good. Now, look at this drawing and explain what you see. [Draws.]
Debora: It's a pie with three pieces.
Teacher: Tell us about the pieces.
Debora: Three thirds.
Teacher: What is the difference among the pieces?
Debora: This is the largest third, and here is the smallest . . .

Sound familiar? Have you ever wondered why students often understand
mathematics in a very rudimentary and prototypical way, why even rich and
exciting hands-on types of active learning do not always result in real
learning of new concepts? From the psycho-educational perspective, these are
the critical questions. In other words, epistemology is valuable to the
extent that it helps us find ways to enable students who come with
preconceived and misconceived ideas to understand a framework of scientific
and mathematical concepts.

Constructivism: A New Perspective

At the dawn of behaviorism, constructivism became the most dominant
epistemology in education. The purest forms of this philosophy profess that
knowledge is not passively received either through the senses or by way of
communication, just as meaning is not explicitly out there for grabs.
Rather, constructivists generally agree that knowledge is actively built up
by a cognizing human who needs to adapt to what is fit and viable (von
Glasersfeld, 1995). Thus, there is no dispute among constructivists over the
premise that one's knowledge is in a constant state of flux because humans
are subject to an ever-changing reality (Jaworski, 1994, p. 16).

Although constructivists generally regard understanding as the outcome of an
active process, constructivists still argue over the nature of the process
of knowing. Is knowing simply a matter of recall? Does learning new concepts
reflect additive or structural cognitive changes? Is the process of knowing
concepts built from the bottom up, or can it be a top-down process? How does
new conceptual knowledge depend on experience? How does conceptual knowledge
relate to procedural knowledge? And, can teachers mediate conceptual
development?

Is Learning New Concepts Simply a Mechanism of Memorization and Recall?

Science and mathematics educators have become increasingly aware that our
understanding of conceptual change is at least as important as the analysis
of the concepts themselves. In fact, a plethora of research has established
that concepts are mental structures of intellectual relationships, not
simply a subject matter. The research indicates that the mental structures
of intellectual relationships that make up mental concepts organize human
experiences and human memory (Bartsch, 1998). Therefore, conceptual changes
represent structural cognitive changes, not simply additive changes. Based
on the research in cognitive psychology, the attention of research in
education has been shifting from the content (e.g., mathematical concepts)
to the mental predicates, language, and preconcepts. Despite the research,
many teachers continue to approach new concepts as if they were simply
add-ons to their students' existing knowledge - a subject of memorization
and recall. This practice may well be one of the causes of misconceptions
in mathematics.

Structural Cognitive Change

The notion of structural cognitive change, or schematic change, was first
introduced in the field of psychology (by Bartlett, who studied memory in
the 1930s). It became one of the basic tenets of constructivism. Researchers
in mathematics education picked up on this term and have been leaning
heavily on it since the 1960s, following Skemp (1962), Minsky (1975), and
Davis (1984). The generally accepted idea among researchers in the field, as
stated by Skemp (1986, p. 43), is that in mathematics, to understand
something is to assimilate it into an appropriate schema. A structural
cognitive change is not merely an appendage. It involves the whole network
of interrelated operational and conceptual schemata. Structural changes are
pervasive, central, and permanent.

The first characteristic of structural change refers to its pervasive
nature. That is, new experiences do not have a limited effect, but cause the
entire cognitive structure to rearrange itself. Vygotsky (1986, p. 167)
argued,

 It was shown and proved experimentally that mental development does not
 coincide with the development of separate psychological functions, but
 rather depends on changing relations between them. The development of each
 function, in turn, depends upon the progress in the development of the
 interfunctional system.



From: Jim Bromer 
Sent: Monday, August 09, 2010 11:11 PM
To: agi 
Subject: [agi] Compressed Cross-Indexed Concepts


On Mon, Aug 9, 2010 at 4:57 PM, John G. Rose johnr...@polyplexic.com wrote:

   -Original Message-
   From: Jim Bromer [mailto:jimbro...@gmail.com]
  
how would these diverse

Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Steve Richfield
Ben,

On Tue, Aug 10, 2010 at 8:44 AM, Ben Goertzel b...@goertzel.org wrote:


 I'm writing an article on the topic for H+ Magazine, which will appear in
 the next couple weeks ... I'll post a link to it when it appears

 I'm not advocating applying AI in the absence of new experiments of
 course.  I've been working closely with Genescient, applying AI tech to
 analyze the genomics of their long-lived superflies, so part of my message
 is about the virtuous cycle achievable via synergizing AI data analysis with
 carefully-designed experimental evolution of model organisms...


I should dredge up and forward past threads with them. There are some flaws
in their chain of reasoning, so that it won't be all that simple to sort the
few relevant from the many irrelevant mutations. There is both a huge amount
of noise, and irrelevant adaptations to their environment and their
treatment. Even when the relevant mutations are eventually identified, it
isn't clear how that will map to usable therapies for the existing
population.

Perhaps you remember the old Star Trek episode about the long-lived
population that was still locked in a war after hundreds of years? The
episode devolved into a dispute over the potential value of this discovery -
was there something valuable in the environment, or did they just evolve to
live longer? Here, the long-lived population isn't even human.

Further, most of the things that kill us operate WAY too slowly to affect
fruit flies, though there are some interesting dual-affecting problems.
Unfortunately, it isn't as practical to autopsy fruit flies as it is to
autopsy people to see what killed them.

As I have posted in the past, what we have here in the present human
population is about the equivalent of a fruit fly population that was bred
for the shortest possible lifespan. Our social practices could hardly do
worse. Our present challenge is to get to where fruit flies were before Rose
first bred them for long life.

I strongly suspect that we have some early-killer mutations, e.g. to kill
people off as quickly as possible after they pass child-bearing age, which
itself is probably being shortened through our bizarre social habit of
mating like-aged people. Genescient's approach holds no promise of
identifying THOSE genes, and identifying the other genes won't help at all
until those killer genes are first silenced.

In short, there are some really serious challenges to Genescient's approach.
I expect success for several other quarters long before Genescient bears
real-world usable fruit. I suspect that these challenges, along with the
ubiquitous shortage of funding will keep Genescient out of producing
real-world usable results pretty much forever.

Future AGI output: Fund aging research.

Update on studying more of Burzynski's papers: His is not a cancer cure at
all. What he is doing is removing gene-silencing methylation from the DNA,
and letting nature take its course, e.g. having the immune system kill the
cancer via apoptosis. In short, it is a real-world anti-aging approach
that has snuck in under the radar. OF COURSE any real-world working
anti-aging approach would kill cancer! How good is his present product? Who
knows? It sure looks to me like this is a valid approach, and I suspect that
any bugs will get worked out in time. WATCH THIS. This looks to me like it
will work in the real-world long before any other of the present popular
approaches stand a chance of working. After all, it sure seems to be working
on some people with really extreme gene silencing - called cancer.

Steve





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bob Mottram
On 10 August 2010 16:44, Ben Goertzel b...@goertzel.org wrote:
 I'm writing an article on the topic for H+ Magazine, which will appear in the 
 next couple weeks ... I'll post a link to it when it appears

 I'm not advocating applying AI in the absence of new experiments of course.  
 I've been working closely with Genescient, applying AI tech to analyze the 
 genomics of their long-lived superflies, so part of my message is about the 
 virtuous cycle achievable via synergizing AI data analysis with 
 carefully-designed experimental evolution of model organisms...




Probably if I were going to apply AI in a medical context I'd
prioritize those conditions which are both common and either fatal or
have a severe impact on quality of life.  Also worthwhile would be
using AI to try to discover drugs which have an equivalent effect to
existing known ones but can be manufactured at a significantly lower
cost, such that they are brought within the means of a larger fraction
of the population.  Investigating aging is perfectly legitimate, but
if you're trying to maximize your personal utility I'd regard it as a
low priority compared to other more urgent medical issues which cause
premature deaths.
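
As a toy version of that triage rule, here is a minimal sketch (Python;
every figure below is invented purely for illustration, and the burden
formula is one simple choice among many):

    # Toy research-triage heuristic: rank conditions by burden per unit cost.
    # Burden = prevalence * life-years lost; all numbers are invented.
    conditions = {
        "heart disease":  {"prevalence": 0.30,  "years_lost": 8.0,  "cost": 5.0},
        "rare disease X": {"prevalence": 0.001, "years_lost": 20.0, "cost": 4.0},
        "common cold":    {"prevalence": 0.90,  "years_lost": 0.01, "cost": 1.0},
    }

    def priority(name):
        d = conditions[name]
        return d["prevalence"] * d["years_lost"] / d["cost"]

    for name in sorted(conditions, key=priority, reverse=True):
        print("%-14s %.4f" % (name, priority(name)))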

Also, in the endeavor to extend life we need not focus entirely upon
medical aspects.  The organizational problem of delivering known
medications on a large scale is also something AI could perhaps
be used to optimize.  The way in which things like this are currently
organized seems to be based upon some combination of tradition and
intuitive hunches, so there may be low-hanging fruit to be obtained
here.  For example, if an epidemic breaks out, why should you
vaccinate first?  If you have access to a social graph (from Facebook,
or wherever) it's probably possible to calculate an optimal strategy.
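
As one hedged illustration of what such a calculation might look like, here
is a minimal sketch (Python with networkx; the greedy highest-degree
heuristic is a simple stand-in for a genuinely optimal strategy, and the
karate-club graph is a toy stand-in for a real social graph):

    # Targeted-immunization sketch: vaccinate the best-connected people first.
    import networkx as nx

    def vaccination_order(G, budget):
        """Return up to `budget` nodes to vaccinate, highest degree first."""
        ranked = sorted(G.degree, key=lambda node_deg: node_deg[1], reverse=True)
        return [node for node, _deg in ranked[:budget]]

    G = nx.karate_club_graph()      # toy stand-in for a social graph
    print(vaccination_order(G, 5))  # the five most-connected members

Hub-first vaccination is a well-studied heuristic for highly connected
networks; a real strategy would also weigh contact frequency and epidemic
dynamics.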




Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread David Jones
I think the biggest thing to remember here is that general AI could be
applied to many different problems in parallel by many different people.
It would help with many aspects of the problem-solving process, not just a
single one, and certainly not just be applied to a single experiment/study.

I'm confident that Ben is aware of this.








Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bryan Bishop
On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote:

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.


You might be interested in this: I've been putting together an
adopt-a-lab-rat program that is actually an adoption program for lab mice.
In some cases mice that are used as a control group in experiments are then
discarded at the end of the program because, honestly, their lifetime is
more or less over, so the idea is that some people might be interested in
adopting these mice. Of course, you can also just pony up the $15 and get
one from Jackson Labs. I haven't fully launched adopt-a-lab-rat yet because I
am still trying to figure out how to avoid ending up in a situation where I
have hundreds of rats and rodents running around my apartment and I get the
short end of the stick (oops).

- Bryan
http://heybryan.org/
1 512 203 0507





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bob Mottram
On 10 August 2010 18:43, Bob Mottram fuzz...@gmail.com wrote:
 here.  For example, if an epidemic breaks out, why should you
 vaccinate first?


That should have been who rather than why :-)

Just thinking a little further, in hand-waving mode: if something like
the common cold were added as a status within social networks, and
everyone was on the network, it might even be possible to eliminate
this disease simply by getting people to avoid those who are known to
have it for a certain period of time - a sort of internet-enabled
smart avoidance strategy.  This wouldn't be a cure, but it could
severely hamper the disease transmission mechanism, perhaps even to
the extent of driving it to extinction.




Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
 I should dredge up and forward past threads with them. There are some flaws
 in their chain of reasoning, so that it won't be all that simple to sort the
 few relevant from the many irrelevant mutations. There is both a huge amount
 of noise, and irrelevant adaptations to their environment and their
 treatment.


They have evolved many different populations in parallel, using the same
fitness criterion. This provides powerful noise filtering.


 Even when the relevant mutations are eventually identified, it isn't clear
 how that will map to usable therapies for the existing population.


yes, that's a complex matter



 Further, most of the things that kill us operate WAY too slowly to affect
 fruit flies, though there are some interesting dual-affecting problems.


Fruit flies get all the major ailments that frequently kill people, except
cancer: heart disease, neurodegenerative disease, respiratory problems,
immune problems, etc.



 As I have posted in the past, what we have here in the present human
 population is about the equivalent of a fruit fly population that was bred
 for the shortest possible lifespan.



Certainly not.  We have those fruit fly populations also, and analysis of
their genetics refutes your claim ;p ...



ben g





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread David Jones
Bob, there are serious issues with such a suggestion.

The biggest issue is that there is a good chance it wouldn't work, because
diseases, including the common cold, have incubation times. So you may not
have any symptoms at all, yet you can pass the disease on to other people.
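
To put a rough number on the incubation objection, a back-of-envelope
sketch (Python; all durations are assumptions chosen only for illustration):

    # Presymptomatic-spread sketch: if infectiousness begins before symptoms,
    # symptom-based avoidance misses part of the infectious window.
    latent_days = 1.0       # assumed: infectious starting on day 1
    incubation_days = 3.0   # assumed: symptoms appear on day 3
    infectious_end = 6.0    # assumed: infectiousness ends on day 6

    hidden = incubation_days - latent_days  # infectious but symptom-free
    total = infectious_end - latent_days    # whole infectious window
    print("%.0f%% of the infectious window is symptom-free" % (100 * hidden / total))

Under these made-up numbers, 40% of transmission opportunities occur before
any status update could flag the carrier.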

And even if we did know who was sick, are you really going to stay home for
2 weeks every time you get sick? If I were an employer, I would rather have
you come to work when you feel up to it.

Another point I've given to germaphobes: suppose you are successful at
avoiding as many germs as possible and rarely get sick. That means you
likely lack immunity to some common colds that you otherwise would have
acquired. So, when you are old and less capable, your immune system will
not be able to fight off the infection and you may die an early death.

Dave







RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
Actually this is quite critical.

 

Defining a chair - which would agree with each instance of a chair in the
supplied image - is the way a chair should be defined and is the way the
mind processes it.

 

It can be defined mathematically in many ways. There is a particular one I
would go for though...

 

John
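
John doesn't say which mathematical definition he would go for, but as one
hedged illustration, a chair can be written as a predicate over detected
parts and their relations (Python; the part detector is assumed away, and
every name and threshold below is hypothetical):

    # Toy relational pattern for "chair": a raised seat supported by legs.
    from dataclasses import dataclass

    @dataclass
    class Part:
        kind: str      # e.g. "seat", "leg", "back"
        height: float  # meters above the ground plane

    def is_chair(parts):
        """Rough structural predicate over a list of detected parts."""
        seats = [p for p in parts if p.kind == "seat"]
        legs = [p for p in parts if p.kind == "leg"]
        return any(0.2 < s.height < 0.8 for s in seats) and len(legs) >= 3

    print(is_chair([Part("seat", 0.45), Part("leg", 0.0), Part("leg", 0.0),
                    Part("leg", 0.0), Part("leg", 0.0)]))  # True

The point of a relational formulation is that it tolerates the freeform
variation Mike raises below: any arrangement of parts satisfying the
relations counts, whatever its exact shape.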

 

From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
Sent: Sunday, August 08, 2010 7:28 AM
To: agi
Subject: Re: [agi] How To Create General AI Draft2

 

You're waffling.

 

You say there's a pattern for chair - DRAW IT. Attached should help you.

 

Analyse the chairs given in terms of basic visual units. Or show how any
basic units can be applied to them. Draw one or two.

 

You haven't identified any basic visual units  - you don't have any. Do you?
Yes/no. 

 

No. That's not funny, that's a waste... And woolly and imprecise through
and through.

 

 

 

From: David Jones davidher...@gmail.com

Sent: Sunday, August 08, 2010 1:59 PM

To: agi agi@v2.listbox.com

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

We've argued about this over and over and over. I don't want to repeat
previous arguments to you.

You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to propose are your example
problems that *you* don't understand how to solve. Just because *you* cannot
solve them, doesn't mean they cannot be solved at all using a certain
methodology. So, who is really making wild assumptions?

The mere fact that you can refer to a chair means that it is a
recognizable pattern. LOL. That fact that you don't realize this is quite
funny. 

Dave

On Sun, Aug 8, 2010 at 8:23 AM, Mike Tintner tint...@blueyonder.co.uk
wrote:

Dave:No... it is equivalent to saying that the whole world can be modeled as
if everything was made up of matter

 

And matter is... ?  Huh?

 

You clearly don't realise that your thinking is seriously woolly - and you
will pay a heavy price in lost time.

 

What are your basic world/visual-world analytic units  wh. you are
claiming to exist?  

 

You thought - perhaps think still - that *concepts* wh. are pretty
fundamental intellectual units of analysis at a certain level, could be
expressed as, or indeed, were patterns. IOW there's a fundamental pattern
for chair or table. Absolute nonsense. And a radical failure to
understand the basic nature of concepts which is that they are *freeform*
schemas, incapable of being expressed either as patterns or programs.

 

You had merely assumed that concepts could be expressed as patterns,but had
never seriously, visually analysed it. Similarly you are merely assuming
that the world can be analysed into some kind of visual units - but you
haven't actually done the analysis, have you? You don't have any of these
basic units to hand, do you? If you do, I suggest, reply instantly, naming a
few. You won't be able to do it. They don't exist.

 

Your whole approach to AGI is based on variations of what we can call
fundamental analysis - and it's wrong. God/Evolution hasn't built the
world with any kind of geometric, or other consistent, bricks. He/It is a
freeform designer. You have to start thinking outside the
box/brick/fundamental unit.

 

From: David Jones davidher...@gmail.com

Sent: Sunday, August 08, 2010 5:12 AM

To: agi agi@v2.listbox.com

Subject: Re: [agi] How To Create General AI Draft2

 

Mike,

I took your comments into consideration and have been updating my paper to
make sure these problems are addressed. 

See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk
wrote:

1) You don't define the difference between narrow AI and AGI - or make clear
why your approach is one and not the other


I removed this because my audience is AI researchers... this is AGI 101.
I think it's clear that my design defines general as being able to handle
the vast majority of things we want the AI to handle without requiring a
change in design.
 

 

2) Learning about the world won't cut it -  vast nos. of progs. claim they
can learn about the world - what's the difference between narrow AI and AGI
learning?


The difference is in what you can or can't learn about and what tasks you
can or can't perform. If the AI is able to receive input about anything it
needs to know about in the same formats that it knows how to understand and
analyze, it can reason about anything it needs to.
 

 

3) Breaking things down into generic components allows us to learn about
and handle the vast majority of things we want to learn about. This is what
makes it general!

 

Wild assumption, unproven or at all demonstrated and untrue.


You are only right that I haven't demonstrated it. I will address this in
the next paper and continue adding details over the next few drafts.

As a simple argument against your counter argument... 

If that were true that we could not understand the world using a limited

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ian Parker
What about DESTIN? Jim has talked about video. Could DESTIN be generalized
to 3 dimensions, or even n dimensions?


  - Ian Parker


Re: RE: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
I agree, John, that this is a useful exercise. This would be a good discussion
if Mike would ever admit that I might be right and he might be wrong. I'm
not sure that will ever happen though. :) First he says I can't define a
pattern that works. Then, when I do, he says the pattern is no good because
it isn't physical. Lol. If he would ever admit that I might have gotten it
right, the discussion would be a good one. Instead, he hugs his preconceived
notions no matter how good my arguments are and finds yet another reason,
any reason will do, to say I'm still wrong.


Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
John: It can be defined mathematically in many ways

Try it - crude drawings/jottings/diagrams totally acceptable. See my set of 
photos to Dave.

(And yes, you're right this is of extreme importance. And no, Dave, there are 
no such things as non-physical patterns.)



Re: RE: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Dave,

You offer nothing to even attend to.

The questions completely unanswered by you are:

1. What basic visual units of analysis have you arrived at? (You say there are 
such things - you must have arrived at something, no?) - zero answer.

2. What kind of physical/visual *pattern* informs our concept of chair? - zero 
answer. A non-physical pattern, pace you, is a non-existent entity/figment of 
your mind (just as the pattern of divine grace is) - and yet another 
non-answer.

You're supposed to be doing visual AGI - put up something visual in answer to 
the questions, or, I suggest, keep quiet.


From: David Jones 
Sent: Monday, August 09, 2010 11:55 AM
To: agi 
Subject: Re: RE: [agi] How To Create General AI Draft2


I agree, John, that this is a useful exercise. This would be a good discussion
if Mike would ever admit that I might be right and he might be wrong. I'm not
sure that will ever happen though. :) First he says I can't define a pattern
that works. Then, when I do, he says the pattern is no good because it isn't
physical. Lol. If he would ever admit that I might have gotten it right, the
discussion would be a good one. Instead, he hugs his preconceived notions no
matter how good my arguments are and finds yet another reason, any reason will
do, to say I'm still wrong.


  On Aug 9, 2010 2:18 AM, John G. Rose johnr...@polyplexic.com wrote:


  Actually this is quite critical.



  Defining a chair - which would agree with each instance of a chair in the 
supplied image - is the way a chair should be defined and is the way the mind 
processes it.



  It can be defined mathematically in many ways. There is a particular one I 
would go for though...



  John



  From: Mike Tintner [mailto:tint...@blueyonder.co.uk] 
  Sent: Sunday, August 08, 2010 7:28 AM 


  To: agi
  Subject: Re: [agi] How To Create General AI Draft2




  You're waffling.



  You say there's a pattern for "chair" - DRAW IT. Attached should help you.



  Analyse the chairs given in terms of basic visual units. Or show how any 
basic units can be applied to them. Draw one or two.



  You haven't identified any basic visual units - you don't have any. Do you?
Yes/no.



  No. That's not funny, that's a waste. And woolly and imprecise through and
through.







  From: David Jones 

  Sent: Sunday, August 08, 2010 1:59 PM

  To: agi 

  Subject: Re: [agi] How To Create General AI Draft2



  Mike,

  We've argued about this over and over and over. I don't want to repeat 
previous arguments to you.

  You have no proof that the world cannot be broken down into simpler concepts
and components. The only proof you attempt to offer is your example problems
that *you* don't understand how to solve. Just because *you* cannot solve them
doesn't mean they cannot be solved at all using a certain methodology. So, who
is really making wild assumptions?

  The mere fact that you can refer to a chair means that it is a recognizable
pattern. LOL. The fact that you don't realize this is quite funny.

  Dave


Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Jim Bromer
The mind cannot determine whether or not -every- instance of a kind
of object is that kind of object.  I believe that the problem must be a
problem of complexity and it is just that the mind is much better at dealing
with complicated systems of possibilities than any computer program.  A
young child first learns that certain objects are called chairs, and that
the furniture objects that he sits on are mostly chairs.  In a few cases,
after seeing an odd object that is used as a chair for the first time (like
seeing an odd outdoor chair that is fashioned from twisted pieces of wood)
he might not know that it is a chair, or upon reflection wonder if it is or
not.  And think of odd furniture that appears and comes into fashion for a
while and then disappears (like the bean bag chair).  The question for me is
not what the smallest pieces of visual information necessary to represent
the range and diversity of kinds of objects are, but how these diverse
examples would be woven into highly compressed and heavily cross-indexed
pieces of knowledge that could be accessed quickly and reliably, especially
for the most common examples that the person is familiar with.
Jim Bromer







Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
You see, this is precisely why I don't want to argue with Mike anymore. "It
must be a physical pattern." LOL. Who ever said that patterns must be
physical? This is exactly why you can't see my point of view. You impose
unnecessary restrictions on any possible solution when there really are no
such restrictions.

Dave

On Mon, Aug 9, 2010 at 7:27 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  John: "It can be defined mathematically in many ways"

 Try it - crude drawings/jottings/diagrams totally acceptable. See my set of
 photos to Dave.

 (And yes, you're right, this is of extreme importance. And no, Dave, there
 are no such things as non-physical patterns.)







Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
PS Examples of nonphysical patterns AND how they are applicable to visual AGI?





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
I already stated these. Read previous emails.



Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Examples of nonphysical patterns?




Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
No, you didn't. You're being evasive through and through.

You haven't answered the questions put to you in any shape or form other than
"nonphysical" - and never will. Nor do you have any answer. Finis.




Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
Mike,

Quoting a previous email:

QUOTE

In fact, the chair patterns you refer to are not strictly physical
patterns. The pattern is based on how the objects can be used, what their
intended uses probably are, and what their most common effective uses are.

So, chairs are objects that are used to sit on. You can identify objects
whose most likely use is for sitting based on experience.

END QUOTE


Even refrigerators can be chairs. If a fridge is in the woods and you're out
there camping, you can sit on it. I could say "sit on that fridge couch over
there." The fact that multiple people can sit on it makes it possible to
call it a couch.

But, it's odd to call it a chair, because it's a fridge. So, when the object
has a more common effective use, as I stated above, it is usually referred
to by that use. If something is most likely used for sitting by a single
person, then it is a chair. If its most common best use is something else,
like cooling food, you would call it a fridge.

So, maybe the pattern would be: if it has some features like a chair -
possible arm rests, a soft bottom, cushions, legs, a back rest, etc. - and
you can't see it being used as anything else, then maybe it's a chair. If
someone sits on it, it certainly is a chair; if you find it by searching for
chairs, it's likely a chair; etc.

You see, chairs are not simply recognized by their physical structure. There
are multiple ways you can recognize one, and it is certainly important to
know that the object doesn't seem useful for another task.

The idea that chairs cannot be recognized because they come in all shapes,
sizes and structures is just wrong.
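
Taken literally, this features-plus-use heuristic can be sketched in a few
lines. The feature names, weights, and threshold below are all invented for
illustration - a toy reading of the idea, not anything from the paper under
discussion:

  # Hypothetical sketch of the features-plus-use heuristic for "chair".
  # Feature names, weights, and the threshold are invented for illustration.
  CHAIR_FEATURES = {"arm-rests": 1, "soft-bottom": 1, "cushions": 1,
                    "legs": 2, "back-rest": 2, "sittable-surface": 3}

  def chair_score(features, observed_uses, better_use_known=False):
      score = sum(w for f, w in CHAIR_FEATURES.items() if f in features)
      if "sat-on-by-one-person" in observed_uses:
          score += 5       # an observed use dominates structure
      if better_use_known:
          score -= 5       # a fridge is "most commonly" a fridge
      return score

  def looks_like_chair(features, observed_uses, better_use_known=False):
      return chair_score(features, observed_uses, better_use_known) >= 5

  # The fridge in the woods: sittable, but its more common effective use wins.
  print(looks_like_chair({"sittable-surface"}, {"sat-on-by-one-person"},
                         better_use_known=True))    # 3 + 5 - 5 = 3 -> False
  print(looks_like_chair({"legs", "back-rest", "sittable-surface"},
                         set()))                    # 2 + 2 + 3 = 7 -> True

The only point of the sketch is that observed use can outweigh structure, and
a better-known use can veto the chair reading - exactly the fridge case above.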

Dave


Powered by Listbox: http://www.listbox.com


Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Jim Bromer
Mike Tintner tint...@blueyonder.co.uk wrote:

  How do you reckon that will work for an infant or anyone who has only
 seen an example or two of the concept class-of-forms?


I do not reckon that it will work for an infant or anyone (or anything) who
(or that) has only seen an example or two of the concept class-of-forms.  I
haven't looked at your photos, but I did indicate that learning has to be
able to advance with new kinds of objects of a kind.  My previous comment
specifically dealt with the problem of learning to recognize radically
different instances of the kind.

There was once a time when it was thought that domain-specific AI, using
general methods of reasoning, would be more feasible than general AI.  This
optimism was not borne out by experiment.  The question is why not?  I
believe that domain specific AI needs to rely on so much general knowledge
(AGI) as a base, that until a certain level of success in AGI is achieved,
narrower domain specific AI will be limited to calculation-based reasoning
and the like (as in closed taxonomic AI or simple neural networks).

A similar situation occurred in space travel.  At the dawn of the space age
some people intuitively thought that traveling to the moon would be 2000
times more difficult than sending a space vehicle up 100 miles (since it
was 2000 times further away), so if it took 10 years to get to the point
where they could get a space capsule up 100 miles, it would take 20,000
years to reach the moon.  It didn't work that way because, as the leading
experts realized, getting away from earth's gravity results in a significant
and geometric decrease in the force needed to continue.  Because this fact
was not intuitive to the naive critic, it wasn't completely grasped by many
people until the first space vehicle escaped earth orbit a few years after
the first space shots.

I think a similar situation probably is at the center of the feasibility of
basic AGI.  As more and more examples are learned, storing and accessing
that information in a wise and intelligent manner becomes more and more
elusive.  But, for example, if domain specific
information is dependent on a certain level of general knowledge, then you
won't see domain specific AI really take off until that level of AGI becomes
feasible.  Why would this relationship occur?  Because each time you double
*all* knowledge (as is implied by a doubling of general knowledge) you have
a progressively more complicated load on the computer.  So to double that
general knowledge twice, you would have to create an AGI program that was
capable of dealing with four times as much complexity.  To double that
general knowledge again, you would have to create an AGI program that would
have to deal with 8 times the complexity as your first prototype.  Once you
get your AGI program to work at a certain level of complexity, then your
domain-specific AI program might start to take off and you would see the
kind of dazzling results which would make the critics more wary of
expressing their skepticism.
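
Written out, the assumption above - that the complexity load grows in
proportion to total knowledge - comes to a one-line formula. After d
doublings of general knowledge,

  K(d) = K_0 \cdot 2^d, \qquad C(d) \propto K(d) \;\Rightarrow\; C(d) = C_0 \cdot 2^d,

so d = 1, 2, 3 doublings demand 2, 4, and 8 times the complexity handled by
the first prototype - the 4x and 8x figures above.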

Jim Bromer

On Mon, Aug 9, 2010 at 8:13 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  How do you reckon that will work for an infant or anyone who has only
 seen an example or two of the concept class-of-forms?

 (You're effectively misreading the set of photos - although this needs making
 clear - a major point of the set is: how will any concept/schema of chair,
 derived from any set of particular kinds of chairs, cope with a radically
 new kind of chair? Just saying "well, let's analyse the chairs we have" is
 not an answer. You can take it for granted that the new chair will have
 some feature[s]/form that constitutes a radical departure from existing
 ones (as is amply illustrated by my set of photos). And yet your - an AGI's -
 mind can normally adapt and recognize the new object as a chair.)


Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
Hi David,

I read the essay

I think it summarizes well some of the key issues involving the bridge
between perception and cognition, and the hierarchical decomposition of
natural concepts

I find the ideas very harmonious with those of Jeff Hawkins, Itamar Arel,
and other researchers focused on hierarchical deep learning approaches to
vision with longer-term AGI ambitions

I'm not sure there are any dramatic new ideas in the essay.  Do you think
there are?

My own view is that these ideas are basically right, but handle only a
modest percentage of what's needed to make a human-level, vaguely human-like
AGI.  I.e., I don't agree that solving vision and the vision-cognition
bridge is *such* a huge part of AGI, though it's certainly a nontrivial
percentage...


-- Ben G

On Fri, Aug 6, 2010 at 4:44 PM, David Jones davidher...@gmail.com wrote:

 Hey Guys,

 I've been working on writing out my approach to create general AI to share
 and debate it with others in the field. I've attached my second draft of it
 in PDF format, if you guys are at all interested. It's still a work in
 progress and hasn't been fully edited. Please feel free to comment,
 positively or negatively, if you have a chance to read any of it. I'll be
 adding to and editing it over the next few days.

 I'll try to reply more professionally than I have been lately :) Sorry :S

 Cheers,

 Dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
Mike,

The concept of "chair" is not an isolated concept by itself. It is also not
recognized using a single simple schema. People have seen many chair
instances in their lives and are able to learn their features and
affordances. We are able to compare their features and structures.

So, when we see another chair, we are not just comparing a single
constructed schema. We compare the features to features we've seen before,
and we analyze the uses of the structures and how similar they are to other
objects we've seen before. What might it be used for? Put just about
anything with a concave shape on the floor with three or four legs and you
can call it a chair. LOL. You see, there are physical features and patterns
that do make it possible to consider that maybe a new object might be a
chair, but it is by no means some schema set in stone. We just find
something that works well. And I've given you plenty of ways to think about
it that would suggest ways of solving the problem that would work well.
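
One minimal way to make "compare the features to features we've seen before"
concrete is nearest-neighbor comparison over stored feature sets. Everything
below - the Jaccard overlap measure, the names, the threshold - is an
illustrative assumption, not a design stated anywhere in this thread:

  # Hypothetical sketch: classify a new object by feature overlap with
  # stored exemplars (nearest neighbor under Jaccard similarity).
  def jaccard(a, b):
      return len(a & b) / len(a | b) if a | b else 0.0

  SEEN = [  # (kind, features) pairs accumulated from experience
      ("chair",  {"legs", "seat", "back-rest"}),
      ("chair",  {"concave-shape", "four-legs", "on-floor"}),
      ("fridge", {"box-shape", "door", "cools-food"}),
  ]

  def classify(features, threshold=0.3):
      kind, sim = max(((k, jaccard(features, f)) for k, f in SEEN),
                      key=lambda p: p[1])
      return kind if sim >= threshold else "unknown"

  print(classify({"concave-shape", "four-legs", "on-floor"}))  # -> "chair"
  print(classify({"antenna", "wheels"}))                       # -> "unknown"

Nothing here is set in stone: adding the odd new chair as a fresh exemplar
immediately extends what the comparison can match, which is the flexibility
being claimed.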

So, to say that I must create this perfect schema to prove that AGI is
possible is dumb and unreasonable. I can get you a close description of a
schema that would recognize it. But I certainly cannot write the program
out for you. It involves knowledge, which involves lots of supporting
algorithms to construct and use. It seems that no matter how much detail I
give you, you can't read between the lines.

So, give it a rest, Mike. It is clearly possible to do. How exactly it is
done is yet to be determined. This is why I say in my paper it is important
to start with raw data, because it is unrealistic and unrepresentative to
construct solutions that don't use knowledge and try to solve the problem
without the right knowledge.

Human beings do not recognize chairs in a vacuum. A lot of knowledge and
experience goes into it. Some of the things in your Google example images
would not be recognized as chairs by people if given out of context. So, to
force an AI to recognize them all with 100% accuracy is unreasonable. That's
why I don't like arguing with you. You are unreasonable and will never admit
that you're wrong.

Dave


On Mon, Aug 9, 2010 at 10:21 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  You're somewhat confused here (and now that you're answering, one can see
 why - and make progress).

 The "use" of, or "to use", a chair involves a physical class of forms -
 bottoms or other objects have to make physical contact with - sit on - the
 chair/fridge etc. Everything we're talking about is physical and can only be
 conceived of physically and, relative to our discussion, visually.

 And you clearly don't see that you have still not identified any kind of
 physical schema/framework for either "chair" or "sitting" or anything
 else.

 And that is what a visual AGI must do - use some kind of physical schema -
 in order to recognize an object as a chair or the action of an object as
 sitting.

 [Note I use "schema/framework" rather than "pattern" - the former are
 more general terms, the latter much more specific (and mathematical).  I
 suspect that you may be using "pattern" here confusedly in the
 popular/nonmathematical sense, which is more akin to "schema". But you and
 all other AGI-ers actually deal computationally in math patterns, and it is
 that sense that I am addressing].

 When you claim that there is a pattern to chair[s], you are making a
 mathematical claim - and it is completely indefensible. (Show me otherwise,
 John). And that is perhaps the most central issue of AGI. So it is worth
 consideration.

 You also seem to be confused about my position - which, BTW, as I've pointed
 out, is backed by at least one significant AGI-er. I am NOT suggesting
 conceptualisation/object recognition cannot be done - just not done by
 your and others' 100%-record-of-failure mathematical methods. (I'm almost
 tempted to say a blind idiot could see that :) ).

 I'm suggesting that the brain uses fluid schemas to recognize objects (and
 concepts) - fluidly stretchable (and editable) schemas. When we say "by no
 stretch of the imagination can that be recognized/classified as a chair",
 we are unconsciously indicating the underlying process of object recognition
 - one of stretching image schemas to match incoming objects.

 If you want an inspirational image of a fluid schema, think strings - as
 in string theory - those oscillating strings which are supposed to be
 capable of making any shape of particle or object. (I'm too ignorant to know
 how precisely the brain's image schemas and nature's theoretical string
 schemas can be aligned - comments welcome - but there seems to be a loose
 aptness and even beauty in the comparison. It would be rather wonderful if
 mind and matter are conceived/work on similar principles).

 If you want both evidence and a concrete example of how fluid and
 stretchable the brain's schemas can be - think of what the schema must be
 like for "one" or "1". Well, something like a line obviously

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread David Jones
Thanks Ben,

I think the biggest difference with the way I approach it is to be
deliberate in how the system solves specific kinds of problems. I haven't
gone into that in detail yet though.

For example, Itamar seems to want to give the AI the basic building blocks
that make up spatiotemporal dependencies as a sort of bag of features and
just let a neural-net-like structure find the patterns. If that is not
accurate, please correct me. I am very skeptical of such approaches because
there is no guarantee at all that the system will properly represent the
relationships and structure of the data. It seems merely hopeful to me that
such a system would get it right out of the vast number of possible results
it could accidentally arrive at.

The human visual system doesn't evolve like that on the fly. This can be
proven by the fact that we all see the same visual illusions. We all exhibit
the same visual limitations in the same way. There is much evidence that the
system doesn't evolve accidentally. It has a limited set of rules it uses to
learn from perceptual data.

I think a more deliberate approach would be more effective because we can
understand why it does what it does, how it does it, and why it's not working
if it doesn't work. With such deliberate approaches, it is much more clear
how to proceed and how to reuse knowledge in many complementary ways. This is
what I meant by emergence.

I propose a more deliberate approach that knows exactly why problems can be
solved a certain way and how the system is likely to solve them.

I'm suggesting that we represent the spatiotemporal relationships
deliberately and explicitly. Then we can construct general algorithms to
solve problems explicitly, yet generally.
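
As a toy illustration of what explicit spatiotemporal relationships could
look like - the relation vocabulary and every name below are assumptions made
up for this sketch, not the actual design:

  # Hypothetical sketch of an explicit spatiotemporal relation store.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Relation:
      subject: str   # object id, e.g. "square-1"
      kind: str      # spatial or temporal relation type, e.g. "left-of"
      other: str     # the related object id
      t: int         # frame/time index at which the relation holds

  class SceneModel:
      def __init__(self):
          self.relations = set()

      def assert_rel(self, subject, kind, other, t):
          self.relations.add(Relation(subject, kind, other, t))

      def query(self, kind=None, t=None):
          # General algorithms read the explicit relations directly.
          return [r for r in self.relations
                  if (kind is None or r.kind == kind)
                  and (t is None or r.t == t)]

  scene = SceneModel()
  scene.assert_rel("square-1", "left-of", "square-2", t=0)
  scene.assert_rel("square-1", "touching", "square-2", t=1)
  print(scene.query(kind="touching"))

Because the relations are explicit, a general algorithm can query them
directly instead of hoping that a learned structure happens to encode them.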

Regarding computer vision not being that important... Don't you think that,
because knowledge is so essential and manual input is ineffective,
perception-based acquisition of knowledge is a very serious barrier to AGI?
It seems to me that the solutions to AGI problems being constructed are not
using knowledge gained from simulated perception effectively. OpenCog's
natural language processing, for example, seems to use very little
knowledge that would be gathered from visual perception. As far as I
remember, it mostly uses things that are learned from other sources. To me,
it doesn't make sense to spend so much time debugging and developing such
solutions when a better and more general approach to language understanding
would use a lot of knowledge.

Those are the sorts of things I feel are new to this approach.

Thanks Again,

Dave

PS: I'm planning to go to the Singularity Summit :) Last minute. Hope to see
you there.



Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Ben: "I don't agree that solving vision and the vision-cognition bridge is
*such* a huge part of AGI, though it's certainly a nontrivial percentage"

Presumably because you don't envisage your AGI/computer as an independent 
entity? All its info. is going to have to be entered into it in a specially 
prepared form - and it's still going to be massively and continuously dependent 
on human programmers?

Humans and real AGIs receive virtually all their info. - certainly all their
internet info - through heavily visual processing (with obvious exceptions like
sound). You can't do maths and logic if you can't see them, and they have
visual forms - equations and logic have visual form and use visual
ideogrammatic as well as visual numerical signs.

Just which intelligent problem-solving operations is your AGI going to do that
do NOT involve visual processing OR - the alternative - massive human
assistance to substitute for that processing?





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
On Mon, Aug 9, 2010 at 11:42 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Ben: I don't agree that solving vision and the vision-cognition bridge is
 *such* a huge part of AGI, though it's certainly a nontrivial percentage

 Presumably because you don't envisage your AGI/computer as an independent
 entity? All its info. is going to have to be entered into it in a specially
 prepared form - and it's still going to be massively and continuously
 dependent on human programmers?


I envisage my AGI as an independent entity, ingesting information from the
world in a similar manner to how humans do (as well as through additional
senses not available to humans)

You misunderstood my statement.  I think that vision and the
vision-cognition bridge are important for AGI, but I think they're only a
moderate portion of the problem, and not the hardest part...








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel

 The human visual system doesn't evolve like that on the fly. This can be
 proven by the fact that we all see the same visual illusions. We all exhibit
 the same visual limitations in the same way. There is much evidence that the
 system doesn't evolve accidentally. It has a limited set of rules it uses to
 learn from perceptual data.



That is not a proof, of course.  It could be that, given a general
architecture and inputs with certain statistical properties, the same
internal structures inevitably self-organize.




 I think a more deliberate approach would be more effective because we can
 understand why it does what it does, how it does it, and why its not working
 if it doesn't work. With such deliberate approaches, it is much more clear
 how to proceed and to reuse knowledge in many complementary ways. This is
 what I meant by emergence.



I understand the general concept.  I am reminded a bit of Poggio's
hierarchical visual cortex simulations -- which do attempt to emulate the
human brain's specific processing, on a neuronal cluster and inter-cluster
connectivity level.

However, Poggio hasn't yet solved the problem of making this kind of
deliberately-engineered hierarchical vision network incorporate
cognition-to-perception feedback.  At this stage it seems basically a
feedforward system.
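
For concreteness, the feedforward-versus-feedback distinction can be
caricatured in a dozen lines. This is an invented toy, not Poggio's model or
any design from this thread; the only point is where a top-down signal would
enter:

  # Invented toy of a two-layer recognition hierarchy with a top-down hook.
  def detect_edges(row):
      # Layer 1 (feedforward): positions where intensity changes.
      return [i for i in range(1, len(row)) if row[i] != row[i - 1]]

  def detect_shapes(edges, expectation=None):
      # Layer 2: pair up edges into segments and label them by width.
      shapes = []
      for i in range(0, len(edges) - 1, 2):
          width = edges[i + 1] - edges[i]
          shapes.append(("wide" if width > 1 else "thin", edges[i], edges[i + 1]))
      if expectation is not None:
          # Top-down feedback: a higher layer's expectation re-ranks what
          # the lower layer reports, instead of treating candidates equally.
          shapes.sort(key=lambda s: s[0] != expectation)
      return shapes

  row = [0, 0, 1, 1, 0, 0, 1, 0]
  edges = detect_edges(row)
  print(detect_shapes(edges))                        # pure feedforward
  print(detect_shapes(edges, expectation="thin"))    # with top-down expectation

In a purely feedforward system the expectation argument simply does not
exist; cognition-to-perception feedback means the higher layers get to
reshape what the lower layers report.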

So I'm curious

-- what are the specific pattern-recognition modules that you will put into
your system, and how will you arrange them hierarchically?

-- how will you handle feedback connections (top-down) among the modules?

thx
ben





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Mike Tintner
Ben: "I think that vision and the vision-cognition bridge are important for
AGI, but I think they're only a moderate portion of the problem, and not the
hardest part..."

Which is?






