RE: [agi] Pretty worldchanging

2010-07-25 Thread John G. Rose
You have to give a toast, though, to Net entities like Wikipedia; I'd dare say it is
one of humankind's greatest achievements. Then eventually, over a few years,
it'll be available as a plug-in, a virtual trepan, reducing the effort of
subsuming all that. And then maybe structural intelligence add-ins, so that
abstract concepts need not be learned by medieval rote conditioning. These
humanity features are not far off. So instead of a $35 laptop, a fifty-cent
liqua chip could be injected as a prole inoculation/augmentation.

 

John

 

From: Boris Kazachenko [mailto:bori...@verizon.net] 
Sent: Saturday, July 24, 2010 5:50 PM
To: agi
Subject: Re: [agi] Pretty worldchanging

 

Maybe there are some students on this email list, who are wading through all
the BS and learning something about AGI, by following links and reading
papers mentioned here, etc.  Without the Net, how would these students learn
about AGI, in practice?  Such education would be far harder to come by and
less effective without the Net.  That's world-changing... ;-) ...

The Net saves time. Back in the day, one could spend a lifetime sifting
through paper in the library, or traveling the world to meet authorities.
Now you do some googling, realize that no one has a clue, and go on to do some
real work on your own. That's if you have the guts, of course.

intelligence-as-a-cognitive-algorithm
http://knol.google.com/k/intelligence-as-a-cognitive-algorithm



 






Re: RE: [agi] Pretty worldchanging

2010-07-25 Thread boris.k
Wikipedia doesn't allow original research. Understandable, but that excludes
people who don't have time to waste on getting established. Abstract concepts
can't be learned by conditioning, only the practically useful ones. Structural
intelligence add-ins? Those will obsolete your brain, no need to add to it.

http://knol.google.com/k/intelligence-as-a-cognitive-algorithm#

You have to give a toast, though, to Net entities like Wikipedia; I'd dare say it is
one of humankind's greatest achievements. Then eventually, over a few years,
it'll be available as a plug-in, a virtual trepan, reducing the effort of
subsuming all that. And then maybe structural intelligence add-ins, so that
abstract concepts need not be learned by medieval rote conditioning. These
humanity features are not far off. So instead of a $35 laptop, a fifty-cent
liqua chip could be injected as a prole inoculation/augmentation.

John

From: Boris Kazachenko 
[mailto:bori...@verizon.net] 
Sent: Saturday, July 24, 2010 5:50 PM
To: agi
Subject: Re: [agi] Pretty worldchanging

Maybe there are some students on this email list, who are wading through all
the BS and learning something about AGI, by following links and reading papers
mentioned here, etc.  Without the Net, how would these students learn about
AGI, in practice?  Such education would be far harder to come by and less
effective without the Net.  That's world-changing... ;-) ...

The Net saves time. Back in the day, one could spend a lifetime sifting through
paper in the library, or traveling the world to meet authorities. Now you do
some googling, realize that no one has a clue, and go on to do some real work
on your own. That's if you have the guts, of course.

intelligence-as-a-cognitive-algorithm




RE: [agi] Clues to the Mind: Illusions / Vision

2010-07-25 Thread John G. Rose
Here is an example of superimposed images where you need a predisposed
perception:

 

http://www.youtube.com/watch?v=V1m0kCdC7co

 

John

 

From: deepakjnath [mailto:deepakjn...@gmail.com] 
Sent: Saturday, July 24, 2010 11:03 PM
To: agi
Subject: [agi] Clues to the Mind: Illusions / Vision

 

http://www.youtube.com/watch?v=QbKw0_v2clo

What we see is not really just what the eye sees; it's what we see plus what we
know we are seeing. The brain superimposes predicted images onto the viewed
image to form the actual perception.

cheers,
Deepak



 






Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
I believe that trans-infinite would mean that there is no recursively
enumerable algorithm that could 'reach' every possible item in the
trans-infinite group.



Since each program in Solomonoff Induction, written for a Universal Turing
Machine, could be written on a single roll of tape, that means that every
possible combination of programs could also be written on the tape.  They
would therefore be recursively enumerable, just as they could be put in
one-to-one correspondence with aleph-null (the counting numbers).



But, unfortunately for my criticism, there are algorithms that could reach
any finite combination of things which means that even though you could not
determine the ordering of programs that would be necessary to show that the
probabilities of each string approach a limit, it would be possible to write
an algorithm that could show trends, given infinite resources.  This doesn't
mean that the probabilities would approach the limit, it just means that if
they did, there would be an infinite algorithm that could make the best
determination given the information that the programs had already produced.
This would be a necessary step of a theoretical (but still
non-constructivist) proof.



So I can't prove that Solomonoff Induction is inherently trans-infinite.



I am going to take a few weeks to see if I can determine if the idea of
Solomonoff Induction makes hypothetical sense to me.
Jim Bromer



On Sat, Jul 24, 2010 at 6:04 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Jim Bromer wrote:
  Solomonoff Induction may require a trans-infinite level of complexity
 just to run each program.

 Trans-infinite is not a mathematically defined term as far as I can tell.
 Maybe you mean larger than infinity, as in the infinite set of real numbers
 is larger than the infinite set of natural numbers (which is true).

 But it is not true that Solomonoff induction requires more than aleph-null
 operations. (Aleph-null is the size of the set of natural numbers, the
 smallest infinity). An exact calculation requires that you test aleph-null
 programs for aleph-null time steps each. There are aleph-null programs
 because each program is a finite length string, and there is a 1 to 1
 correspondence between the set of finite strings and N, the set of natural
 numbers. Also, each program requires aleph-null computation in the case that
 it runs forever, because each step in the infinite computation can be
 numbered 1, 2, 3...

 However, the total amount of computation is still aleph-null because each
 step of each program can be described by an ordered pair (m,n) in N^2,
 meaning the n'th step of the m'th program, where m and n are natural
 numbers. The cardinality of N^2 is the same as the cardinality of N because
 there is a 1 to 1 correspondence between the sets. You can order the ordered
 pairs as (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2),
 (4,1), (1,5), etc. See
 http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction

 Furthermore you may approximate Solomonoff induction to any desired
 precision with finite computation. Simply interleave the execution of all
 programs as indicated in the ordering of ordered pairs that I just gave,
 where the programs are ordered from shortest to longest. Take the shortest
 program found so far that outputs your string, x. It is guaranteed that this
 algorithm will approach and eventually find the shortest program that
 outputs x given sufficient time, because this program exists and it halts.

 In case you are wondering how Solomonoff induction is not computable, the
 problem is that after this algorithm finds the true shortest program that
 outputs x, it will keep running forever and you might still be wondering if
 a shorter program is forthcoming. In general you won't know.
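
A rough Python sketch may make the interleaving above concrete. Everything in
it is a toy stand-in: the hand-made list of "programs" with made-up lengths
replaces the real enumeration of programs for a universal Turing machine, and
the step budget stands in for "given sufficient time"; only the (program, step)
ordering follows the pairing described above.

    from itertools import count

    def diagonal_pairs():
        # Enumerate N^2 as (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), ...
        # where (m, n) means "the n'th step of the m'th program".
        for s in count(2):                # s = m + n
            for m in range(1, s):
                yield (m, s - m)

    def approximate_solomonoff(x, programs, budget=10000):
        # programs: list of (length, factory); factory() returns a generator
        # that yields None on each intermediate step and a string if it halts.
        # Returns (length, index) of the shortest program found so far that
        # outputs x, or None if none has been found within the step budget.
        gens = [(length, factory()) for length, factory in programs]
        halted = [False] * len(gens)
        best = None
        steps = 0
        for m, n in diagonal_pairs():
            if steps >= budget or all(halted):
                break
            i = m - 1
            if i >= len(gens) or halted[i]:
                continue
            length, g = gens[i]
            steps += 1
            try:
                out = next(g)             # run program i for one more step
            except StopIteration:
                halted[i] = True
                continue
            if out is not None:           # program i halted with output `out`
                halted[i] = True
                if out == x and (best is None or length < best[0]):
                    best = (length, i)
        return best

    def make_prog(output, work):
        # A fake program: `work` idle steps, then halt printing `output`.
        def factory():
            def gen():
                for _ in range(work):
                    yield None
                yield output
            return gen()
        return factory

    programs = [(3, make_prog("ab", 5)),    # short, but prints the wrong string
                (5, make_prog("abc", 50)),  # prints x, slowly
                (7, make_prog("abc", 2))]   # prints x quickly, but is longer
    print(approximate_solomonoff("abc", programs))  # -> (5, 1)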


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 3:59:18 PM

 *Subject:* Re: [agi] Comments On My Skepticism of Solomonoff Induction

 Solomonoff Induction may require a trans-infinite level of complexity just
 to run each program.  Suppose each program is iterated through the
 enumeration of its instructions.  Then, not only do the infinity of
 possible programs need to be run, many combinations of the infinite programs
 from each simulated Turing Machine also have to be tried.  All the
 possible combinations of (accepted) programs, one from any two or more of
 the (accepted) programs produced by each simulated Turing Machine, have to
 be tried.  Although these combinations of programs from each of the
 simulated Turing Machine may not all be unique, they all have to be tried.
 Since each simulated Turing Machine would produce infinite programs, I am
 pretty sure that this means that Solomonoff Induction is, *by definition,*
 trans-infinite.
 Jim Bromer


 On Thu, Jul 22, 2010 at 2:06 PM, Jim Bromer jimbro...@gmail.com wrote:

 I have to retract my 

Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
No, I might have been wrong about the feasibility of writing an algorithm that
can produce all the possible combinations of items when I wrote my last
message.  That is because the word 'combination' is associated with more than
one mathematical method. I am skeptical of the possibility that there is a
recursively enumerable (r.e.) algorithm that can write out every possible
combination of items taken from more than one group of *types* when strings of
infinite length are possible.

So yes, I may have proved that there is no r.e. algorithm, even if given
infinite resources, that can order the computation of Solomonoff Induction in
such a way as to show that the probability (or probabilities) tends toward a
limit.  If my theory is correct, and right now I would say that there is a real
chance that it is, I have proved that Solomonoff Induction is theoretically
infeasible, illogical, and therefore refuted.

Jim Bromer



On Sun, Jul 25, 2010 at 9:02 AM, Jim Bromer jimbro...@gmail.com wrote:

 I believe that trans-infinite would mean that there is no recursively
 enumerable algorithm that could 'reach' every possible item in the
 trans-infinite group.



 Since each program in Solomonoff Induction, written for a Universal Turing
 Machine could be written on a single role of tape, that means that every
 possible combination of programs could also be written on the tape.  They
 would therefore be recursively enumerable just as they could be enumerated
 on a one to one basis with aleph null (counting numbers).



 But, unfortunately for my criticism, there are algorithms that could reach
 any finite combination of things which means that even though you could not
 determine the ordering of programs that would be necessary to show that the
 probabilities of each string approach a limit, it would be possible to write
 an algorithm that could show trends, given infinite resources.  This
 doesn't mean that the probabilities would approach the limit, it just means
 that if they did, there would be an infinite algorithm that could make the
 best determination given the information that the programs had already
 produced.  This would be a necessary step of a theoretical (but still
 non-constructivist) proof.



 So I can't prove that Solomonoff Induction is inherently trans-infinite.



 I am going to take a few weeks to see if I can determine if the idea of
 Solomonoff Induction makes hypothetical sense to me.
 Jim Bromer



 On Sat, Jul 24, 2010 at 6:04 PM, Matt Mahoney matmaho...@yahoo.comwrote:

   Jim Bromer wrote:
  Solomonoff Induction may require a trans-infinite level of complexity
 just to run each program.

 Trans-infinite is not a mathematically defined term as far as I can
 tell. Maybe you mean larger than infinity, as in the infinite set of real
 numbers is larger than the infinite set of natural numbers (which is true).

 But it is not true that Solomonoff induction requires more than aleph-null
 operations. (Aleph-null is the size of the set of natural numbers, the
 smallest infinity). An exact calculation requires that you test aleph-null
 programs for aleph-null time steps each. There are aleph-null programs
 because each program is a finite length string, and there is a 1 to 1
 correspondence between the set of finite strings and N, the set of natural
 numbers. Also, each program requires aleph-null computation in the case that
 it runs forever, because each step in the infinite computation can be
 numbered 1, 2, 3...

 However, the total amount of computation is still aleph-null because each
 step of each program can be described by an ordered pair (m,n) in N^2,
 meaning the n'th step of the m'th program, where m and n are natural
 numbers. The cardinality of N^2 is the same as the cardinality of N because
 there is a 1 to 1 correspondence between the sets. You can order the ordered
 pairs as (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), (1,4), (2,3), (3,2),
 (4,1), (1,5), etc. See
 http://en.wikipedia.org/wiki/Countable_set#More_formal_introduction

 Furthermore you may approximate Solomonoff induction to any desired
 precision with finite computation. Simply interleave the execution of all
 programs as indicated in the ordering of ordered pairs that I just gave,
 where the programs are ordered from shortest to longest. Take the shortest
 program found so far that outputs your string, x. It is guaranteed that this
 algorithm will approach and eventually find the shortest program that
 outputs x given sufficient time, because this program exists and it halts.

 In case you are wondering how Solomonoff induction is not computable, the
 problem is that after this algorithm finds the true shortest program that
 outputs x, it will keep running forever and you might still be wondering if
 a shorter program is forthcoming. In general you won't know.


 -- Matt Mahoney, matmaho...@yahoo.com


  --
 *From:* Jim Bromer jimbro...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, July 24, 2010 3:59:18 PM

 *Subject:* 

[agi] The Math Behind Creativity

2010-07-25 Thread Mike Tintner
I came across this, thinking it was going to be an example of maths fantasy, 
but actually it has a rather nice idea about the mathematics of creativity.


The Math Behind Creativity
By Chuck Scott on June 15, 2010

The Science of Creativity is based on the following mathematical formula for 
Creativity:

C = ∞ - πR^2

In other words, Creativity is equal to infinity minus the area of a defined 
circle of what's working. 

Note: πR^2 is the geometric formula for calculating the area of a circle, where
π is 3.142 rounded to the nearest thousandth, and R is a circle's radius (the
length from a circle's center to edge).



**

Simply, it's saying - that for every problem, and ultimately that's not just 
creative but rational problems, there's a definable space of options - the 
spaces you guys work with in your programs - wh. may work, if the problem is 
rational, but normally don't if it's creative. And beyond that space is the 
undefined space of creativity, wh. encompasses the entire world in an infinity 
of combinations. (Or all the fabulous multiverse[s] of Ben's mind).  Creative 
ideas - and that can be small everyday ideas as well as large cultural ones - 
can come from anywhere in, and any combinations of, the entire world (incl 
butterflies in Brazil and caterpillars in Katmandu -  QED I just drew that last 
phrase off the cuff from that vast world). Creative thinking - and that incl. 
the thinking of all humans from children on - is what in the world ? 
thinking - that can and does draw upon the infinite resources of the world. 
What in the world is he on about? Where in the world will I find s.o. 
who..? What in the world could be of help here?

And that is another way of highlighting the absurdity of current approaches to 
AGI - that would seek to encompass the entire world of creative ideas/options 
in the infinitesimal spaces/options of programs.








Re: [agi] Huge Progress on the Core of AGI

2010-07-25 Thread A. T. Murray
David Jones wrote:

Arthur,

Thanks. I appreciate that. I would be happy to aggregate some of those
things. I am sometimes not good at maintaining the website because I get
bored of maintaining or updating it very quickly :)

Dave

On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray menti...@scn.org wrote:

 The Web site of David Jones at

 http://practicalai.org

 is quite impressive to me
 as a kindred spirit building AGI.
 (Just today I have been coding MindForth AGI :-)

 For his Practical AI Challenge or similar
 ventures, I would hope that David Jones is
 open to the idea of aggregating or archiving
 representative AI samples from such sources as
 - TexAI;
 - OpenCog;
 - Mentifex AI;
 - etc.;
 so that visitors to PracticalAI may gain an
 overview of what is happening in our field.

 Arthur
 --
 http://www.scn.org/~mentifex/AiMind.html
 http://www.scn.org/~mentifex/mindforth.txt

Just today, a few minutes ago, I updated the
mindforth.txt AI source code listed above.

In the PracticalAi aggregates, you might consider
listing Mentifex AI with copies of the above two
AI source code pages, and with links to the
original scn.org URL's, where visitors to
PracticalAi could look for any more recent
updates that you had not gotten around to
transferring from scn.org to PracticalAi.
In that way, these releases of Mentifex 
free AI source code would have a more robust
Web presence (SCN often goes down) and I 
could link to PracticalAi for the aggregates
and other features of PracticalAI.

Thanks.

Arthur T. Murray





Re: [agi] Clues to the Mind: What do you think is the reason for selective attention

2010-07-25 Thread David Jones
I found support for my interpretation in the following paper as well. It
concludes that we can keep track of only 3 or 4 objects in detail at a time
(or something like that).

http://www.pni.princeton.edu/conte/pdfs/project2/Proj2Pub8anne.pdf

It says: "For explicit visual working memory, object tokens are stored in a
limited capacity, vulnerable store that maintains the bindings of features for
just 2 to 4 objects. Attention is required to sustain the memories."

Dave
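
For what it's worth, that limited-capacity, vulnerable store reads naturally as
a small fixed-size table of object files. Below is a toy Python sketch; the
capacity of 4, the feature dictionaries, and the refuse-new-objects-when-full
policy are illustrative assumptions only, not details taken from the paper.

    class ObjectFileStore:
        # Toy model of a capacity-limited visual working memory: feature
        # bindings are kept for at most `capacity` objects at a time, so
        # changes to anything else simply are not represented.
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.files = {}               # object_id -> dict of bound features

        def attend(self, object_id, features):
            # Update an existing object file, or open a new one if there is room.
            if object_id in self.files:
                self.files[object_id].update(features)
                return True
            if len(self.files) < self.capacity:
                self.files[object_id] = dict(features)
                return True
            return False                  # store is full: object goes untracked

        def notices_change(self, object_id, features):
            # A change is only detectable for an object with an open file
            # whose stored bindings differ from the newly observed features.
            old = self.files.get(object_id)
            if old is None:
                return False
            return any(old.get(k) != v for k, v in features.items())

    # Four attended players fill the store; the gorilla never gets a file.
    vwm = ObjectFileStore(capacity=4)
    for pid in ["p1", "p2", "p3", "p4"]:
        vwm.attend(pid, {"shirt": "white", "has_ball": False})
    vwm.attend("gorilla", {"color": "black"})                 # False: ignored
    print(vwm.notices_change("p1", {"has_ball": True}))       # True
    print(vwm.notices_change("gorilla", {"color": "black"}))  # False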


On Sun, Jul 25, 2010 at 1:00 AM, deepakjnath deepakjn...@gmail.com wrote:

 Thanks Dave, it's very interesting. This gives us more clues into how the
 brain compresses and uses the relevant information while neglecting the
 irrelevant information. But as Anast has demonstrated, the brain does need
 priming in order to decide what is relevant and irrelevant. :)

 Cheers,
 Deepak

 On Sun, Jul 25, 2010 at 5:34 AM, David Jones davidher...@gmail.comwrote:

 I also wanted to say that it is AGI-related because this may be the way
 that the brain deals with ambiguity in the real world. It ignores many
 things if it can use expectations to constrain possibilities. It is an
 important way in which the brain tracks objects and identifies them without
 analyzing all of an object's features before matching over the whole image.

 On Jul 24, 2010 7:53 PM, David Jones davidher...@gmail.com wrote:

 Actually Deepak, this is AGI related.

 This week I finally found a cool body of research that I previously had no
 knowledge of. This research area is in psychology, which is probably why I
 missed it the first time. It has to do with human perception, object files,
 how we keep track of object, individuate them, match them (the
 correspondence problem), etc.

 And I found the perfect article just now for you Deepak:
  http://www.duke.edu/~mitroff/papers/SimonsMitroff_01.pdf

 This article mentions why the brain does not notice things. And I just
 realized as I was reading it why we don't see the gorilla or other
 unexpected changes. The reason is this:
 We have a limited amount of processing power that we can apply to visual
 tracking and analysis. So, in attention demanding situations such as these,
 we assign our processing resources to only track the things we are
 interested in. In fact, we probably do this all the time, but it is only
 when we need a lot of attention to be applied to a few objects do we notice
 that we don't see some unexpected events.

 So, our brain knows where to expect the ball next and our visual
 processing is very busy tracking the ball and then seeing who is throwing
 it. As a result, it is unable to also process the movement of other objects.
 If the unexpected event is drastic enough, it will get our attention. But
 since some of the people are in black, our brain probably thinks it is just
 a person in black and doesn't consider it an event that is worthy of
 interrupting our intense tracking.

 Dave



 On Sat, Jul 24, 2010 at 4:58 PM, Anastasios Tsiolakidis sokratis.dk@
 gmail.com wrote:
 
  On Sat,...





 --
 cheers,
 Deepak






Re: [agi] Huge Progress on the Core of AGI

2010-07-25 Thread Chris Petersen
Don't fret; your main site's got good uptime.

http://www.nothingisreal.com/mentifex_faq.html

-Chris



On Sun, Jul 25, 2010 at 9:42 AM, A. T. Murray menti...@scn.org wrote:

 David Jones wrote:
 
 Arthur,
 
 Thanks. I appreciate that. I would be happy to aggregate some of those
 things. I am sometimes not good at maintaining the website because I get
 bored of maintaining or updating it very quickly :)
 
 Dave
 
 On Sat, Jul 24, 2010 at 10:02 AM, A. T. Murray menti...@scn.org wrote:
 
  The Web site of David Jones at
 
  http://practicalai.org
 
  is quite impressive to me
  as a kindred spirit building AGI.
  (Just today I have been coding MindForth AGI :-)
 
  For his Practical AI Challenge or similar
  ventures, I would hope that David Jones is
  open to the idea of aggregating or archiving
  representative AI samples from such sources as
  - TexAI;
  - OpenCog;
  - Mentifex AI;
  - etc.;
  so that visitors to PracticalAI may gain an
  overview of what is happening in our field.
 
  Arthur
  --
   http://www.scn.org/~mentifex/AiMind.html
   http://www.scn.org/~mentifex/mindforth.txt

 Just today, a few minutes ago, I updated the
 mindforth.txt AI souce code listed above.

 In the PracticalAi aggregates, you might consider
 listing Mentifex AI with copies of the above two
 AI source code pages, and with links to the
 original scn.org URL's, where visitors to
 PracticalAi could look for any more recent
 updates that you had not gotten around to
 transferring from scn.org to PracticalAi.
 In that way, theses releases of Mentifex
 free AI source code would have a more robust
 Web presence (SCN often goes down) and I
 could link to PracticalAi for the aggregates
 and other features of PracticalAI.

 Thanks.

 Arthur T. Murray









Re: [agi] Comments On My Skepticism of Solomonoff Induction

2010-07-25 Thread Jim Bromer
I got confused between the two kinds of combinations that I was thinking about.
Sorry.  However, while the reordering of the partial accumulation of a finite
number of probabilities, where each probability is taken just once, can be done
with an r.e. algorithm, there is no r.e. algorithm that can consider all
possible combinations of an infinite set of probabilities.  I believe that this
means that the probability of a particular string cannot be proven to attain a
stable value using general mathematical methods, but that the partial ordering
of probabilities after any finite number of programs had been run can be made,
both with actual computed values and through the use of a priori methods made
with general mathematical methods, if someone (like a twenty-second-century AI
program) were capable of dealing with the extraordinary complexity of the
problem.


So I haven't proven that there is a theoretical disconnect between the
desired function and the method.  Right now, no one has, as far as I can
tell, been able to prove that the method would actually produce the desired
function for all cases, but I haven't been able to sketch a proof that the
claimed relation between the method and the desired function is completely
unsound.

Jim Bromer


On Sun, Jul 25, 2010 at 9:36 AM, Jim Bromer jimbro...@gmail.com wrote:

 No, I might have been wrong about the feasibility of writing an algorithm
 that can produce all the possible combinations of items when I wrote my last
 message.  It is because the word combination is associated with more than
 one mathematical method. I am skeptical of the possibility that there is a
 re algorithm that can write out every possible combination of items taken
 from more than one group of *types* when strings of infinite length are
 possible.

 So yes, I may have proved that there is no re algorithm, even if given
 infinite resources, that can order the computation of Solomonoff Induction
 in such a way to show that the probability (or probabilities) tend toward a
 limit.  If my theory is correct, and right now I would say that there is a
 real chance that it is, I have proved that Solmonoff Induction is
 theoretically infeasible, illogical and therefore refuted.

 Jim Bromer






Re: [agi] The Math Behind Creativity

2010-07-25 Thread rob levy
Not sure how that is useful, or even how it relates to creativity if
considered as an informal description?

On Sun, Jul 25, 2010 at 10:15 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  I came across this, thinking it was going to be an example of maths
 fantasy, but actually it has a rather nice idea about the mathematics of
 creativity.

 
 The Math Behind Creativity http://www.alwayscreative.com/math/

 By Chuck Scott http://www.alwayscreative.com/author/admin/ on June 15,
 2010

 The Science of Creativity is based on the following mathematical formula
 for Creativity:

  C = ∞ - πR^2

 In other words, Creativity is equal to infinity minus the area of a defined
 circle of what’s working.

  Note: πR^2 is the geometric formula for calculating the area of a circle,
  where π is 3.142 rounded to the nearest thousandth, and R is a circle’s
  radius (the length from a circle’s center to edge).



 **

 Simply, it's saying - that for every problem, and ultimately that's not
 just creative but rational problems, there's a definable space of options -
 the spaces you guys work with in your programs - wh. may work, if the
 problem is rational, but normally don't if it's creative. And beyond that
 space is the undefined space of creativity, wh. encompasses the entire world
 in an infinity of combinations. (Or all the fabulous multiverse[s] of Ben's
 mind).  Creative ideas - and that can be small everyday ideas as well as
 large cultural ones - can come from anywhere in, and any combinations of,
 the entire world (incl butterflies in Brazil and caterpillars in Katmandu -
 QED I just drew that last phrase off the cuff from that vast world).
 Creative thinking - and that incl. the thinking of all humans from children
 on - is what in the world ? thinking - that can and does draw upon the
 infinite resources of the world. What in the world is he on about? Where
 in the world will I find s.o. who..? What in the world could be of help
 here?

 And that is another way of highlighting the absurdity of current approaches
 to AGI - that would seek to encompass the entire world of creative
 ideas/options in the infinitesimal spaces/options of programs.









Re: [agi] The Math Behind Creativity

2010-07-25 Thread Mike Tintner
I think it's v. useful - although I was really extending his idea.

Correct me - but almost no matter what you guys do (or anyone in AI does),
you think in terms of spaces, or frames. Spaces of options. Whether you're
doing logic, maths, or programs, spaces in one form or other are fundamental.

But you won't find anyone - or show me to the contrary - applying spaces to 
creative problems (or AGI problems).

And what's useful IMO is the idea of **trying** to encompass the space of 
creative options - the options for any creative problem [wh. can be as simple 
or complex as "what shall we have to eat tonight?" or "how do we reform the 
banks?" or "what do you think of the state of AGI?"]. 

It's only when you **try** to formalise creativity that you realise it can't 
be done in any practical, programmable way - or any formal way. You can only 
do it conceptually. Informally. 

The options are infinite, or, at any rate, practically endless - and infinite 
not just in number, but in *diversity*, in endlessly proliferating *domains* 
and categories extending right across the world.

**And this is the case for every creative problem - every AGI problem**   (one 
reason why you won't find anyone in the field of AGI, actually doing AGI, only 
narrow AI gestures at the goal).  

It's only when you attempt - and fail - to formalise the space of creativity, 
that the meaning of there are infinite creative options really comes home. 
And you should be able to start to see why narrow AI and AGI are fundamentally 
opposite affairs - thinking in closed spaces vs thinking in open worlds.

[It is fundamental BTW to the method of rationality - and rationalisation - 
epitomised in current programming - to create and think in a closed space of 
options, wh. is always artificial in nature.]




From: rob levy 
Sent: Sunday, July 25, 2010 9:16 PM
To: agi 
Subject: Re: [agi] The Math Behind Creativity


Not sure how that is useful, or even how it relates to creativity if considered 
as an informal description?


On Sun, Jul 25, 2010 at 10:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I came across this, thinking it was going to be an example of maths fantasy, 
but actually it has a rather nice idea about the mathematics of creativity.

  
  The Math Behind Creativity
  By Chuck Scott on June 15, 2010

  The Science of Creativity is based on the following mathematical formula for 
Creativity:

  C = ∞ - πR^2

  In other words, Creativity is equal to infinity minus the area of a defined 
circle of what’s working. 

  Note: πR^2 is the geometric formula for calculating the area of a circle, 
where π is 3.142 rounded to the nearest thousandth, and R is a circle’s radius 
(the length from a circle’s center to edge).



  **

  Simply, it's saying - that for every problem, and ultimately that's not just 
creative but rational problems, there's a definable space of options - the 
spaces you guys work with in your programs - wh. may work, if the problem is 
rational, but normally don't if it's creative. And beyond that space is the 
undefined space of creativity, wh. encompasses the entire world in an infinity 
of combinations. (Or all the fabulous multiverse[s] of Ben's mind).  Creative 
ideas - and that can be small everyday ideas as well as large cultural ones - 
can come from anywhere in, and any combinations of, the entire world (incl 
butterflies in Brazil and caterpillars in Katmandu -  QED I just drew that last 
phrase off the cuff from that vast world). Creative thinking - and that incl. 
the thinking of all humans from children on - is what in the world ? 
thinking - that can and does draw upon the infinite resources of the world. 
What in the world is he on about? Where in the world will I find s.o. 
who..? What in the world could be of help here?

  And that is another way of highlighting the absurdity of current approaches 
to AGI - that would seek to encompass the entire world of creative 
ideas/options in the infinitesimal spaces/options of programs.













Re: [agi] The Math Behind Creativity

2010-07-25 Thread rob levy
On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner tint...@blueyonder.co.ukwrote:

  I think it's v. useful - although I was really extending his idea.

 Correct me - but almost no matter what you guys do, (or anyone in AI does)
 , you think in terms of spaces, or frames. Spaces of options. Whether you're
 doing logic, maths, or programs, spaces in one form or other are
 fundamental.

 But you won't find anyone - or show me to the contrary - applying spaces to
 creative problems (or AGI problems). T



I guess we may somehow be familiar with different and non-overlapping
literature, but it seems to me that most or at least many approaches to
modeling creativity involve a notion of spaces of some kind.  I won't make a
case to back that up, but I will list a few examples: Koestler's bisociation
is spatial; D. T. Campbell, the Fogels, Finke et al., and William Calvin's
evolutionary notions of creativity involve a behavioral or conceptual fitness
landscape; Gilles Fauconnier & Mark Turner's theory of conceptual blending
operates on mental spaces; etc.

The idea on the website you posted is very lacking in any kind of explanatory
power, in my opinion.  To me, any theory of creativity should be able to show
how a system is able to generate novel and good results.  Creativity is more
than just "outside what is known, created, or working."  That is a description
of novelty, with no suggestion of the why or how of generating novelty.
Creativity also requires the semantic potential to reflect on and direct the
focusing of the stream of playful novelty toward that which is desired or
considered good.

I would disagree that creativity is outside the established/known.  A better
characterization would be that it resides on the complex boundary of the novel
and the established, which is what makes it interesting instead of just a copy,
or just total gobbledygook randomness.





Re: [agi] The Math Behind Creativity

2010-07-25 Thread Mike Tintner
I wasn't trying for a detailed model of creative thinking with explanatory 
power -  merely one dimension (and indeed a foundation) of it.

In contrast to rational, deterministically programmed computers and robots, wh. 
can only operate in closed spaces externally (artificial environments) and only 
think in closed spaces internally, human (real AGI) agents are designed to 
operate in the open world externally (real-world environments) and to think in 
open worlds internally.

IOW when you think about any creative problem, like "what am I going to do 
tonight?" or "let me write a post in reply to MT" - you *don't* have a nice 
neat space/frame of options lined up as per a computer program, which your 
brain systematically checks through. You have an open world of associations - 
associated with varying degrees of power - wh. you have to search, or since AI 
has corrupted that word, perhaps we should say "quest" through in haphazard, 
nonsystematic fashion. You have to *explore* your brain for ideas - and it is a 
risky business, wh. (with more difficult problems) may draw a blank.

(Nor BTW does your brain set up a space for solving creative problems - as 
was vaguely mooted in a recent discussion with Ben. Closed spaces are strictly 
for rational problems).

IMO though this contrast of narrow AI/rationality as thinking in closed 
spaces vs AGI/creativity as thinking in open worlds is a very powerful one.

Re your examples, I don't think Koestler or Fauconnier are talking of defined 
or closed spaces.  The latter is v. vague about the nature of his spaces. I 
think they're rather like the formulae for creativity that our folk culture 
often talks about. V. loosely. They aren't used in the strict senses the terms 
have in rationality - logic/maths/programming.

Note that Calvin's/Piaget's idea of consciousness as designed for when you 
don't know what to do accords with my idea of creative thinking as effectively 
starting from a blank page rather than a ready space of options, and going on 
to explore a world of associations for ideas.

P.S. I should have stressed that the open world of the brain is 
**multidomain**, indeed **open-domain**, by contrast with the spaces of 
programs, wh. are closed, uni-domain. When you search for "what am I going to 
do..?" your brain can go through an endless world of domains - movies, call a 
friend, watch TV, browse the net, a meal, go for a walk, play a sport, ask s.o. 
for novel ideas, spend time with my kid ... and on and on.

The space thinking of rationality is superefficient but rigid and useless for 
AGI. The open world of the human, creative mind is highly inefficient by 
comparison but superflexible and the only way to do AGI.





From: rob levy 
Sent: Monday, July 26, 2010 1:06 AM
To: agi 
Subject: Re: [agi] The Math Behind Creativity


On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I think it's v. useful - although I was really extending his idea.

  Correct me - but almost no matter what you guys do, (or anyone in AI does) , 
you think in terms of spaces, or frames. Spaces of options. Whether you're 
doing logic, maths, or programs, spaces in one form or other are fundamental.

  But you won't find anyone - or show me to the contrary - applying spaces to 
creative problems (or AGI problems). T




I guess we may somehow be familiar with different and non-overlapping 
literature, but it seems to me that most or at least many approaches to 
modeling creativity involve a notion of spaces of some kind.  I won't make a 
case to back that up but I will list a few examples: Koestler's bisociation is 
spacial, D. T. Campbell, the Fogels, Finke et al, and William Calvin's 
evolutionary notion of creativity involve a behavioral or conceptual fitness 
landscape, Gilles Fauconnier  Mark Turner's theory of conceptual blending on 
mental space, etc. etc.


The idea of the website you posted is very lacking in any kind of explanatory 
power in my opinion.  To me any theory of creativity should be able to show how 
a system is able to generate novel and good results.  Creativity is more than 
just outside what is known, created, or working.  That is a description of 
novelty, and with no suggestions for the why or how of generating novelty.  
Creativity also requires the semantic potential to reflect on and direct the 
focusing in on the stream of playful novelty to that which is desired or 
considered good.  


I would disagree that creativity is outside the established/known.  A better 
characterization would be that it resides on the complex boundary of the novel 
and the established, which is what make it interesting instead just a copy, or 
just total gobbledygook randomness.




Re: [agi] How do we hear music

2010-07-25 Thread Michael Swan

On Fri, 2010-07-23 at 23:38 +0100, Mike Tintner wrote:
 Michael:but those things do have patterns.. A mushroom (A) is like a cloud
  mushroom (B).
 
  if ( (input_source_A == An_image) AND ( input_source_B == An_image ))
 
  One pattern is that they both came from an image source, and I just used
  maths + logic to prove it
 
 Michael,
 
 This is a bit desperate isn't it?
It's a common misconception that high-level queries aren't very good.
Imagine 5 senses: sight, touch, taste, etc.

We confirm the input is from sight. By doing this we potentially reduce the
combinations of what it could be by 4/5, roughly 80%, which is pretty
awesome.

Computer programs know nothing. You have to tell them everything (narrow
AI) or allow the mechanics to find out things for themselves.
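
A trivial Python sketch of that kind of pruning; the candidate pool and its
modality tags are made up purely for illustration.

    # Hypothetical pool of candidate interpretations, each tagged with the
    # sense modality it could have come from.
    candidates = [
        {"label": "mushroom",      "modality": "sight"},
        {"label": "cloud",         "modality": "sight"},
        {"label": "thunder",       "modality": "hearing"},
        {"label": "salty",         "modality": "taste"},
        {"label": "rough surface", "modality": "touch"},
    ]

    def prune_by_modality(pool, modality):
        # Confirming the input came from one sense rules out, on average,
        # the candidates tied to the other four senses (~4/5, i.e. ~80%).
        return [c for c in pool if c["modality"] == modality]

    print(prune_by_modality(candidates, "sight"))
    # Only the two image-based candidates survive; the rest are ruled out.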

 
 They both come from image sources. So do a zillion other images, from 
 Obama to dung - so they're all alike? Everything in the world is alike and 
 metaphorical for everything else?
 
 And their images must be alike because they both have an 'o' and a 'u' in 
 their words, (not their images)-  unless you're a Chinese speaker.
 
 Pace Lear, that way madness lies.
 
 Why don't you apply your animation side to the problem - and analyse the 
 images per images, and how to compare them as images? Some people in AGI 
 although not AFAIK on this forum are actually addressing the problem. I'm 
 sure *you* can too.
 
 
 
 --
 From: Michael Swan ms...@voyagergaming.com
 Sent: Friday, July 23, 2010 8:28 AM
 To: agi agi@v2.listbox.com
 Subject: Re: [agi] How do we hear music
 
 
 
 
 
 
  On Fri, 2010-07-23 at 03:45 +0100, Mike Tintner wrote:
  Let's crystallise the problem   - all the unsolved problems of AGI - 
  visual
  object recognition, conceptualisation, analogy, metaphor, creativity,
  language understanding and generation -  are problems where you're 
  dealing
  with freeform, irregular patchwork objects - objects which clearly do not
  fit any *patterns* -   the raison d'etre of maths .
 
  To focus that , these objects do not have common parts in more or less
  precisely repeating structures - i.e. fit patterns.
 
  A cartoon and a photo of the same face may have no parts or structure in
  common.
  Ditto different versions of the Google logo. Zero common parts or 
  structure
 
  Ditto cloud and mushroom - no common parts, or common structure.
 
  Yet the mind amazingly can see likenesses between all these things.
 
  Just about all the natural objects in the world , with some obvious
  exceptions, do not fit common patterns - they do not have the same parts 
  in
  precisely the same places/structures.  They may  have common loose
  organizations of parts - e.g. mouths, eyes, noses, lips  - but they are
  not precisely patterned.
 
  So you must explain how a mathematical approach, wh. is all about
  recognizing patterns, can apply to objects wh. do not fit patterns.
 
  You won't be able to - because if you could bring yourselves to look at 
  the
  real world or any depictions of it other than geometric, (metacognitively
  speaking),you would see for yourself that these objects don't have 
  precise
  patterns.
 
  It's obvious also that when the mind likens a cloud to a mushroom, it 
  cannot
  be using any math. techniques.
 
  .. but those things do have patterns.. A mushroom (A) is like a cloud
  mushroom (B).
 
  if ( (input_source_A == An_image) AND ( input_source_B == An_image ))
 
  One pattern is that they both came from an image source, and I just used
  maths + logic to prove it.
 
  But we have to understand how the mind does do that - because it's fairly
  clearly  the same technique the mind also uses to conceptualise even more
  vastly different forms such as those of  chair, tree,  dog, cat.
 
  And that technique - like concepts themselves -  is at the heart of AGI.
 
  And you can sit down and analyse the problem visually, physically and see
  also pretty obviously that if the mind can liken such physically 
  different
  objects as cloud and mushroom, then it HAS to do that with something like 
  a
  fluid schema. There's broadly no other way but to fluidly squash the 
  objects
  to match each other (there could certainly be different techniques of
  achieving that  - but the broad principles are fairly self evident). 
  Cloud
  and mushroom certainly don't match formulaically, mathematically. Neither 
  do
  those different versions of a tune. Or the different faces of Madonna.
 
  But what we've got here is people who don't in the final analysis give a
  damn about how to solve AGI - if it's a choice between doing maths and
  failing, and having some kind of artistic solution to AGI that actually
  works, most people here will happily fail forever. Mathematical AI has
  indeed consistently failed at AGI. You have to realise, mathematicians 
  have
  a certain kind of madness. Artists don't go around saying God is an 
  artist,
  or everything is art. Only