Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Sun, 12/28/08, Philip Hunt cabala...@googlemail.com wrote:

  Please remember that I am not proposing compression as
  a solution to the AGI problem. I am proposing it as a
  measure of progress in an important component (prediction).
 
 Then why not cut out the middleman and measure prediction
 directly?

Because a compressor proves the correctness of the measurement software at no 
additional cost in space, time, or software complexity. The 
hard part of compression is modeling. Arithmetic coding is essentially a solved 
problem. A decompressor uses exactly the same model as a compressor. In high 
end compressors like PAQ, the arithmetic coder takes up about 1% of the 
software, 1% of the CPU time, and less than 1% of memory.
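
To make this concrete, here is a rough illustrative sketch (my own, simplified; 
not fpaq0 or PAQ source) of a complete bit-level compressor and decompressor 
sharing one adaptive model. The Predictor, Encoder, and Decoder names and 
details are hypothetical; the point is structural: the coder is the small, 
fixed part, and the same Predictor drives both directions.

#include <cstdint>
#include <vector>

// Illustrative sketch only (simplified; not fpaq0 or PAQ source).
// The Predictor is the model; Encoder and Decoder are the arithmetic coder.
struct Predictor {                        // all the hard work lives here
    uint64_t n0 = 1, n1 = 1;              // counts of 0 and 1 bits seen so far
    int p() const { return int((n1 << 12) / (n0 + n1)); }  // P(bit=1), 0..4095
    void update(int y) { y ? ++n1 : ++n0; }
};

struct Encoder {
    Predictor pr;
    std::vector<uint8_t> out;
    uint32_t x1 = 0, x2 = 0xffffffffu;    // current code range [x1, x2]
    void encode(int y) {
        uint32_t xmid = x1 + uint32_t(((uint64_t)(x2 - x1) * pr.p()) >> 12);
        if (y) x2 = xmid; else x1 = xmid + 1;   // narrow the range
        pr.update(y);
        while ((x1 ^ x2) < (1u << 24)) {  // leading byte settled: emit it
            out.push_back(x2 >> 24);
            x1 <<= 8;
            x2 = (x2 << 8) | 255;
        }
    }
    void flush() {                        // any value in [x1, x2] will do
        for (int i = 24; i >= 0; i -= 8) out.push_back(x1 >> i);
    }
};

struct Decoder {
    Predictor pr;                         // the identical model, updated identically
    const std::vector<uint8_t>& in;
    size_t pos = 0;
    uint32_t x1 = 0, x2 = 0xffffffffu, x = 0;
    explicit Decoder(const std::vector<uint8_t>& v) : in(v) {
        for (int i = 0; i < 4; ++i) x = (x << 8) | next();
    }
    uint8_t next() { return pos < in.size() ? in[pos++] : 0; }
    int decode() {
        uint32_t xmid = x1 + uint32_t(((uint64_t)(x2 - x1) * pr.p()) >> 12);
        int y = (x <= xmid);              // recover the bit the encoder coded
        if (y) x2 = xmid; else x1 = xmid + 1;
        pr.update(y);
        while ((x1 ^ x2) < (1u << 24)) {
            x1 <<= 8;
            x2 = (x2 << 8) | 255;
            x = (x << 8) | next();
        }
        return y;
    }
};

Encoding a bit stream and then decoding the same number of bits with a fresh 
Decoder reproduces it exactly, because both sides see the same bit sequence 
and therefore hold identical model state; swapping in a smarter Predictor 
changes the compression ratio but not the coder.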

In speech recognition research it is common to use word perplexity as a measure 
of the quality of a language model. Experimentally, it correlates well with 
word error rate. Perplexity is defined as 2^H where H is the average number of 
bits needed to encode a word. Unfortunately this is sometimes done in 
nonstandard ways, such as with restricted vocabularies and different methods of 
handling words outside the vocabulary, parsing, stemming, capitalization, 
punctuation, spacing, and numbers. Because this additional data is not 
accounted for, published results are difficult to compare. Compression removes 
the possibility of such ambiguities.
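
(A worked example with made-up numbers: if a model codes a 10,000-word test set 
in 80,000 bits, then H = 8 bits per word and the perplexity is 2^8 = 256. A 
compressed file size gives H directly as total bits divided by word count, with 
the handling of vocabulary, punctuation, and numbers pinned down by the 
requirement that the file decompress exactly.)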

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:

 Incidently, reading Matt's posts got me interested in writing a
 compression program using Markov-chain prediction. The prediction bit
 was a piece of piss to write; the compression code is proving
 considerably more difficult.

Well, there is plenty of open source software.
http://cs.fit.edu/~mmahoney/compression/

If you want to write your own model and just need a simple arithmetic coder, 
you probably want fpaq0. Most of the other programs on this page use the same 
coder or some minor variation of it.

-- Matt Mahoney, matmaho...@yahoo.com





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Sun, 12/28/08, John G. Rose johnr...@polyplexic.com wrote:
 
  So maybe for improved genetic
  algorithms used for obtaining max compression there needs to be a
  consciousness component in the agents? Just an idea I think there
 is
  potential for distributed consciousness inside of command line
 compressors
  :)
 
 No, consciousness (as the term is commonly used) is the large set of
 properties of human mental processes that distinguish life from death,
 such as ability to think, learn, experience, make decisions, take
 actions, communicate, etc. It is only relevant as an independent
 concept to agents that have a concept of death and the goal of avoiding
 it. The only goal of a compressor is to predict the next input symbol.
 

Well that's a question. Does death somehow enhance a lifeform's collective
intelligence? Agents competing over finite resources... I'm wondering, if
there were multi-agent evolutionary genetics going on, would there be a
finite resource that relates to the collective goal of predicting the next
symbol? Agent knowledge is not only passed on in their genes, it is also
passed around to other agents. Does agent death hinder advances in
intelligence or enhance it? And then, would the intelligence collected thus
be applicable to the goal? And if so, consciousness may be valuable.

John 





Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-29 Thread Robert Swaine
Mike, 
 

Six  2003
Seven  1996
Eight 2001
Eight and a half 
 
Good point with the movies; only a hardcore movie fan would make that 
association early in his trials to figure out the pattern as movie dates. In 
this case you gave a hint, and such a hint would tell the system to widen its 
attention spotlight to include movies, so entertainment, events, celebrations, 
etc. would come under attention, based on what structure the movie concept's 
parent has in its domain content.
 
Thinking imaginatively to find hard solutions, as you say, is possible with this 
system by telling it to think outside the box to other domains, and it can 
learn this pattern of domain-hopping based on the reward of a success or being 
authorized to value cross-domain attention search. Thinking, for the system, is: 
shifting its attention to different regions (within the 4 domains), sizing and 
orienting the attention scale, and setting the focus depth (of details); it 
can then read the contents of what comes up from that region and Compare, 
Contrast, Combine it to analyze or synthesize it. Thinking bigger or narrower 
is almost literal.
 
Like humans, this system stops a behavior (e.g., stops searching) because it 
runs out of motivation value, not ideas to search. Many known or described 
systems can lend themselves to brute-force thinking when unsure of a solution; 
this structure allows it to do so elegantly, using human-centric concept domains 
first (it is easier for us to communicate with it this way, by saying "build a 
damn good engine" as humans do, vs. 0010101101 or any other non-human language).
 
It can and does re-write the concepts and content in its domains as it learns, 
but it started with the domains humans gave it. E.g., I knew what movies were 
by having lived in a number of situations where this concept was built up, so 
that later I can learn about independent films and live performances or new 
types of entertainment that give similar or unfamiliar emotions.
 
Further rationale:
1) What humans do: have a bias (a value system) that makes sense relative to 
our biological architecture; generate all human knowledge in this 
representation structure (natural language: an ambiguous, low-logic language).
 
2) What an early AGI can do: learn the human bias by having a similar 
architecture which includes the value bias for the patterns humans seek. Obtain 
as much of the world's recorded knowledge from humans. Generate more, faster, 
new and better knowledge. "Better" is because it knows our value system, and it 
also knows humans well enough to convince them in a discussion, unlike most of 
us, that "better" is what it wants us to do (very bad!).
 
For natural language processing, humans readily communicate in songs and poems, 
and understand them. Many songs and poems do not make any logical sense, and 
few songs have word order and story elements that are reasonable. The model 
makes sense of them by looking for patterns where humans do: in the beats (the 
situational borders that structure all input) and in the value (emotional 
meaning) of the song's or poem's content.
 
Hope some of this helps
Robert
 


--- On Sun, 12/28/08, Mike Tintner tint...@blueyonder.co.uk wrote:

From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 11:38 PM



Robert,
 
Thanks for your detailed, helpful replies. I like your approach of operating in 
multiple domains for problem-solving. But if the domains are known beforehand, 
then it's not truly creative problem-solving - where you do have to be prepared 
to go in search of the appropriate domains - and thus truly cross domains 
rather than simply combining preselected ones. I gave you a perhaps 
exaggerated example just to make the point. You had to realise that the correct 
domain to solve my problem was that of movies - the numbers were the titles of 
movies and the dates they came out. If you're dealing with real world rather 
than just artificial creative problems like our two, you may definitely have to 
make that kind of domain switch - solving any scientific detective problem, 
say, like that of binding in the brain, may require you to think in a 
surprising, new domain, for which you will have to search long and hard (and 
possibly without end).






Mike,
Very good choice.
 
 But the system always *knows* these domains beforehand  - and that it must 
 consider them in any problem?
 
 
YES - the domains' content structure, which is what you mean, are the 
human-centric ones provided by living a child's life, loading the value system 
with biases such as "humans are warm" and "candy is really sweet". By further 
being pushed through a western-culture grade-level curriculum, we value the 
visual feature symbols 2003 and 1996 as numbers, then as dates. The content 
models (concept patterns) are built up from 

Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-29 Thread Robert Swaine
 
The paper as a link instead of attachment:
http://mindsoftbioware.com/yahoo_site_admin/assets/docs/Swaine_R_Story_Understander_Model.36375123.pdf
 
The paper gives a quick view of the Human-centric representation and behavioral 
systems approach for problem-solving, with reasoning as giving meaning (human 
values) to stories and games... Indexing relations via spatially related 
registers is its simulated substrate.
 
cheers,
Robert




RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

 Well that's a question. Does death somehow enhance a
 lifeforms' collective intelligence?

Yes, by weeding out the weak and stupid.

 Agents competing over finite resources.. I'm wondering if
 there were multi-agent evolutionary genetics going on would there be a
 finite resource of which there would be a relation to the collective goal of
 predicting the next symbol.

No, prediction is a secondary goal. The primary goal is to have a lot of 
descendants.

 Agent knowledge is not only passed on in their
 genes, it is also passed around to other agents Does agent death hinder
 advances in intelligence or enhance it? And then would the intelligence
 collected thus be applicable to the goal. And if so, consciousness may be
 valuable.

What does consciousness have to do with the rest of your argument?

-- Matt Mahoney, matmaho...@yahoo.com





[agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
Hi,

I expanded a previous blog entry of mine on hypercomputation and AGI into a
conference paper on the topic ... here is a rough draft, on which I'd
appreciate commentary from anyone who's knowledgeable on the subject:

http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

This is a theoretical rather than practical paper, although it does attempt
to explore some of the practical implications as well -- e.g., in the
hypothesis that intelligence does require hypercomputation, how might one go
about creating AGI?   I come to a somewhat surprising conclusion, which is
that -- even if intelligence fundamentally requires hypercomputation -- it
could still be possible to create an AI via making Turing computer programs
... it just wouldn't be possible to do this in a manner guided entirely by
science; one would need to use some other sort of guidance too, such as
chance, imitation or intuition...

-- Ben G


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
 From: Matt Mahoney [mailto:matmaho...@yahoo.com]
 
 --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:
 
  Agent knowledge is not only passed on in their
  genes, it is also passed around to other agents Does agent death
 hinder
  advances in intelligence or enhance it? And then would the
 intelligence
  collected thus be applicable to the goal. And if so, consciousness
 may be
  valuable.
 
 What does consciousness have to do with the rest of your argument?
 

Multi-agent systems would need individual consciousness to achieve advanced
levels of collective intelligence. So if you are programming a multi-agent
system, potentially a compressor, having consciousness in the agents could
have an intelligence-amplifying effect compared to having non-conscious
agents. Or some sort of primitive consciousness component, since higher-level
consciousness has not really been programmed yet.

Agree?

John





[agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Lukasz Stafiniak
http://www.sciencedaily.com/releases/2008/12/081224215542.htm

Nothing surprising ;-)




Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Richard Loosemore

Lukasz Stafiniak wrote:

http://www.sciencedaily.com/releases/2008/12/081224215542.htm

Nothing surprising ;-)


Nothing surprising?!!

8-) Don't say that too loudly, Yudkowsky might hear you. :-)

The article is a bit naughty when it says, of Tversky and Kahneman, 
that "...this has become conventional wisdom among cognition 
researchers."  Actually, the original facts were interpreted in a 
variety of ways, some of which strongly disagreed with T & K's original 
interpretation, just like this one you reference above.  The only thing 
that is conventional wisdom is that the topic exists, and is the subject 
of dispute.


And, as many people know, I made the mistake of challenging Yudkowsky on 
precisely this subject back in 2006, when he wrote an essay strongly 
advocating T & K's original interpretation.  Yudkowsky went completely 
berserk, accused me of being an idiot, having no brain, not reading any 
of the literature, never answering questions, and generally being 
something unspeakably worse than a slime-oozing crank.  He literally 
wrote an essay denouncing me as equivalent to a flat-earth-believing 
crackpot.


When I suggested that someone go check some of his ravings with an 
outside authority, he banned me from his discussion list.


Ah, such are the joys of speaking truth to power(ful idiots).

;-)

As far as this research goes, it sits somewhere down at the lower end of 
the available theories.  My friend Mike Oaksford in the UK has written 
several papers giving a higher-level cognitive theory that says that 
people are, in fact, doing something like Bayesian estimation when they 
make judgments.  In fact, people are very good at being Bayesians, 
contra the loud protests of the "I Am A Bayesian Rationalist" crowd, who 
think they were the first to do it.






Richard Loosemore





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread J. Andrew Rogers


On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote:
I expanded a previous blog entry of mine on hypercomputation and AGI  
into a conference paper on the topic ... here is a rough draft, on  
which I'd appreciate commentary from anyone who's knowledgeable on  
the subject:


http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

This is a theoretical rather than practical paper, although it does  
attempt to explore some of the practical implications as well --  
e.g., in the hypothesis that intelligence does require  
hypercomputation, how might one go about creating AGI?   I come to a  
somewhat surprising conclusion, which is that -- even if  
intelligence fundamentally requires hypercomputation -- it could  
still be possible to create an AI via making Turing computer  
programs ... it just wouldn't be possible to do this in a manner  
guided entirely by science; one would need to use some other sort of  
guidance too, such as chance, imitation or intuition...



As more of a meta-comment, the whole notion of hypercomputation  
seems to be muddled, insofar as super-recursive algorithms may be a  
limited example of it.


I was doing a lot of work with inductive Turing machines several years  
ago, and most of the differences seemed to be definitional, e.g. what  
constitutes an algorithm or an answer.  For most practical purposes, the  
price of implementing them in conventional discrete space is the  
introduction of some (usually acceptable) error.  But if they  
approximate to the point of functional convergence on a normal Turing  
machine...  As best I have been able to tell (and I have not really  
been paying attention, because the arguments seem to mostly be people  
talking past each other), ITMs raise some interesting philosophical  
questions regarding hypercomputation.



We cannot implement a *strict* hypercomputer, but to what extent does  
it count if we can asymptotically converge on the functional  
consequences of a hypercomputer using a normal computer?  I suspect  
it will be hard to evict the belief in Penrosian magic from the error  
bars in any case.


Cheers,

J. Andrew Rogers





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
Well, some of the papers in the references of my paper give formal
mathematical definitions of hypercomputation, though my paper is brief and
conceptual and not of that nature.  So although the generic concept may be
muddled, there are certainly some fully precise variants of it.

This paper surveys various formally defined varieties of hypercomputing,
though I haven't read it closely..

http://www.amirrorclear.net/academic/papers/many-forms.pdf

Anyway the argument in my paper is pretty strong and applies to any variant
with power beyond that of ordinary Turing machines, it would seem...

-- ben g


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

  What does consciousness have to do with the rest of your argument?
  
 
 Multi-agent systems should need individual consciousness to
 achieve advanced
 levels of collective intelligence. So if you are
 programming a multi-agent
 system, potentially a compressor, having consciousness in
 the agents could
 have an intelligence amplifying effect instead of having
 non-conscious
 agents. Or some sort of primitive consciousness component
 since higher level
 consciousness has not really been programmed yet. 
 
 Agree?

No. What do you mean by "consciousness"?

Some people use "consciousness" and "intelligence" interchangeably. If that is 
the case, then you are just using a circular argument. If not, then what is the 
difference?

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Ben Goertzel
Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

;-)

ben g


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Kaj Sotala
On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak lukst...@gmail.com wrote:
 http://www.sciencedaily.com/releases/2008/12/081224215542.htm

 Nothing surprising ;-)

So they have a result saying that we're good at subconsciously
estimating the direction in which dots on a screen are moving.
Apparently this can be safely generalized into Our Unconscious Brain
Makes The Best Decisions Possible (implied: always).

You're right, nothing surprising. Just the kind of unfounded,
simplistic hyperbole I'd expect from your average science reporter.
;-)




[agi] news bit: Evolution of Intelligence More Complex Than Once Thought

2008-12-29 Thread David Hart
Via Slashdot:

According to a new article published in Scientific American, the nature of
and evolutionary development of animal intelligence is significantly more
complicated than many have assumed
(http://www.sciam.com/article.cfm?id=one-world-many-minds).
In opposition to the widely held view that intelligence is largely linear in
nature, in many cases intelligent traits have developed along independent
paths. From the article: 'Over the past 30 years, however, research in
comparative neuroanatomy clearly has shown that complex brains — and
sophisticated cognition — have evolved from simpler brains multiple times
independently in separate lineages ...'





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread J. Andrew Rogers


On Dec 29, 2008, at 1:22 PM, Ben Goertzel wrote:


Well, some of the papers in the references of my paper give formal  
mathematical definitions of hypercomputation, though my paper is  
brief and conceptual and not of that nature.  So although the  
generic concept may be muddled, there are certainly some fully  
precise variants of it.



My comment was not really against the argument you make in the paper,  
nor do I disagree with your definition of hypercomputation. (BTW,  
run spellcheck.)  I was referring to the somewhat anomalous difficulty  
of deciding whether or not some computational models truly meet that  
definition as a practical matter.



Anyway the argument in my paper is pretty strong and applies to any  
variant with power beyond that of ordinary Turing machines, it would  
seem...



No disagreement with that, which is why I called it a  
meta-comment. :-)


Super-recursive algorithms, inductive Turing machines, and related  
computational models can be made to sit in a somewhat fuzzy place with  
respect to whether or not they are hypercomputers or normal Turing  
machines.  A Turing machine that asymptotically converges on producing  
the same result as a hypercomputer is an interesting case insofar as  
the results they produce may be close enough that you can consider the  
difference to be below the noise floor, and if they are functionally  
equivalent using that somewhat unusual definition then you effectively  
have equivalence to a hypercomputer without the hypercomputer.  Not  
strictly by definition, but within some strictly implied error bound  
for the purposes of comparing output (which is all we usually care  
about).


The concept of non-isotropic distributions of random numbers has  
always interested me for much the same reason, since there seems to be  
a similar concept at work there.


Cheers,

J. Andrew Rogers






Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, Richard Loosemore r...@lightlink.com wrote:

 8-) Don't say that too loudly, Yudkowsky might hear
 you. :-)
...
 When I suggested that someone go check some of his ravings
 with an outside authority, he banned me from his discussion
 list.

Yudkowsky's side of the story might be of interest...

http://www.sl4.org/archive/0608/15895.html
http://www.sl4.org/archive/0608/15928.html

-- Matt Mahoney, matmaho...@yahoo.com






Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Philip Hunt
2008/12/29 Matt Mahoney matmaho...@yahoo.com:
 --- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:

 Incidently, reading Matt's posts got me interested in writing a
 compression program using Markov-chain prediction. The prediction bit
 was a piece of piss to write; the compression code is proving
 considerably more difficult.

 Well, there is plenty of open source software.
 http://cs.fit.edu/~mmahoney/compression/

 If you want to write your own model and just need a simple arithmetic coder, 
 you probably want fpaq0. Most of the other programs on this page use the same 
 coder or some minor variation of it.

I've just had a look at it, thanks.

Am I right in understanding that the coder from fpaq0 could be used
with any other predictor?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Matt Mahoney
--- On Mon, 12/29/08, Philip Hunt cabala...@googlemail.com wrote:
 Am I right in understanding that the coder from fpaq0 could
 be used with any other predictor?

Yes. It has a simple interface. You have a class called Predictor which is your 
bit sequence predictor. It has 2 member functions that you have to write. p() 
should return your estimated probability that the next bit will be a 1, as a 12 
bit number (0 to 4095). update(y) then tells you what that bit actually was, a 
0 or 1. The encoder will alternately call these 2 functions for each bit of the 
sequence. The predictor doesn't know whether it is compressing or decompressing 
because it sees exactly the same sequence either way.

So the easy part is done :)
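
For the Markov-chain experiment, a minimal Predictor in that style might look 
something like the sketch below. This is my own hypothetical illustration, not 
fpaq0's actual model: it conditions each bit only on the earlier bits of the 
current byte, and widening the context index to include one or more previous 
bytes would turn it into a genuine order-N Markov predictor.

#include <cstdint>

// Hypothetical sketch of a model behind the interface described above:
// p() returns P(next bit = 1) scaled to 0..4095, update(y) receives the
// actual bit. The context is the partial byte seen so far.
class Predictor {
    uint32_t n[256][2] {};   // bit counts per context
    int cx = 1;              // context: 1 at a byte boundary, then grows bitwise
public:
    int p() const {          // estimate with simple +1 smoothing
        return 4096 * (n[cx][1] + 1) / (n[cx][0] + n[cx][1] + 2);
    }
    void update(int y) {     // learn the bit, then extend the context
        if (++n[cx][y] > 65000) {      // halve counts so the model stays adaptive
            n[cx][0] >>= 1;
            n[cx][1] >>= 1;
        }
        cx = cx * 2 + y;
        if (cx >= 256) cx = 1;         // byte complete: start over
    }
};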

-- Matt Mahoney, matmaho...@yahoo.com


