RE: [agi] Re: Compressed Cross-Indexed Concepts

2010-08-29 Thread John G. Rose
Mike, To put it into your own words here, mathematics is a delineation out of the infinitely diversifiable, the same zone where design comes from. And design needs a medium; the medium can be the symbolic expressions and language of mathematics. And so conveniently here the mathematics is

[agi] Natural Hijacked Behavioral Control

2010-08-19 Thread John G. Rose
I thought this was interesting when looked at in relation to evolution and a parasitic intelligence - http://www.guardian.co.uk/science/2010/aug/18/zombie-carpenter-ant-fungus

RE: [agi] Compressed Cross-Indexed Concepts

2010-08-19 Thread John G. Rose
integration of concepts and better interpretation of concepts? On Fri, Aug 13, 2010 at 4:25 PM, John G. Rose johnr...@polyplexic.com wrote: -Original Message- From: Jim Bromer [mailto:jimbro...@gmail.com] On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com wrote

RE: [agi] Nao Nao

2010-08-13 Thread John G. Rose
think the idea of performing useful work should be a goal). The protocol is obviously a good idea, but you're not suggesting it per se will lead to AGI? From: John G. Rose mailto:johnr...@polyplexic.com Sent: Thursday, August 12, 2010 3:17 PM To: agi mailto:agi@v2.listbox.com Subject: RE

RE: [agi] Compressed Cross-Indexed Concepts

2010-08-13 Thread John G. Rose
-Original Message- From: Jim Bromer [mailto:jimbro...@gmail.com] On Thu, Aug 12, 2010 at 12:40 AM, John G. Rose johnr...@polyplexic.com wrote: The ideological would still need be expressed mathematically. I don't understand this.  Computers can represent related data objects

RE: [agi] Nao Nao

2010-08-12 Thread John G. Rose
: John G. Rose mailto:johnr...@polyplexic.com Sent: Thursday, August 12, 2010 5:46 AM To: agi mailto:agi@v2.listbox.com Subject: RE: [agi] Nao Nao I wasn't meaning to portray pessimism. And that little sucker probably couldn't pick up a knife yet. But this is a paradigm change

RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
into the PC. This is one topic for which I have not been able to have a satisfactory discussion or answer. People who build robots tend to think in terms of having the processing power on the robot. This I believe is wrong. - Ian Parker On 10 August 2010 00:06, John G. Rose johnr

RE: [agi] Compressed Cross-Indexed Concepts

2010-08-11 Thread John G. Rose
-Original Message- From: Jim Bromer [mailto:jimbro...@gmail.com] Well, if it was a mathematical structure then we could start developing prototypes using familiar mathematical structures.  I think the structure has to involve more ideological relationships than mathematical. 

RE: [agi] Nao Nao

2010-08-11 Thread John G. Rose
: David Jones [mailto:davidher...@gmail.com] Way too pessimistic in my opinion. On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote: Aww, so cute. I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays sensory information back to the main servers

RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
Actually this is quite critical. Defining a chair - which would agree with each instance of a chair in the supplied image - is the way a chair should be defined and is the way the mind processes it. It can be defined mathematically in many ways. There is a particular one I would go for

RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
-Original Message- From: Jim Bromer [mailto:jimbro...@gmail.com] The question for me is not what the smallest pieces of visual information necessary to represent the range and diversity of kinds of objects are, but how would these diverse examples be woven into highly compressed

RE: RE: [agi] How To Create General AI Draft2

2010-08-09 Thread John G. Rose
notions no matter how good my arguments are and finds yet another reason, any reason will do, to say I'm still wrong. On Aug 9, 2010 2:18 AM, John G. Rose johnr...@polyplexic.com wrote: Actually this is quite critical. Defining a chair - which would agree with each instance of a chair

RE: [agi] Nao Nao

2010-08-09 Thread John G. Rose
Aww, so cute. I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays sensory information back to the main servers with all the other Nao's all collecting personal data in a massive multi-agent geo-distributed robo-network. So cuddly! And I wonder if it receives and
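
A minimal sketch of the telemetry loop imagined above, in Python: one sensor reading gets POSTed back to a central collection server. The endpoint, robot id, and sensor fields are all invented for illustration; a real Nao would be driven through Aldebaran's SDK, which is not used here.

    import json
    import time
    import urllib.request

    ENDPOINT = "http://collect.example.com/telemetry"  # hypothetical central server

    def read_sensors():
        # Placeholder readings; a real robot would sample cameras, sonar, joints.
        return {"robot_id": "nao-001", "timestamp": time.time(), "battery": 0.87}

    def relay_once():
        payload = json.dumps(read_sensors()).encode("utf-8")
        req = urllib.request.Request(ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)  # one sensory report back to the main servers

    # relay_once()  # would POST if the hypothetical server existed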

RE: [agi] Epiphany - Statements of Stupidity

2010-08-08 Thread John G. Rose
to use the Mafia term and produce arguments from the Qur'an against the militant position. There would be quite a lot of contracts to be had if there were a realistic prospect of doing this. - Ian Parker On 7 August 2010 06:50, John G. Rose johnr...@polyplexic.com wrote: Philosophical

RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
statements of stupidity - some of these are examples of cramming sophisticated thoughts into simplistic compressed text. Language is both intelligence enhancing and limiting. Human language is a protocol between agents. So there is minimalist data transfer; "I had no choice but to ..." is a
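
To make "minimalist data transfer" concrete: when agents share context, a message can be a short index into a shared codebook rather than the thought itself. A toy sketch in Python, with invented codebook entries:

    CODEBOOK = {
        0: "I had no choice but to comply.",
        1: "That claim needs supporting evidence.",
    }

    def transmit(thought_id: int) -> bytes:
        # Only one byte crosses the channel; shared context does the rest.
        return thought_id.to_bytes(1, "big")

    def receive(wire: bytes) -> str:
        return CODEBOOK[int.from_bytes(wire, "big")]

    assert receive(transmit(0)) == "I had no choice but to comply."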

RE: [agi] Epiphany - Statements of Stupidity

2010-08-06 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] The Turing test is not in fact a test of intelligence, it is a test of similarity with the human. Hence for a machine to be truly Turing it would have to make mistakes. Now any useful system will be made as

RE: [agi] Pretty worldchanging

2010-07-25 Thread John G. Rose
You have to give a toast, though, to Net entities like Wikipedia, I'd dare say one of humankind's greatest achievements. Then eventually over a few years it'll be available as a plug-in, as a virtual trepan thus reducing the effort of subsuming all that. And then maybe structural intelligence add-ins

RE: [agi] Clues to the Mind: Illusions / Vision

2010-07-25 Thread John G. Rose
Here is an example of superimposed images where you have to have a predisposed perception - http://www.youtube.com/watch?v=V1m0kCdC7co John From: deepakjnath [mailto:deepakjn...@gmail.com] Sent: Saturday, July 24, 2010 11:03 PM To: agi Subject: [agi] Clues to the Mind: Illusions /

RE: [agi] How do we hear music

2010-07-24 Thread John G. Rose
-Original Message- You have all missed one vital point. Music is repeating and it has a symmetry. In dancing (song and dance) moves are repeated in a symmetrical pattern. Question: why are we programmed to find symmetry? This question may be more core to AGI than appears at first

RE: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread John G. Rose
Make sure you study that up YKY :) John From: YKY (Yan King Yin, 甄景贤) [mailto:generic.intellige...@gmail.com] Sent: Thursday, July 15, 2010 8:59 AM To: agi Subject: [agi] OFF-TOPIC: University of Hong Kong Library Today, I went to the HKU main library: =) KY

RE: [agi] OFF-TOPIC: University of Hong Kong Library

2010-07-15 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] Ok Off topic, but not as far as you might think. YKY has posted in Creating Artificial Intelligence on a collaborative project. It is quite important to know exactly where he is. You see Taiwan uses the classical

RE: [agi] New KurzweilAI.net site... with my silly article sillier chatbot ;-p ;) ....

2010-07-12 Thread John G. Rose
These video/rendered chatbots have huge potential and will be taken in many different directions. They are gradually approaching a p-zombie-esque situation. They add multi-modal communication - body/facial language/expression and prosody. So even if the text alone is not too good

RE: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-11 Thread John G. Rose
Note: Theorem 1.7.1 There eRectively exists a universal computer. If you copy and paste this declaration the ff gets replaced with a circle cap R :) Not sure how this shows up... John From: Ben Goertzel [mailto:b...@goertzel.org] Sent: Friday, July 09, 2010 8:50 AM To: agi
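
The stray symbol is almost certainly a typographic ligature: PDFs often encode "ff" as the single code point U+FB00, which survives copy-paste and then renders oddly in mail clients. Unicode compatibility normalization expands it back to plain letters; a quick Python check (the sentence is reconstructed from the quote above):

    import unicodedata

    garbled = "There e\ufb00ectively exists a universal computer."  # U+FB00 = "ff" ligature
    fixed = unicodedata.normalize("NFKC", garbled)  # NFKC expands compatibility ligatures
    assert fixed == "There effectively exists a universal computer."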

RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
An AGI may not really think like we do, it may just execute code. Though I suppose you could program a lot of fuzzy loops and idle speculation, entertaining possibilities, having human-think envy... John From: Matt Mahoney [mailto:matmaho...@yahoo.com] Sent: Friday, July 02, 2010

RE: [agi] masterpiece on an iPad

2010-07-02 Thread John G. Rose
a function that simulates your mind for some arbitrary purpose determined by its programmer. -- Matt Mahoney, matmaho...@yahoo.com _ From: John G. Rose johnr...@polyplexic.com To: agi agi@v2.listbox.com Sent: Fri, July 2, 2010 11:39:23 AM Subject: RE: [agi] masterpiece on an iPad

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
to this conclusion I have the University of Surrey and CRESS in mind. - Ian Parker On 26 June 2010 14:36, John G. Rose johnr...@polyplexic.com wrote: -Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] How do you solve World Hunger? Does AGI have to? I think

RE: [agi] The problem with AGI per Sloman

2010-06-27 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] So an AGI would have to get established over a period of time for anyone to really care what it has to say about these types of issues. It could simulate things and come up with solutions but they would not get

RE: [agi] The problem with AGI per Sloman

2010-06-26 Thread John G. Rose
-Original Message- From: Ian Parker [mailto:ianpark...@gmail.com] How do you solve World Hunger? Does AGI have to? I think if it is truly G it has to. One way would be to find out what other people had written on the subject and analyse the feasibility of their solutions.

RE: [agi] The problem with AGI per Sloman

2010-06-24 Thread John G. Rose
I think some confusion occurs where AGI researchers want to build an artificial person versus artificial general intelligence. An AGI might be just a computational model running in software that can solve problems across domains. An artificial person would involve much else in addition to AGI.

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
-Original Message- From: Steve Richfield [mailto:steve.richfi...@gmail.com] My underlying thought here is that we may all be working on the wrong problems. Instead of working on the particular analysis methods (AGI) or self-organization theory (NN), perhaps if someone found a

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com wrote: -Original Message- From: Steve Richfield [mailto:steve.richfi...@gmail.com] My underlying thought here is that we may all be working on the wrong problems. Instead of working on the particular analysis methods

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
-Original Message- From: Steve Richfield [mailto:steve.richfi...@gmail.com] Really? Do networks such as botnets really care about this? Or does it apply? Anytime negative feedback can become positive feedback because of delays or phase shifts, this becomes an issue. Many

RE: [agi] A fundamental limit on intelligence?!

2010-06-21 Thread John G. Rose
. Appreciate it. What little EE training I did undergo was brief and painful :) On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com wrote: Of course, there is the big question of just what it is that is being attenuated in the bowels of an intelligent system. Usually

RE: [agi] just a thought

2009-01-14 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] --- On Wed, 1/14/09, Christopher Carr cac...@pdx.edu wrote: Problems with IQ notwithstanding, I'm confident that, were my silly IQ of 145 merely doubled, I could convince Dr. Goertzel to give me the majority of his assets, including control

RE: [agi] initial reaction to A2I2's call center product

2009-01-12 Thread John G. Rose
From: Ben Goertzel [mailto:b...@goertzel.org] Sent: Monday, January 12, 2009 3:42 AM To: agi@v2.listbox.com Subject: [agi] initial reaction to A2I2's call center product AGI company A2I2 has released a product for automating call center functionality, see...

RE: [agi] initial reaction to A2I2's call center product

2009-01-12 Thread John G. Rose
From: Bob Mottram [mailto:fuzz...@gmail.com] 2009/1/12 Ben Goertzel b...@goertzel.org: AGI company A2I2 has released a product for automating call center functionality We value your interest in our AGI related service. If you agree that AGI can have useful applications for call

RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
test benchmark Consciousness of X is: the idea or feeling that X is correlated with Consciousness of X ;-) ben g On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney matmaho...@yahoo.com wrote: --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote: What does consciousness have to do

RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
is important for compression, then I suggest you write two compression programs, one conscious and one not, and see which one compresses better. Otherwise, this is nonsense. -- Matt Mahoney, matmaho...@yahoo.com --- On Tue, 12/30/08, John G. Rose johnr...@polyplexic.com wrote: From: John G. Rose johnr

RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] --- On Sun, 12/28/08, John G. Rose johnr...@polyplexic.com wrote: So maybe for improved genetic algorithms used for obtaining max compression there needs to be a consciousness component in the agents? Just an idea I think

RE: [agi] Universal intelligence test benchmark

2008-12-29 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote: Agent knowledge is not only passed on in their genes, it is also passed around to other agents Does agent death hinder advances in intelligence or enhance

[agi] Alternative Cicuitry

2008-12-28 Thread John G. Rose
Reading this - http://www.nytimes.com/2008/12/23/health/23blin.html?ref=science makes me wonder what other circuitry we have that's discouraged from being accepted. John

RE: [agi] Universal intelligence test benchmark

2008-12-28 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] --- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote: Well I think consciousness must be some sort of out of band intelligence that bolsters an entity in terms of survival. Intelligence probably stratifies or optimizes

RE: [agi] Universal intelligence test benchmark

2008-12-27 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] --- On Sat, 12/27/08, John G. Rose johnr...@polyplexic.com wrote: How does consciousness fit into your compression intelligence modeling? It doesn't. Why is consciousness important? I was just prodding you on this. Many

RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] --- On Fri, 12/26/08, Philip Hunt cabala...@googlemail.com wrote: Humans aren't particularly good at compressing data. Does this mean humans aren't intelligent, or is it a poor definition of intelligence? Humans are very good at

RE: [agi] Universal intelligence test benchmark

2008-12-26 Thread John G. Rose
From: Matt Mahoney [mailto:matmaho...@yahoo.com] How does consciousness fit into your compression intelligence modeling? It doesn't. Why is consciousness important? I was just prodding you on this. Many people on this list talk about the requirements of consciousness for AGI and I was

RE: [agi] Relevance of SE in AGI

2008-12-22 Thread John G. Rose
I've been experimenting with extending OOP to potentially implement functionality that could make a particular AGI design easier to build. The problem with SE is that it brings along much baggage that can totally obscure AGI thinking. Many AGI people and AI people are automatic top of the

RE: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-20 Thread John G. Rose
From: Mike Tintner [mailto:tint...@blueyonder.co.uk] Sound silly? Arguably the most essential requirement for a true human-level GI is to be able to consider any object whatsoever as a thing. It's a cognitively awesome feat. It means we can conceive of literally any thing as a thing -

RE: [agi] Should I get a PhD?

2008-12-19 Thread John G. Rose
Mike, Exercising rational thinking need not force exposure of oneself into being sequestered as a rationalist. And utilizing creativity effectively requires a context in some domain. The domain context typically involves application of rationality. A temporary absence of creativity does not

RE: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread John G. Rose
Top posted here: Using your bricks to construct something, you have to construct it within constraints. Constraints is the key word. Whatever bricks you are using they have their own limiting properties. You CANNOT build anything any way you please. Just by defining bricks you are already applying

RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread John G. Rose
From: Trent Waddington [mailto:[EMAIL PROTECTED] On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I mean that people are free to decide if others feel pain. For example, a scientist may decide that a mouse does not feel pain when it is stuck in the eye with a needle

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] I completed the first draft of a technical paper on consciousness the other day. It is intended for the AGI-09 conference, and it can be found at: http://susaro.com/wp- content/uploads/2008/11/draft_consciousness_rpwl.pdf Um...

RE: [agi] A paper that actually does solve the problem of consciousness

2008-11-16 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] Three things. First, David Chalmers is considered one of the world's foremost researchers in the consciousness field (he is certainly now the most celebrated). He has read the argument presented in my paper, and he has discussed it

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-14 Thread John G. Rose
From: Jiri Jelinek [mailto:[EMAIL PROTECTED] On Fri, Nov 14, 2008 at 2:07 AM, John G. Rose [EMAIL PROTECTED] wrote: there are many computer systems now, domain-specific intelligent ones whose life is more important than mine. Some would say that the battle is already lost. For now

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
From: Jiri Jelinek [mailto:[EMAIL PROTECTED] On Wed, Nov 12, 2008 at 2:41 AM, John G. Rose [EMAIL PROTECTED] wrote: is it really necessary for an AGI to be conscious? Depends on how you define it. If you think it's about feelings/qualia then - no - you don't need that [potentially

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-13 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] I thought what he said was a good description more or less. Out of 600 million years there may be only a fraction of that which is an improvement but it's still there. How do you know, beyond a reasonable doubt, that any other being

RE: [agi] Ethics of computer-based cognitive experimentation

2008-11-11 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] John LaMuth wrote: Reality check *** Consciousness is an emergent spectrum of subjectivity spanning 600 mill. years of evolution involving mega-trillions of competing organisms, probably selecting for obscure quantum

RE: [agi] Cloud Intelligence

2008-11-03 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] True, we can't explain why the human brain needs 10^15 synapses to store 10^9 bits of long term memory (Landauer's estimate). Typical neural networks store 0.15 to 0.25 bits per synapse. This study -
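
The arithmetic behind that puzzle, taking the quoted figures at face value: 10^9 bits spread over 10^15 synapses is about 10^-6 bits per synapse, five to six orders of magnitude below the 0.15-0.25 bits per synapse typical of artificial neural networks.

    long_term_bits = 1e9        # Landauer's estimate, as quoted
    synapses = 1e15
    bits_per_synapse = long_term_bits / synapses
    print(bits_per_synapse)             # 1e-06
    print(0.15 / bits_per_synapse)      # ~150,000x gap vs. a typical neural network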

RE: [agi] the universe is computable [Was: Occam's Razor and its abuse]

2008-11-02 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote: You can't compute the universe within this universe because the computation would have to include itself. Exactly. That is why our model of physics must be probabilistic

RE: [agi] Cloud Intelligence

2008-11-02 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote: From: Matt Mahoney [mailto:[EMAIL PROTECTED] Cloud computing is compatible with my proposal for distributed AGI. It's just not big enough. I would need 10^10 processors, each

RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
From: Russell Wallace [mailto:[EMAIL PROTECTED] On Thu, Oct 30, 2008 at 6:45 AM, [EMAIL PROTECTED] wrote: It sure seems to me that the availability of cloud computing is valuable to the AGI project. There are some claims that maybe intelligent programs are still waiting on sufficient

RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
On Thu, Oct 30, 2008 at 11:15 AM, Russell Wallace [EMAIL PROTECTED] wrote: On Thu, Oct 30, 2008 at 3:07 PM, John G. Rose [EMAIL PROTECTED] wrote: My suspicion though is that say you had 100 physical servers and then 100 physical cloud servers. You could hand tailor your distributed

RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
From: Russell Wallace [mailto:[EMAIL PROTECTED] On Thu, Oct 30, 2008 at 3:42 PM, John G. Rose [EMAIL PROTECTED] wrote: Not talking custom hardware, when you take your existing app and apply it to the distributed resource and network topology (your 100 servers) you can structure

RE: [agi] Cloud Intelligence

2008-10-30 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] Cloud computing is compatible with my proposal for distributed AGI. It's just not big enough. I would need 10^10 processors, each 10^3 to 10^6 times more powerful than a PC. The only thing we have that comes close to those numbers is insect
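
Rough scale of that proposal, assuming a commodity PC at about 10^9 operations per second (an assumption for illustration, not a figure from the message):

    pc_ops = 1e9                         # assumed ops/sec for one PC
    processors = 1e10
    low, high = 1e3, 1e6                 # "10^3 to 10^6 times more powerful"
    print(f"{processors * pc_ops * low:.0e} ops/sec")   # 1e+22
    print(f"{processors * pc_ops * high:.0e} ops/sec")  # 1e+25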

RE: [agi] the universe is computable [Was: Occam's Razor and its abuse]

2008-10-30 Thread John G. Rose
You can't compute the universe within this universe because the computation would have to include itself. Also there's not enough energy to power the computation. But if the universe is not what we think it is, perhaps it is computable since all kinds of assumptions are made about it,

RE: [agi] Cloud Intelligence

2008-10-29 Thread John G. Rose
From: Bob Mottram [mailto:[EMAIL PROTECTED] Beware of putting too much stuff into the cloud. Especially in the current economic climate clouds could disappear without notice (i.e. unrecoverable data loss). Also, depending upon terms and conditions any data which you put into the cloud may

RE: [agi] On programming languages

2008-10-25 Thread John G. Rose
From: Ben Goertzel [mailto:[EMAIL PROTECTED] Somewhat similarly, I've done coding on Windows before, but I dislike the operating system quite a lot, so in general I try to avoid any projects where I have to use it. However, if I found some AGI project that I thought were more promising

RE: [agi] META: A possible re-focusing of this list

2008-10-20 Thread John G. Rose
Just an idea - not sure if it would work or not - 3 lists: [AGI-1], [AGI-2], [AGI-3]. Sub-content is determined by the posters themselves. Same amount of emails initially but partitioned up. Wonder what would happen? John

RE: [agi] Re: Defining AGI

2008-10-17 Thread John G. Rose
From: Ben Goertzel [mailto:[EMAIL PROTECTED] As Ben has pointed out language understanding is useful to teach AGI. But if we use the domain of mathematics we can teach AGI by formal expressions more easily and we understand these expressions as well. - Matthias That is not clear --

RE: [agi] NEWS: Scientist develops programme to understand alien languages

2008-10-17 Thread John G. Rose
From: Pei Wang [mailto:[EMAIL PROTECTED] ... even an alien language far removed from any on Earth is likely to have recognisable patterns that could help reveal how intelligent the life forms are. This is true unless the alien life form existed in mostly order and communicated via the

RE: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread John G. Rose
This is cool; it's kind of like a combo of Omni, a desktop publishing fanzine with 3DSMax cover page, and randomly gathered techno tidbits all encapsulated in a secure PDF. The skin phone is neat and the superimposition eye contact lens by U-Dub has value. I wonder where they got that idea from,

RE: [agi] META: A possible re-focusing of this list

2008-10-16 Thread John G. Rose
From: Eric Burton [mailto:[EMAIL PROTECTED] Honestly, if the idea is to wave our hands at one another's ideas then let's at least see something on the table. I'm happy to discuss my work with natural language parsing and mood evaluation for low-bandwidth human mimicry, for instance, because

RE: AW: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-16 Thread John G. Rose
From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED] In my opinion, the domain of software development is far too ambitious for the first AGI. Software development is not a closed domain. The AGI will need at least knowledge about the domain of the problems for which the AGI shall write a

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread John G. Rose
From: Ben Goertzel [mailto:[EMAIL PROTECTED] One possibility would be to more narrowly focus this list, specifically on **how to make AGI work**. Potentially, there could be another list, something like agi-philosophy, devoted to philosophical and weird-physics and other discussions

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread John G. Rose
From: BillK [mailto:[EMAIL PROTECTED] I agree. I support more type 1 discussions. I have felt for some time that an awful lot of time-wasting has been going on here. I think this list should mostly be for computer tech discussion about methods of achieving specific results on the

RE: [agi] META: A possible re-focusing of this list

2008-10-15 Thread John G. Rose
From: Terren Suydam [mailto:[EMAIL PROTECTED] This is a publicly accessible forum with searchable archives... you don't necessarily have to be subscribed and inundated to find those nuggets. I don't know any funding decision makers myself, but if I were in control of a budget I'd be using

RE: [agi] Dangerous Knowledge - Update

2008-10-01 Thread John G. Rose
From: Brad Paulsen [mailto:[EMAIL PROTECTED] Sorry, but in my drug-addled state I gave the wrong URI for the Dangerous Knowledge videos on YouTube. The one I gave was just to the first part of the Cantor segment. All of the segments can be reached from the link below. You can recreate

RE: [agi] Artificial humor

2008-09-11 Thread John G. Rose
From: John LaMuth [mailto:[EMAIL PROTECTED] As I have previously written, this issue boils down to whether one is serious or one is not to be taken this way (a meta-order perspective)... the key feature in humor and comedy -- the meta-message being don't take me seriously That is why I

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-08 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sun, 9/7/08, John G. Rose [EMAIL PROTECTED] wrote: From: John G. Rose [EMAIL PROTECTED] Subject: RE: Language modeling (was Re: [agi] draft for comment) To: agi@v2.listbox.com Date: Sunday, September 7, 2008, 9:15 AM From: Matt

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-07 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Sat, 9/6/08, John G. Rose [EMAIL PROTECTED] wrote: Compression in itself has the overriding goal of reducing storage bits. Not the way I use it. The goal is to predict what the environment will do next. Lossless compression is a way

RE: Language modeling (was Re: [agi] draft for comment)

2008-09-06 Thread John G. Rose
Thinking out loud here as I find the relationship between compression and intelligence interesting: Compression in itself has the overriding goal of reducing storage bits. Intelligence has coincidental compression. There is resource management there. But I do think that it is not ONLY

RE: [agi] Groundless reasoning -- Chinese Room

2008-08-05 Thread John G. Rose
From: Harry Chesley [mailto:[EMAIL PROTECTED] Searle's Chinese Room argument is one of those things that makes me wonder if I'm living in the same (real or virtual) reality as everyone else. Everyone seems to take it very seriously, but to me, it seems like a transparently meaningless

RE: [agi] Any further comments from lurkers??? [WAS do we need a stronger politeness code on this list?]

2008-08-03 Thread John G. Rose
Well, even though there was bloodshed, Edward was right on slamming Richard on the complex systems issue. This issue needs to be vetted, sorted out, either laid to rest or incorporated into others' ideas. Perhaps in some of the scientists' minds it has been laid to rest. In my mind it is there,

[agi]

2008-07-21 Thread John G. Rose
, John G. Rose [EMAIL PROTECTED] wrote: From: Abram Demski [mailto:[EMAIL PROTECTED] No, not especially familiar, but it sounds interesting. Personally I am interested in learning formal grammars to describe data, and there are well-established equivalences between grammars and automata, so

RE: [agi] Patterns and Automata

2008-07-20 Thread John G. Rose
From: Abram Demski [mailto:[EMAIL PROTECTED] No, not especially familiar, but it sounds interesting. Personally I am interested in learning formal grammars to describe data, and there are well-established equivalences between grammars and automata, so the approaches are somewhat compatible.

RE: [agi] Patterns and Automata

2008-07-17 Thread John G. Rose
From: Abram Demski [mailto:[EMAIL PROTECTED] John, What kind of automata? Finite-state automata? Pushdown? Turing machines? Does CA mean cellular automata? --Abram Hi Abram, FSM, semiautomata, groups w/o actions, semigroups with action in the observer, etc... CA is for cellular automata.
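
A minimal example of the kind of finite-state machinery listed above: a three-state FSM over the binary alphabet that accepts exactly the strings containing "11". The pattern and alphabet are invented for illustration:

    def accepts_11(s: str) -> bool:
        # States: 0 = no progress, 1 = saw one '1', 2 = saw "11" (accepting, absorbing)
        delta = {
            (0, "0"): 0, (0, "1"): 1,
            (1, "0"): 0, (1, "1"): 2,
            (2, "0"): 2, (2, "1"): 2,
        }
        state = 0
        for ch in s:
            state = delta[(state, ch)]
        return state == 2

    assert accepts_11("0110") and not accepts_11("0101")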

RE: [agi] Patterns and Automata

2008-07-16 Thread John G. Rose
From: Pei Wang [mailto:[EMAIL PROTECTED] On Mon, Jul 7, 2008 at 12:49 AM, John G. Rose [EMAIL PROTECTED] wrote: In pattern recognition, are some patterns not expressible with automata? I'd rather say not easily/naturally expressible. Automata is not a popular technique in pattern

RE: [agi] Patterns and Automata

2008-07-06 Thread John G. Rose
From: Pei Wang [mailto:[EMAIL PROTECTED] Automata is usually used with a well-defined meaning. See http://en.wikipedia.org/wiki/Automata_theory On the contrary, pattern has many different usages in different theories, though intuitively it indicates some observed structures consisting of

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-02 Thread John G. Rose
From: Richard Loosemore [mailto:[EMAIL PROTECTED] Ah, but now you are stating the Standard Reply, and what you have to understand is that the Standard Reply boils down to this: We are so smart that we will figure a way around this limitation, without having to do anything so crass as just

RE: [agi] Simple example of the complex systems problem, for those in a hurry

2008-07-01 Thread John G. Rose
Well I can spend a lot of time replying to this since it is a tough subject. The CB system is a good example; my thinking doesn't involve CBs yet, so the organized mayhem would be of a different form, and I was thinking of the complexity being integrated differently. What you are saying makes sense in

RE: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread John G. Rose
Could you say that it takes a complex system to know a complex system? If an AGI is going to try to, say, predict the weather, it doesn't have infinite CPU cycles to simulate, so it'll have to come up with something better. Sure it can build a probabilistic historical model but that is kind of
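
A toy version of the "probabilistic historical model" dismissed here: a first-order Markov chain over discretized weather states, estimated from invented history. It illustrates why the approach is cheap but shallow compared with simulation:

    from collections import Counter, defaultdict

    history = ["sun", "sun", "rain", "sun", "rain", "rain", "sun"]  # invented data

    counts = defaultdict(Counter)
    for today, tomorrow in zip(history, history[1:]):
        counts[today][tomorrow] += 1

    def p_next(today: str, nxt: str) -> float:
        c = counts[today]
        return c[nxt] / sum(c.values())

    print(p_next("rain", "rain"))  # P(rain tomorrow | rain today), here 1/3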

RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED] The problem of consciousness is not only a hard problem because of unknown mechanisms in the brain but it is a problem of finding the DEFINITION of necessary conditions for consciousness. I think, consciousness without intelligence is not

RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
From: A. T. Murray [mailto:[EMAIL PROTECTED] The abnormalis sapiens Herr Doktor Steve Richfield wrote: Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU THINKING? prin Goertzel genesthai, ego eimi http://www.scn.org/~mentifex/mentifex_faq.html My hair

RE: [agi] Consciousness vs. Intelligence

2008-06-08 Thread John G. Rose
From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED] For general intelligence some components and sub-components of consciousness need to be there and some don't. And some could be replaced with a human operator as in an augmentation-like system. Also some components could be designed

RE: [agi] Pearls Before Swine...

2008-06-08 Thread John G. Rose
John G. Rose wrote: Does this mean that now maybe you can afford to integrate some AJAX into that JavaScript AI mind of yours? John No, because I remain largely ignorant of Ajax. http://mind.sourceforge.net/Mind.html and the JavaScript Mind User Manual (JMUM) at http

RE: [agi] Paradigm Shifting regarding Consciousness

2008-06-08 Thread John G. Rose
I don't think anyone anywhere on this list ever suggested time sequential was required for consciousness. Now as data streams in from sensory receptors that initially is time sequential. But as it is processed that changes to where time is changed. And time is sort of like an index eh? Or is time

RE: [agi] teme-machines

2008-06-05 Thread John G. Rose
She doesn't really expound on the fact that humans have the power to choose. I think memetics and temes have potential. You can't deny their existence but is it only that? Sure, my middle finger is a meme. But there are mechanics behind it. And those mechanics have a lot of regression and

RE: [agi] Did this message get completely lost?

2008-06-04 Thread John G. Rose
From: Brad Paulsen [mailto:[EMAIL PROTECTED] Not exactly (to start with, you can *never* be 100% sure, try though you might :-) ). Take all of the investigations into rockness since the dawn of homo sapiens and we still only have a 0.9995 probability that rocks are not conscious.

RE: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread John G. Rose
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] Actually, the nuclear spins in the rock encode a single state of an ongoing computation (which is conscious). Successive states occur in the rock's counterparts in adjacent branes of the metauniverse, so that the rock is conscious not of

RE: [agi] CONSCIOUSNESS AS AN ARCHITECTURE OF COMPUTATION

2008-06-04 Thread John G. Rose
From: Ed Porter [mailto:[EMAIL PROTECTED] ED PORTER I am not an expert at computational efficiency, but I think graph structures like semantic nets are probably close to as efficient as possible given the type of connectionism they are representing and the type of
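
A semantic net in its cheapest computational form, for reference: an adjacency list of labeled edges, which is roughly why lookups stay near-optimal for this style of connectionism. The nodes and relations are invented:

    from collections import defaultdict

    edges = defaultdict(list)  # node -> [(relation, node), ...]

    def relate(a: str, rel: str, b: str) -> None:
        edges[a].append((rel, b))

    relate("canary", "isa", "bird")
    relate("bird", "can", "fly")

    print(edges["canary"])  # [('isa', 'bird')]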

RE: [agi] Did this message get completely lost?

2008-06-04 Thread John G. Rose
From: Brad Paulsen [mailto:[EMAIL PROTECTED] I agree that it is for us in the modern day technological society. But it may not have been always the case. We have been grounded by reason. Before reason it may have been largely supernatural. That's why sometimes I think AGI's could start off
