Re: [agi] organising parallel processes

2008-05-04 Thread Mike Dougherty
On Sun, May 4, 2008 at 10:00 PM, Stephen Reed [EMAIL PROTECTED] wrote: Matt (or anyone else), have you gotten as far as thinking about NAT hole punching or some other solution for peer-to-peer? NAT hole punching has no solution because it's not a problem you can fix. If I administrate the

Re: [agi] organising parallel processes

2008-05-04 Thread Mike Dougherty
On Sun, May 4, 2008 at 11:28 PM, Stephen Reed [EMAIL PROTECTED] wrote: be like Skype, the popular non-scum Internet phone service that also performs NAT hole punching (a.k.a. NAT traversal). I was not aware Skype worked like that- thanks for the info. If you are using a similar form of
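For context, the mechanism under discussion works roughly like this: each peer sends outbound UDP packets toward the other peer's public address (learned from a rendezvous server), which opens a mapping in its own NAT that the other side's packets can then come back through. Below is a minimal sketch, assuming the peer's public endpoint is already known out of band; the addresses are hypothetical, symmetric NATs and keepalives are ignored, and this is not Skype's actual implementation.

    import socket

    LOCAL_PORT = 40000                  # port this peer binds behind its NAT
    PEER_ADDR = ("203.0.113.7", 40001)  # peer's public endpoint (hypothetical)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))
    sock.settimeout(2.0)

    # Sending first "punches" an outbound mapping in our own NAT, so the
    # peer's reply can be routed back in through it.
    for _ in range(5):
        sock.sendto(b"punch", PEER_ADDR)
        try:
            data, addr = sock.recvfrom(1024)
            print("hole punched, heard from", addr, data)
            break
        except socket.timeout:
            continue  # peer may not have started punching yet; retry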

Re: [agi] Comments from a lurker...

2008-04-14 Thread Mike Dougherty
On Mon, Apr 14, 2008 at 4:17 PM, Steve Richfield [EMAIL PROTECTED] wrote: You've merely been a *TROLL* and gotten the appropriate response. Thanks for playing but we have no parting gifts for you. Who is the we you are referencing? Do you have a mouse in your pocket, or is that the Royal

Re: [agi] Some thoughts of an AGI designer

2008-03-12 Thread Mike Dougherty
On Wed, Mar 12, 2008 at 8:54 PM, Charles D Hixson [EMAIL PROTECTED] wrote: I think that you need to look into the simulations that have been run involving Evolutionarily Stable Strategies. Friendly covers many strategies, including (I think) Dove and Retaliator. Retaliator is almost an

Re: [agi] A possible less ass-backward way of computing naive bayesian conditional probabilities

2008-02-25 Thread Mike Dougherty
On Mon, Feb 25, 2008 at 2:51 PM, Ed Porter [EMAIL PROTECTED] wrote: But that does stop people from modeling systems in a simplified manner by acting as if these limitations were met. Naïve Bayesian methods are commonly used. I have read multiple papers saying that in many cases it
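The method being discussed is easy to state concretely: a naive Bayes classifier scores each class by P(class) times the product of P(feature | class), acting as if the features were conditionally independent even when they are not. A toy sketch with made-up training data:

    import math
    from collections import Counter, defaultdict

    # Toy training data: (feature set, label)
    data = [({"buy", "cheap"}, "spam"), ({"meeting", "agenda"}, "ham"),
            ({"cheap", "pills"}, "spam"), ({"project", "agenda"}, "ham")]

    label_counts = Counter(label for _, label in data)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in data:
        word_counts[label].update(words)
        vocab |= words

    def log_posterior(words, label):
        # log P(label) + sum of log P(word | label) with add-one smoothing;
        # the "naive" step is treating the words as conditionally independent
        total = sum(word_counts[label].values())
        lp = math.log(label_counts[label] / len(data))
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return lp

    msg = {"cheap", "pills"}
    print(max(label_counts, key=lambda c: log_posterior(msg, c)))  # -> spam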

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Mike Dougherty
On Jan 19, 2008 8:24 PM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all Turing also committed suicide. That's a personal solution to the Halting problem I do not plan to

Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Mike Dougherty
On Jan 14, 2008 10:10 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Any fool can mathematize a definition of a commonsense idea without actually saying anything new. Ouch. Careful. :) That may be true, but it takes $10M worth of computer hardware to disprove. disclaimer: that was humor

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Mike Dougherty
On Jan 10, 2008 9:59 AM, Stephen Reed [EMAIL PROTECTED] wrote: and that the system is to learn constructions for your examples. The below dialog is Controlled English, in which the system understands and generates constrained syntax and vocabulary. [user] The elements of a shit-list can be

Re: [agi] Incremental Fluid Construction Grammar released

2008-01-10 Thread Mike Dougherty
On Jan 10, 2008 10:57 AM, Stephen Reed [EMAIL PROTECTED] wrote: If I understand your question correctly it asks whether a non-expert user can be guided to use Controlled English in a dialog system. In This is an idea that I wanted to try at Cycorp but Doug Lenat said that it had been tried

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Mike Dougherty
On Jan 6, 2008 3:07 PM, a [EMAIL PROTECTED] wrote: Creativity is a byproduct of analogical reasoning, or abstraction. It has nothing to do with symbols or genetic algorithms! GA is too computationally complex to generate creative solutions. care to explain what sounds so absolute as to

Re: [agi] NL interface

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 12:45 AM, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: That's why I want to build an interface that lets users provide grammatical information and the likes. The exact form of the GUI is still unknown -- maybe like a panel with a lot of templates to choose from, or like the

Re: [agi] OpenCog

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system would be just as easy to build as any other. ... considering the proliferation of AGI

Re: [agi] OpenCog

2007-12-28 Thread Mike Dougherty
On Dec 28, 2007 1:55 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Mike Dougherty wrote: On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote: Actually, that would be a serious misunderstanding of the framework and development environment that I am building. Your system

Re: [agi] AGI and Deity

2007-12-22 Thread Mike Dougherty
On Dec 22, 2007 8:15 PM, Philip Goetz [EMAIL PROTECTED] wrote: Dawkins trivializes religion from his comfortable first world perspective ignoring the way of life of hundreds of millions of people and offers little substitute for what religion does and has done for civilization and what has

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Mike Dougherty
On Dec 14, 2007 8:33 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote: So, if a certain nation were to make laws allowing this, and to encourage research into this, then potentially they could gain a dramatic advantage over other nations... There does therefore seem a possibility for a brain

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Mike Dougherty
On Dec 14, 2007 10:07 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote: If we're not making people smarter with currently available resources, why would we invest in research to discover expensive new technologies to make people smarter? We need that money to invest in research for expensive

Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-12 Thread Mike Dougherty
On 12/12/07, James Ratcliff [EMAIL PROTECTED] wrote: This would allow a large amount of knowledge to be extracted in a distributed manner, keeping track of the quality of information gathered from each person as a trust metric, and many facts would be gathered and checked for truth.

Re: [agi] The Function of Emotions is Torture

2007-12-12 Thread Mike Dougherty
On Dec 12, 2007 9:27 PM, Mike Tintner [EMAIL PROTECTED] wrote: It also shows a very limited understanding of emotions. What do you hope to convey by making comments like this? I often wonder how arrogance and belittling others for their opinions has ever made a positive contribution to a

Re: [agi] AGI and Deity

2007-12-10 Thread Mike Dougherty
On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote: Dawkins trivializes religion from his comfortable first world perspective ignoring the way of life of hundreds of millions of people and offers little substitute for what religion does and has done for civilization and what has

Re: Re[4]: [agi] Do we need massive computational capabilities?

2007-12-08 Thread Mike Dougherty
On Dec 8, 2007 5:33 PM, Dennis Gorelik [EMAIL PROTECTED] wrote: What you describe - is set of AGI nodes. AGI prototype is just one of such node. AGI researcher doesn't have to develop all set at once. It's quite sufficient to develop only one AGI node. Such node will be able to work on

Re: Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Dougherty
On Dec 7, 2007 7:41 PM, Dennis Gorelik [EMAIL PROTECTED] wrote: No, my proposal requires lots of regular PCs with regular network connections. Properly connected set of regular PCs would usually have way more power than regular PC. That makes your hardware request special. My point is -

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Dougherty
On Dec 6, 2007 8:23 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote: On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote: resistance to moving onto the second stage. You have enough psychoanalytical understanding, I think, to realise that the unusual length of your reply to me may

Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-04 Thread Mike Dougherty
On Dec 3, 2007 11:03 PM, Bryan Bishop [EMAIL PROTECTED] wrote: On Monday 03 December 2007, Mike Dougherty wrote: Another method of doing search agents, in the mean time, might be to take neural tissue samples (or simple scanning of the brain) and try to simulate a patch of neurons via

Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Mike Dougherty
On Dec 3, 2007 5:07 PM, Matt Mahoney [EMAIL PROTECTED] wrote: When a user asks a question or posts information, the message would be broadcast to many nodes, which could choose to ignore them or relay them to other nodes that it believes would find the message more relevant. Eventually the
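One way to read the scheme Matt describes is as relevance-gated flooding: each node keeps a profile of topics, stores messages that match it, and relays them only to neighbors whose profiles also overlap. The class names, the set-overlap heuristic, and the TTL guard below are my own assumptions for illustration, not part of his proposal.

    class Node:
        def __init__(self, name, topics):
            self.name, self.topics, self.neighbors = name, set(topics), []

        def receive(self, message, topics, ttl):
            if ttl <= 0:
                return  # TTL guard keeps a cyclic network from looping forever
            if self.topics & topics:
                print(f"{self.name} keeps: {message}")
            for n in self.neighbors:
                if n.topics & topics:   # relay only where it seems relevant
                    n.receive(message, topics, ttl - 1)

    a, b, c = Node("a", ["nat"]), Node("b", ["nat", "p2p"]), Node("c", ["gui"])
    a.neighbors, b.neighbors = [b, c], [c]
    a.receive("how do I traverse a NAT?", {"nat"}, ttl=2)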

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Dougherty
On Dec 3, 2007 12:12 PM, Mike Tintner [EMAIL PROTECTED] wrote: I get it : you and most other AI-ers are equating hard with very, very complex, right? But you don't seriously think that the human mind successfully deals with language by massive parallel computation, do you? Very very complex

Re: [agi] Where are the women?

2007-11-28 Thread Mike Dougherty
On Nov 28, 2007 9:20 AM, Mike Tintner [EMAIL PROTECTED] wrote: Sunday, November 25, 2007 Think Geek. Bet you're not picturing a woman. Nothing about a [computer] geek necessarily implies gender at all. To be fair, ask this same question but replace women with any other 'minority' and see if

Re: [agi] Where are the women?

2007-11-28 Thread Mike Dougherty
On Nov 28, 2007 9:23 PM, Mike Tintner [EMAIL PROTECTED] wrote: An open-ended, ambiguous language is in fact the sine qua non of AGI. Thank you for indirectly pointing that out to me. Would you agree that an absolutely precise language with zero ambiguity would be somewhat stifling for use in a

Re: Re[4]: [agi] Funding AGI research

2007-11-21 Thread Mike Dougherty
On Nov 20, 2007 8:27 PM, Dennis Gorelik [EMAIL PROTECTED] wrote: Start with weak AI programs. That would push technology envelope further and further and in the end AGI will be possible. Yeah - because weak AI is so simple. Why not just make some run-of-the-mill narrow AI with a single goal of

Re: [agi] Human memory and number of synapses.. P.S.

2007-10-20 Thread Mike Dougherty
On 10/20/07, Mark Waser [EMAIL PROTECTED] wrote: Images are *not* an efficient way to store data. Unless they are three-dimensional images, they lack data. Normally, they include a lot of unnecessary or redundant data. It is very, very rare that a computer stores any but the smallest image

Re: [agi] Poll

2007-10-18 Thread Mike Dougherty
On 10/18/07, Derek Zahn [EMAIL PROTECTED] wrote: Because neither of these things can be done at present, we can barely even talk to each other about things like goals, semantics, grounding, intelligence, and so forth... the process of taking these unknown and perhaps inherently complex things

Re: [agi] Do the inference rules of categorical logic make sense?

2007-10-07 Thread Mike Dougherty
On 10/7/07, Charles D Hixson [EMAIL PROTECTED] wrote: ... logic is unsuited for conversation... what a great quote

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-06 Thread Mike Dougherty
On 10/6/07, Richard Loosemore [EMAIL PROTECTED] wrote: In my use of GoL in the paper I did emphasize the prediction part at first, but I then went on (immediately) to talk about the problem of finding hypotheses to test. Crucially, I ask if it is reasonable to suppose that Conway could have

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-06 Thread Mike Dougherty
On 10/6/07, Richard Loosemore [EMAIL PROTECTED] wrote: I am sorry, Mike, I have to give up. What you say is so far away from what I said in the paper that there is just no longer any point of contact. oh. So we weren't having a discussion. You were having a lecture and I was missing the

Re: [agi] Religion-free technical content

2007-10-05 Thread Mike Dougherty
On 10/5/07, Mark Waser [EMAIL PROTECTED] wrote: Then I guess we are in perfect agreement. Friendliness is what the average person would do. Which one of the words in And not my proposal wasn't clear? As far as I am concerned, friendliness is emphatically not what the average person

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote: My stock example: planetary motion. Newton (actually Tycho Brahe, Kepler, et al) observed some global behavior in this system: the orbits are elliptical and motion follows Kepler's other laws. This corresponds to someone seeing Game of

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Linas Vepstas [EMAIL PROTECTED] wrote: To be abstract, you could substitute semi-Thue system, context-free grammar, first-order logic, Lindenmayer system, history monoid, etc. for GoL, and still get an equivalent argument about complexity and predictability. Singling out GoL as

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-04 Thread Mike Dougherty
On 10/4/07, Richard Loosemore [EMAIL PROTECTED] wrote: Do it then. You can start with interesting=cyclic. should GoL gliders be considered cyclic? I personally think the candidate-AGI that finds a glider to be similar to a local state of cells from N iterations earlier to be particularly
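The glider question is checkable directly: a glider is cyclic only up to translation, returning to the same shape after four generations displaced one cell diagonally. A small sketch that verifies this:

    from collections import Counter

    def step(cells):
        # One Game of Life generation over a sparse set of live cells.
        counts = Counter((x + dx, y + dy) for (x, y) in cells
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    later = glider
    for _ in range(4):
        later = step(later)
    shifted = {(x + 1, y + 1) for (x, y) in glider}
    print(later == shifted)  # True: same shape, displaced one cell diagonally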

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-04 Thread Mike Dougherty
On 10/4/07, Richard Loosemore [EMAIL PROTECTED] wrote: All understood. Remember, though, that the original reason for talking about GoL was the question: Can there ever be a scientific theory that predicts all the interesting creatures given only the rules? The question of getting something

Re: [agi] intelligent compression

2007-10-03 Thread Mike Dougherty
On 10/3/07, Matt Mahoney [EMAIL PROTECTED] wrote: The higher levels detect complex objects like airplanes or printed words or faces. We could (lossily) compress images much smaller if we knew how to recognize these features. The idea would be to compress a movie to a written script, then

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote: In fact, if the average AI post-grad of today had such hardware to play with, things would really start jumping. Within ten years the equivalents of such machines could easily be sold for somewhere between $10k and $100k, and lots of

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter [EMAIL PROTECTED] wrote: I think your notion that post-grads with powerful machines would only operate in the space of ideas that don't work is unfair. Yeah, i can agree - it was harsh. My real intention was to suggest that NOT having a bigger computer is not

[agi] intelligent compression

2007-10-02 Thread Mike Dougherty
On 9/22/07, Matt Mahoney [EMAIL PROTECTED] wrote: You understand that I am not proposing to solve AGI by using text compression. I am proposing to test AI using compression, as opposed to something like the Turing test. The reason I use compression is that the test is fast, objective, and
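As a crude illustration of the test, score a compressor by the bits per character it achieves on a corpus; a model that knows more about the text should score lower. zlib here is only a stand-in baseline, and corpus.txt is a hypothetical input file.

    import zlib

    text = open("corpus.txt", "rb").read()   # hypothetical test corpus
    packed = zlib.compress(text, 9)
    bpc = 8 * len(packed) / len(text)
    print(f"{len(text)} bytes -> {len(packed)} bytes ({bpc:.2f} bits/char)")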

Re: [agi] intelligent compression

2007-10-02 Thread Mike Dougherty
On 10/2/07, Matt Mahoney [EMAIL PROTECTED] wrote: It says a lot about the human visual perception system. This is an extremely lossy function. Video contains only a few bits per second of useful information. The demo is able to remove a large amount of uncompressed image data without

Re: [agi] A problem with computer science?

2007-09-28 Thread Mike Dougherty
On 9/28/07, Matt Mahoney [EMAIL PROTECTED] wrote: Not necessarily. In my work I measure intelligence to 9 significant digits. Ok sure, by what unit are you measuring? :)

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Mike Dougherty
On 6/11/07, James Ratcliff [EMAIL PROTECTED] wrote: Interesting points, but I believe you can get around a lot of the problems with two additional factors, a. using either large quantities of quality text (i.e. novels, newspapers) or similar texts like newspapers. b. using an interactive built-in

Re: [agi] The Advantages of a Conscious Mind

2007-05-06 Thread Mike Dougherty
On 5/6/07, Mark Waser [EMAIL PROTECTED] wrote: Yes, I'll match my understanding and knowledge of, and ideas on, the free will issue against anyone's. Arrogant much? I just introduced an entirely new dimension to the free will debate. You literally won't find it anywhere. Including Dennett.

Re: [agi] What would motivate you to put work into an AGI project?

2007-05-03 Thread Mike Dougherty
On 5/3/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote: On 5/3/07, Matt Mahoney [EMAIL PROTECTED] wrote: But how does Speagram resolve ambiguities like this one? ;-) Generally, Speagram would live with both interpretations until one of them fails or it gets a chance to ask the user. How would
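The "live with both interpretations" strategy can be sketched as carrying every candidate reading forward and pruning the ones that later context contradicts. The readings and the context test below are invented for illustration; this is not how Speagram is actually implemented.

    # Each candidate reading carries a test that later evidence can fail.
    candidates = [
        {"reading": "bank = river bank", "ok": lambda ctx: "water" in ctx},
        {"reading": "bank = money bank", "ok": lambda ctx: "money" in ctx},
    ]
    context = {"water", "fishing"}   # evidence that arrives later
    surviving = [c for c in candidates if c["ok"](context)]
    print([c["reading"] for c in surviving])  # -> ['bank = river bank']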

Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-30 Thread Mike Dougherty
On 4/29/07, Mike Tintner [EMAIL PROTECTED] wrote: There is something fascinating going on here - if you could suspend your desire for precision, you might see that you are at least half-consciously offering contributions as well as objections. (Tune in to your constructive side). suspended. I

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Dougherty
On 4/30/07, Mike Tintner [EMAIL PROTECTED] wrote: it is in the human brain. Every concept must be a tree, which can continually be added to and fundamentally altered. Every symbolic concept must be grounded in a set of graphics and images, which are provisional and can continually be redrawn.

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-04-30 Thread Mike Dougherty
On 4/30/07, Mike Tintner [EMAIL PROTECTED] wrote: The linguistic sign bears NO RELATION WHATSOEVER to the signified. true The only signs that bear relation to, and to some extent reflect, reality and real things are graphics [maps/cartoons/geometry/ icons etc] and images [photos, statues,

Re: [agi] Uh oh ... someone has beaten Novamente to the end goal!

2007-04-29 Thread Mike Dougherty
On 4/29/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Holding a new computer at your home such as myself will take very little space( less than 2 square meters) and this will never waste your time (you can use your new computer whenever you want) and you will be of course able to continue your

Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Dougherty
On 4/29/07, Mike Tintner [EMAIL PROTECTED] wrote: He has a simple task: Move from A to B or D. But the normal answer Walk it is for whatever reason no good, blocked. Disambiguate- 1. Move from starting point A to either B or D 2. Move from either A to B or take another option D I feel we

Re: [agi] HOW ADAPTIVE ARE YOU [NOVAMENTE] BEN?

2007-04-29 Thread Mike Dougherty
On 4/29/07, Richard Loosemore [EMAIL PROTECTED] wrote: The idea that human beings should constrain themselves to a simplified, artificial kind of speech in order to make life easier for an AI, is one of those Big Excuses that AI developers have made, over the years, to cover up the fact that

Re: [agi] rule-based NL system

2007-04-28 Thread Mike Dougherty
On 4/28/07, Mike Tintner [EMAIL PROTECTED] wrote: And what if I say to you: sorry but the elephant did sit on the chair - how would you know that I could be right? I could assign a probability of truthfulness to this statement that is dependent on how many other assertions you have made and
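A naive version of this trust assignment is Laplace's rule of succession: if a speaker's past assertions checked out k times out of n, estimate the next one as true with probability (k+1)/(n+2). The counts below are made up for illustration.

    def p_truthful(confirmed, total):
        # Laplace's rule of succession: (k+1)/(n+2)
        return (confirmed + 1) / (total + 2)

    print(p_truthful(0, 0))    # no history: 0.5, maximum uncertainty
    print(p_truthful(9, 10))   # mostly reliable speaker: ~0.83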

Re: [agi] AGI and Web 2.0

2007-03-29 Thread Mike Dougherty
On 3/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: How does the new phenomenon of web-based collaboration change the way we build an AGI? I feel that something is amiss in a business model if we don't make use of some form of Web 2.0 . A problem is that we cannot take universal ballots

Re: [agi] general weak ai

2007-03-09 Thread Mike Dougherty
On 3/9/07, Pei Wang [EMAIL PROTECTED] wrote: This understanding assumes a you who does the pointing, which is a central controller not assumed in the Society of Mind. To see intelligence as a toolbox, we would have to assume that somehow the saw, hammer, etc. can figure out what they should do

Re: [agi] general weak ai

2007-03-08 Thread Mike Dougherty
On 3/6/07, Ben Goertzel [EMAIL PROTECTED] wrote: Well what is intelligence if not a collection of tools? One of the hardest Thinking of a mind as a toolkit is misleading. A mind must contain a collection of tools that synergize together so as to give rise to the appropriate high-level

Re: **SPAM** Re: [agi] Development Environments for AI (a few non-religious comments!)

2007-02-21 Thread Mike Dougherty
On 2/21/07, Eugen Leitl [EMAIL PROTECTED] wrote: The language thread has been reasonably abstruse already, but proposing doing AI by stored procedures in relational databases backed by ~10 ms access time devices... Hey, why not tapes? I think you could implement a reasonably competent Turing
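Taking the tape joke half-seriously: a Turing machine is nothing more than a tape plus a transition table, so a "reasonably competent" one does fit in a few lines. A toy sketch whose only program flips bits until it reads a blank:

    def run(tape, rules, state="start", pos=0, blank="_"):
        # The tape is a dict from position to symbol; rules map
        # (state, symbol) -> (symbol to write, head move, next state).
        tape = dict(enumerate(tape))
        while state != "halt":
            sym = tape.get(pos, blank)
            write, move, state = rules[(state, sym)]
            tape[pos] = write
            pos += move
        return "".join(tape[i] for i in sorted(tape))

    rules = {("start", "0"): ("1", 1, "start"),
             ("start", "1"): ("0", 1, "start"),
             ("start", "_"): ("_", 0, "halt")}
    print(run("0110", rules))   # -> 1001_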

Re: [agi] Re: Languages for AGI

2007-02-18 Thread Mike Dougherty
On 2/18/07, Mark Waser [EMAIL PROTECTED] wrote: personal toolbox). The programmers who are ending up out of work are the ones who keep re-inventing the wheel over and over again. Thinking about the amount of redundant (wasted) effort involved with starting from scratch on an AI project, I

Re: [agi] Priors and indefinite probabilities

2007-02-12 Thread Mike Dougherty
On 2/11/07, Ben Goertzel [EMAIL PROTECTED] wrote: We don't use Bayes Nets in Novamente because Novamente's knowledge network is loopy. And the peculiarities that allow standard Bayes net belief propagation to work in standard loopy Bayes nets, don't hold up I know what you mean by the term

Re: [agi] Probabilistic consistency

2007-02-07 Thread Mike Dougherty
On 2/7/07, Kevin Peterson [EMAIL PROTECTED] wrote: My program crashes, prints something about 8192. My program crashes, prints something about 10001. My program crashes, prints something about 3721. I'd wonder if you've seen the movie Pi and perhaps taken it too seriously :)

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Mike Dougherty
On 1/19/07, Joel Pitt [EMAIL PROTECTED] wrote: It's been a while since I looked at Lojban or your Lojban++, so was wondering if English sentences translate well into Lojban without the sentence ordering changing? I.e. given two English sentences, are there any situations where in Lojban the

Re: [agi] SOTA

2007-01-06 Thread Mike Dougherty
On 1/6/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: Needless to say, I don't consider cleaning up the house a particularly interesting goal for AGI projects. I can well imagine it being done by a narrow AI system with no capability to do anything besides manipulate simple objects, navigate,

Re: [agi] Sophisticated models of spiking neural networks.

2006-12-26 Thread Mike Dougherty
in general, how do we indicate the odd one out of that set? Sure it's obvious that the color is important in this case - but I see two circles and that the square is more similar to the circle(s) because of the higher number of sides. Therefore the triangle is the odd one. What rules does an

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mike Dougherty
On 12/5/06, BillK [EMAIL PROTECTED] wrote: Your reasoning is getting surreal. You seem to have a real difficulty in admitting that humans behave irrationally for a lot (most?) of the time. Don't you read newspapers? You can redefine rationality if you like to say that all the crazy people are

Re: [agi] RSI - What is it and how fast?

2006-12-04 Thread Mike Dougherty
On 12/4/06, Brian Atkins [EMAIL PROTECTED] wrote: Can you cause your brain to temporarily shut down your visual cortex and other associated visual parts, reallocate them to expanding your working memory by four times its current size in order to help you juggle consciously the bits you need to

Re: Re: [agi] Understanding Natural Language

2006-11-27 Thread Mike Dougherty
On 11/27/06, YKY (Yan King Yin) [EMAIL PROTECTED] wrote: The problem is that this thing, on, is not definable in n-space via operations like AND, OR, NOT, etc. It seems that on is not definable by *any* hypersurface, so it cannot be learned by classifiers like feedforward neural networks or

Re: [agi] Understanding Natural Language

2006-11-26 Thread Mike Dougherty
On 11/26/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote: But I really think that the metric properties of the spaces continue to help even at the very highest levels of abstraction. I'm willing to spend some time giving it a shot, anyway. So we'll see! I was thinking about the N-space

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Mike Dougherty
On 11/22/06, Ben Goertzel [EMAIL PROTECTED] wrote: Well, in the language I normally use to discuss AI planning, this would mean that 1) keeping charged is a supergoal 2) the system knows (via hard-coding or learning) that finding the recharging socket == keeping charged If charged becomes

Re: [agi] A question on the symbol-system hypothesis

2006-11-17 Thread Mike Dougherty
I'm not sure I follow every twist in this thread. No... I'm sure I don't follow every twist in this thread. I have a question about this compression concept. Compute the number of pixels required to graph the Mandelbrot set at whatever detail you feel to be sufficient for the sake of
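The Mandelbrot point can be made concrete: the pixel grid can be as large as you like, but the program that regenerates it stays a few lines long, so the program is the compressed form of the image. A minimal sketch (resolution and iteration limit chosen arbitrarily):

    def mandelbrot(width=60, height=24, max_iter=30):
        rows = []
        for j in range(height):
            row = ""
            for i in range(width):
                c = complex(-2.2 + 3.0 * i / width, -1.2 + 2.4 * j / height)
                z = 0
                for _ in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2:      # escaped: not in the set
                        row += " "
                        break
                else:
                    row += "#"          # stayed bounded: in the set
            rows.append(row)
        return "\n".join(rows)

    print(mandelbrot())   # arbitrarily many pixels from a tiny generator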

Re: [agi] Information Learning Systems

2006-10-27 Thread Mike Dougherty
On 10/27/06, James Ratcliff [EMAIL PROTECTED] wrote: I am working on another piece now that will scan through news articles and pull small bits of information out of them, such as: Iran's nuclear program is only aimed at generating power. The process of uranium enrichment can be used to generate

Re: [agi] [META] Is there anything we can do to keep junk out of the AGI Forum?

2006-07-26 Thread Mike Dougherty
On 7/26/06, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote: The bane of mailing lists is well-intentioned but stupid people... not only mailing lists; I'd say they're a bane everywhere.

Re: [agi] procedural vs declarative knowledge

2006-06-02 Thread Mike Dougherty
On 6/2/06, Charles D Hixson [EMAIL PROTECTED] wrote: Rule of thumb: First get it working, doing what you want. Then optimize. When optimizing, first check your algorithms, then check to see where time is actually spent. Apply extensive optimization only to the most used 10% (or less) of the code. If you
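Hixson's "see where time is actually spent" step is exactly what a profiler is for. A minimal sketch using Python's standard cProfile; the function being measured is a placeholder.

    import cProfile
    import pstats

    def slow_sum(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    # Measure where the time actually goes before optimizing anything.
    cProfile.run("slow_sum(10**6)", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)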

Re: [agi] procedural vs declarative knowledge

2006-05-30 Thread Mike Dougherty
After reading Ben's response I had to ask- what possible value would there be in NOT pre-compiling reusable procedures? Advocating a strict adherence to a single type of general purpose container when there is a clear advantage to specialization sounds like idealistic dogma. When my existence is

Re: [agi] Google wants AI for search... The first step..

2006-05-23 Thread Mike Dougherty
They have a long road ahead. I recently sent an email via Gmail that contained the word computronium. The Google spellchecker (while slickly executed) was unable to identify this word. I googled it, and the first link was a Wikipedia reference. So if Google Spellcheck can't 'just Google it' when