Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bob Mottram
On 10 August 2010 16:44, Ben Goertzel b...@goertzel.org wrote: I'm writing an article on the topic for H+ Magazine, which will appear in the next couple weeks ... I'll post a link to it when it appears I'm not advocating applying AI in the absence of new experiments of course.  I've been

Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bob Mottram
On 10 August 2010 18:43, Bob Mottram fuzz...@gmail.com wrote: here.  For example, if an epidemic breaks out, why should you vaccinate first? That should have been who rather than why :-) Just thinking a little further, in hand waving mode, If something like the common cold were added

Re: [agi] Bayesian surprise attracts human attention

2009-01-15 Thread Bob Mottram
2009/1/15 Ronald C. Blue ronb...@u2ai.us: Bayesian surprise attracts human attention http://tinyurl.com/77p9xo Sounds interesting. In my opinion any research carried out at universities using public money should be available to the public, without additional charges.

Re: [agi] just a thought

2009-01-14 Thread Bob Mottram
2009/1/14 Valentina Poletti jamwa...@gmail.com: Anyways my point is, the reason why we have achieved so much technology, so much knowledge in this time is precisely the we, it's the union of several individuals together with their ability to communicate with one another that has made us advance

Re: [agi] initial reaction to A2I2's call center product

2009-01-12 Thread Bob Mottram
2009/1/12 Ben Goertzel b...@goertzel.org: AGI company A2I2 has released a product for automating call center functionality We value your interest in our AGI related service. If you agree that AGI can have useful applications for call centres, press 1 If our AGI repeatedly misinterprets your

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Bob Mottram
2008/12/21 Ben Goertzel b...@goertzel.org: However, IMO the rhetoric associating it with thinking machine building is premature and borderline dishonest. It's marketing rhetoric. It's more like interesting brain simulation research that could eventually play a role in some future

Re: [agi] Seeking CYC critiques PS

2008-12-12 Thread Bob Mottram
2008/12/11 Mike Tintner tint...@blueyonder.co.uk: If you try and reduce those maps to any other form, e.g. some mathematical or program form, you *lose the object.* It's equivalent to taking a jigsaw puzzle to pieces - all you have are the pieces, and you've lost the picture - the whole.

Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Bob Mottram
2008/12/11 Mike Tintner [EMAIL PROTECTED]: *Ben Goertzel is a continuously changing reality. At 10.05 pm he will be different from 10.00pm, and so on. He is in fact many individuals. Based on some of the stuff which I've been doing with SLAM algorithms I'd agree with this sort of

Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Bob Mottram
2008/12/11 Mike Tintner tint...@blueyonder.co.uk: But an image/movie can only be compared with a verbal statement in terms of what it *actually shows* - the *surface, visible action.* His actual, observable dialogue and gestures and expressions - that and only that is what a movie records

Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Bob Mottram
2008/12/11 Mike Tintner tint...@blueyonder.co.uk: There is no problem though seeing the entities and movements in a movie - Ben, say, raising his hand, or shaking Steve's hand, or laughing or making some other facial expression. Sure, we can argue and/or be confused about the significance and

Re: [agi] Religious attitudes to NBIC technologies

2008-12-08 Thread Bob Mottram
2008/12/8 Richard Loosemore [EMAIL PROTECTED]: Another indication that we need to take the public relations issue very seriously indeed: as time passes, this problem of the public attitude (and especially the religious attitude) to NBIC technologies will only become more extreme: People who

Re: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-26 Thread Bob Mottram
2008/11/26 Ed Porter [EMAIL PROTECTED]: As we learn just how common exoplanets are, the possibility that aliens have visited earth seems increasingly scientifically believable I'm not sure that alien visitation logically follows from the discovery of exoplanets. There have, in fact, been

Re: [agi] If aliens are monitoring us, our development of AGI might concern them

2008-11-26 Thread Bob Mottram
2008/11/26 Ed Porter [EMAIL PROTECTED]: I have never experienced a UFO, but several people I have known and generally trusted, and who are not drug users or wackos, have claimed to have seen them directly. belief (Y, foo) belief (X, credibility (Y) minimum credibility (X)) || dominance

Re: [agi] MSRobot vs E3

2008-11-21 Thread Bob Mottram
2008/11/21 Mike Tintner [EMAIL PROTECTED]: http://www.marketwatch.com/news/story/Battle-lines-forming-nascent-robotics/story.aspx?guid={FA2B30F1-B78B-4E33-91A4-F7F3D07DECCB} The biggest growth area for robotics in the next few years I think is going to be telerobots, allowing mobile

Re: [agi] MSRobot vs E3

2008-11-21 Thread Bob Mottram
2008/11/21 Charles Hixson [EMAIL PROTECTED]: The thing is, MS systems tend to be extremely inflexible. I.e., they are flexible within their predefined fixed limitations, and outside of that you need to constantly fight the system to get anywhere. To me this doesn't sound like a good

Re: [agi] Professor Asim Roy Finally Publishes Controversial Brain Theory

2008-11-20 Thread Bob Mottram
2008/11/20 Vladimir Nesov [EMAIL PROTECTED]: Here's a link to the paper: http://wpcarey.asu.edu/pubs/index.cfm?fct=detailsarticle_cobid=2216410author_cobid=1039524journal_cobid=2216411 This doesn't sound especially controversial to me. Clearly there are systems in the brain which control

Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Bob Mottram
2008/11/18 Steve Richfield [EMAIL PROTECTED]: I am considering putting up a web site to filter the crazies as follows, and would appreciate all comments, suggestions, etc. This all sounds peachy in principle, but I expect it would exclude virtually everyone except perhaps a few of the most

Re: [agi] The New World Order

2008-11-17 Thread Bob Mottram
2008/11/17 Mike Tintner [EMAIL PROTECTED]: Comment on Marketwatch forum today: Lots of talk about the New World Order (MWO)... what really bothers me about the NWO is that there are bound to be lots of robots involved. I hate robots. The way I look at it, once we have robots with

Re: [agi] General musings on AI, humans and empathy...

2008-11-09 Thread Bob Mottram
2008/11/8 Ben Goertzel [EMAIL PROTECTED]: http://multiverseaccordingtoben.blogspot.com/2008/11/in-search-of-machines-of-loving-grace.html On the Ishiguro robot and uncanny valley I think the simulation which we're creating of other people is closely based upon the sort of multi-modal

Re: [agi] Whole Brain Emulation (WBE) - A Roadmap

2008-11-05 Thread Bob Mottram
2008/11/5 Richard Loosemore [EMAIL PROTECTED]: At the end of the day, if you end up with some problems in the code because you transcribed it wrong, how would you even begin to debug it? Brains and digital computers are very different kinds of machinery. If I were to copy the circuits of a

Re: [agi] Fwd: Job offering Astro-naughty!

2008-11-03 Thread Bob Mottram
2008/11/1 Joel Pitt [EMAIL PROTECTED]: My commitment is with OpenCog at the moment - but this looks like a really cool project/job that may suit some of you on this list :) It sounds like cool stuff for sure, but in the last few years I've had experience of American tech companies

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Bob Mottram
http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov

Re: [agi] Cloud Intelligence

2008-10-29 Thread Bob Mottram
2008/10/29 Samantha Atkins [EMAIL PROTECTED]: John G. Rose wrote: Has anyone done some analysis on cloud computing, in particular the recent trend and coming out of clouds with multiple startup efforts in this space? And their relationship to AGI type applications? Beware of putting too

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Bob Mottram
2008/10/21 Matt Mahoney [EMAIL PROTECTED]: More generally, people learn algebra and higher mathematics by induction, by generalizing from lots of examples. 5 * 7 = 35 -> 35 / 7 = 5; 4 * 6 = 24 -> 24 / 6 = 4; etc... a * b = c -> c / b = a Not only this though. If I remember correctly from
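To make the induction idea concrete, here is a minimal Python sketch (an illustration only, not code from the thread) that "discovers" the inverse of multiplication by testing candidate rules against worked examples; the example triples and candidate rule names are made up for the demo.

  # Illustrative only: induce the inverse of multiplication from worked examples.
  examples = [(5, 7, 35), (4, 6, 24), (3, 9, 27), (8, 2, 16)]  # triples with a * b = c

  # candidate inverse rules, each a guess at how to recover 'a' from (b, c)
  candidates = {
      "a = c / b": lambda b, c: c / b,
      "a = b / c": lambda b, c: b / c,
      "a = c - b": lambda b, c: c - b,
  }

  # keep only the rules consistent with every worked example
  induced = [name for name, rule in candidates.items()
             if all(rule(b, c) == a for a, b, c in examples)]
  print(induced)  # ['a = c / b']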

Re: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Bob Mottram
2008/10/21 Dr. Matthias Heger [EMAIL PROTECTED]: There is another point which indicates that the ability to understand language or to learn language does not imply *general* intelligence. You can often observe in school that linguistic talents are poor in mathematics and vice versa. The

Re: [agi] Will Wright's Five Artificial Intelligence Prophecies

2008-10-18 Thread Bob Mottram
2008/10/18 Eric Burton [EMAIL PROTECTED]: http://www.popularmechanics.com/technology/industry/4287680.html?series=60 Some thoughts on this: http://streebgreebling.blogspot.com/2008/10/will-wright-on-ai.html

Re: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread Bob Mottram
2008/10/17 Ben Goertzel [EMAIL PROTECTED]: Including a brief article by me about open-source robotics, that I wrote back in April... Open source robotics may eventually occur, but I think it will require some common and relatively affordable platforms. It becomes much easier to usefully

Re: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread Bob Mottram
2008/10/17 Bryan Bishop [EMAIL PROTECTED]: Bob, it's already happening behind your back, and I'm not talking about iCub. While platform standardization is important, there's other things that you can do like write cross-platform compatible applications and compilers, or working on rounding up

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Bob Mottram
2008/10/15 Ben Goertzel [EMAIL PROTECTED]: What are your thoughts on this? I, for one, would welcome more Type 1s and fewer Type 2s. I realize, having observed AI related forums and lists for longer than I care to admit, that Type 2s constitute the principal mass of the gossip distribution.

Re: [agi] open or closed source for AGI project?

2008-10-10 Thread Bob Mottram
2008/10/10 YKY (Yan King Yin) [EMAIL PROTECTED]: On Tue, Oct 7, 2008 at 11:33 PM, Russell Wallace [EMAIL PROTECTED] wrote: I was trying to find a way so we can collaborate on one project, but people don't seem to like the virtual credit idea. No, no we don't :-) Why not? As has been

Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Bob Mottram
2008/10/2 Brad Paulsen [EMAIL PROTECTED]: It boasts a 50% recognition accuracy rate +/-5 years and an 80% recognition accuracy rate +/-10 years. Unless, of course, the subject is wearing a big floppy hat, makeup or has had Botox treatment recently. Or found his dad's Ronald Reagan mask.

Re: [agi] Free AI Courses at Stanford

2008-09-20 Thread Bob Mottram
2008/9/20 Valentina Poletti [EMAIL PROTECTED]: The lectures are pretty good in quality, compared with other major university on-line lectures (such as MIT and so forth) I followed a couple of them and definitely recommend. You learn almost as much as in a real course. The introduction to

Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source

2008-09-18 Thread Bob Mottram
2008/9/18 Trent Waddington [EMAIL PROTECTED]: And this is the problem. Although some people have the goal of making an artificial person with all the richness and nuance of a sentient creature with thoughts and feelings and yada yada yada.. some of us are just interested in making more

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Bob Mottram
As the article says, this has long been suspected but until now hadn't been demonstrated. Edelman was describing the same phenomenon as the remembered present well over a decade ago, and his idea seems to have been loosely inspired by ideas from Freud and James. Remembering seems to be an act of

Re: [agi] Remembering Caught in the Act

2008-09-05 Thread Bob Mottram
2008/9/5 Mike Tintner [EMAIL PROTECTED]: Past studies have shown how many neurons are involved in a single, simple memory. Researchers might be able to isolate a few single neurons in the process of summoning a memory, but that is like saying that they have isolated a few water molecules in

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Bob Mottram
2008/8/27 Mike Tintner [EMAIL PROTECTED]: You on your side insist that you don't have to have such precisely defined goals - your intuitive (and by definition, ill-defined) sense of intelligence will do. As a child I don't believe that I set out with the goal of becoming a software developer.

Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Bob Mottram
2008/8/28 Mike Tintner [EMAIL PROTECTED]: (I still think of course that current AGI should have a not-so-ill structured definition of its problem-solving goals). It's certainly true that an AGI could be endowed with well defined goals. Some people also begin from an early age with well

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Bob Mottram
2008/8/24 Mike Tintner [EMAIL PROTECTED]: Just a v. rough, first thought. An essential requirement of an AGI is surely that it must be able to play - so how would you design a play machine - a machine that can play around as a child does? Play may be about characterising the state space.

Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-15 Thread Bob Mottram
2008/8/15 Ed Porter [EMAIL PROTECTED]: The training issue is a real one, but presumably over time electronics that would be part of these wetware/hardware combination brains could be developed to train the wetware/hardware machines --- under the control guidance of external systems at the

Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-14 Thread Bob Mottram
2008/8/14 Ed Porter [EMAIL PROTECTED]: A 'Frankenrobot' with a biological brain I doubt that there will be much practical application of biological neuron powered robots, since the overhead of keeping the biology alive would be too troublesome (requiring feeding and removal of waste products),

Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Bob Mottram
2008/8/14 Mike Tintner [EMAIL PROTECTED]: What it comes down to is: what can you learn about any object[s] from flat drawings of them? Cardboard cutouts? This is essentially the same problem as in computer vision. The objects that you're looking at are three dimensional, but a camera image is
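For what it's worth, the loss of depth being referred to here can be shown with a toy pinhole-camera projection; the sketch below is my own illustration, and the focal length and principal point are made-up values rather than parameters from any real camera.

  # Illustrative pinhole projection: a 3D point collapses to a 2D pixel, so depth is lost.
  def project(point_3d, focal_length=500.0, cx=320.0, cy=240.0):
      x, y, z = point_3d          # camera-frame coordinates, z pointing forward
      u = focal_length * x / z + cx
      v = focal_length * y / z + cy
      return u, v

  # two different 3D points land on exactly the same pixel
  print(project((0.1, 0.2, 1.0)))  # (370.0, 340.0)
  print(project((0.2, 0.4, 2.0)))  # (370.0, 340.0)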

Re: [agi] Meet the world's first robot controlled exclusively by living brain tissue

2008-08-14 Thread Bob Mottram
2008/8/14 Ciro Aisa [EMAIL PROTECTED]: On Thu, Aug 14, 2008 at 10:25:57AM +0100, Bob Mottram wrote: I doubt that there will be much practical application of biological neuron powered robots, since the overhead of keeping the biology alive would be too troublesome (requiring feeding and removal

Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Bob Mottram
2008/8/14 Mike Tintner [EMAIL PROTECTED]: But - correct me - when you engineer the 3D shape, you are merely applying previous,existing knowledge about other objects to do so - which is a useful but narrow AI function. You are not actually discovering anything new about this particular object?

Re: [agi] Human experience

2008-08-08 Thread Bob Mottram
2008/8/8 Linas Vepstas [EMAIL PROTECTED]: I'm starting to wonder, is embodied experience *really* that important? As a roboticist I can say that a physical body resembling that of a human isn't really all that important. You can build the most sophisticated humanoid possible, but the problems

Re: [agi] The Necessity of Embodiment

2008-08-08 Thread Bob Mottram
2008/8/8 Mike Tintner [EMAIL PROTECTED]: Now my v. garbled understanding ( please comment) is that those Carnegie Mellon starfish robots show that such an integrated whole self is both possible - and perhaps vital - for robots too. Yes I agree with the idea of understanding others through

Re: [agi] For an indication of the complexity of primate brain hardware

2008-08-07 Thread Bob Mottram
2008/8/6 Ed Porter [EMAIL PROTECTED]: For an indication of the complexity of primate brain hardware check out the article at http://www.technologyreview.com/Biotech/21175/page1/ and better yet the associated images at http://www.technologyreview.com/player/08/08/06Singer/1.aspx (the latter

Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Bob Mottram
2008/8/3 Ben Goertzel [EMAIL PROTECTED]: Anyone else have an opinion on this? Also, can we limit the use of capitalization (aka shouting). There may be rare circumstances under which this is necessary, but most of the time it seems to be used gratuitously. - Bob

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Bob Mottram
2008/7/21 John LaMuth [EMAIL PROTECTED]: Announcing the recently issued U.S. patent concerning ethical artificial intelligence titled: Inductive Inference Affective Language Analyzer Simulating AI. This just shows what a farce the US patent system has become.

Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-21 Thread Bob Mottram
2008/7/21 Matt Mahoney [EMAIL PROTECTED]: This is a real patent, unfortunately... http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1Sect2=HITOFFp=1u=%2Fnetahtml%2FPTO%2Fsearch-bool.htmlr=1f=Gl=50d=PALLRefSrch=yesQuery=PN%2F6587846 But I think it will expire before anyone has the technology

Re: [agi] Interesting article about EU's open source AGI robot program

2008-07-11 Thread Bob Mottram
2008/7/11 Ed Porter [EMAIL PROTECTED]: Interesting article about EU's open source AGI robot program at http://www.eetimes.com/showArticle.jhtml?articleID=208808365 The fact that iCub is open source is to be welcomed. In the past

Re: [agi] Can We Start Somewhere was Approximations of Knowledge

2008-06-26 Thread Bob Mottram
2008/6/26 Steve Richfield [EMAIL PROTECTED]: Perhaps we can completely sidestep the countless contentious issues regarding what intelligence is, what an AGI is, what consciousness is, what is needed, etc., with an entirely different approach: It's the usual pattern for participants on AI

Re: [agi] Equivalent ..P.S.I just realised - how can you really understand what I'm talking about - without supplementary images/evidence?

2008-06-24 Thread Bob Mottram
2008/6/24 Mike Tintner [EMAIL PROTECTED]: So here's simple evidence - look at the following foto - and note that you can distinguish each individual in it immediately. And you can only do it imagistically. No maths, no language, no algebraic variables, no programming languages can tell you

Re: [agi] Equivalent ..P.S.I just realised - how can you really understand what I'm talking about - without supplementary images/evidence?

2008-06-24 Thread Bob Mottram
2008/6/24 Mike Tintner [EMAIL PROTECTED]: What makes every body in this world different is that it is to some extent, IRREGULAR - CRAZY. Look at those faces again - what stamps them as different is their irregularities, the slightly off jawlines, imbalanced eyebrows, twisted smile and the

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-23 Thread Bob Mottram
2008/6/22 William Pearson [EMAIL PROTECTED]: 2008/6/22 Vladimir Nesov [EMAIL PROTECTED]: Well since intelligence explosions haven't happened previously in our light cone, it can't be a simple physical pattern Probably the last intelligence explosion - a relatively rapid increase in the degree

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Bob Mottram
2008/6/8 Ben Goertzel [EMAIL PROTECTED]: Those of us w/ experience in the field have heard the objections you and Tintner are making hundreds or thousands of times before. We have already processed the arguments you're making and found them wanting. I entirely agree with this response. To

Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread Bob Mottram
2008/6/4 J Storrs Hall, PhD [EMAIL PROTECTED]: What is the rock thinking? T h i s i s w a a a y o f f t o p i c . . . Rocks are obviously superintelligences. By behaving like inert matter and letting us build monuments and gravel pathways out of them they're just lulling us into a

Re: [agi] news bit: Is this a unified theory of the brain? Do Bayesian statistics rule the brain?

2008-06-02 Thread Bob Mottram
This week's New Scientist has a fascinating article on a possible 'grand theory' of the brain that suggests that virtually all brain functions can be modelled with [insert fashionable technique here]. But seriously, I use Bayes' rule on an industrial scale in robotics software. There is always a
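As a rough illustration of the kind of Bayes-rule update that gets used routinely in robotics (a sketch of my own, not the poster's actual software, with made-up sensor probabilities), here is a single occupancy-grid cell fused from repeated hit/miss readings using log odds:

  # Illustrative only: Bayesian log-odds update of one occupancy-grid cell.
  import math

  def logit(p):
      return math.log(p / (1.0 - p))

  P_HIT = 0.7    # assumed P(occupied | sensor reports a hit)
  P_MISS = 0.4   # assumed P(occupied | sensor reports a miss)

  log_odds = 0.0                            # prior of 0.5 in log-odds form
  for hit in [True, True, False, True]:     # made-up observation sequence
      log_odds += logit(P_HIT if hit else P_MISS)

  p_occupied = 1.0 / (1.0 + math.exp(-log_odds))
  print(round(p_occupied, 3))               # about 0.894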

Re: [agi] Uncertainty

2008-06-02 Thread Bob Mottram
2008/6/2 Ben Goertzel [EMAIL PROTECTED]: I think the PLN / indefinite probabilities approach is a complete and coherent solution to the problem. It is complex, true, but these are not simple issues... I was wondering whether indefinite probabilities could be used to represent a particle

[agi] Memory as a movie

2008-05-31 Thread Bob Mottram
An interesting case of a woman who never forgets. She describes her memories as a continuously running movie, which she can't turn off. http://www.onpointradio.org/shows/2008/05/20080520_b_main.asp Perhaps we all have this kind of memory, but most of the time we only have limited or no

Re: Merging threads was Re: Code generation was Re: [agi] More Info Please

2008-05-28 Thread Bob Mottram
2008/5/28 Mike Tintner [EMAIL PROTECTED]: No one's yet actually trying to develop movie AI/AGI - an intelligence that can live in and/or respond to a continuous movie[s] of the world, are they? Ben's system, from the v. little I saw, gestures at this, but falls short. I'm doing stuff with

Re: Merging threads was Re: Code generation was Re: [agi] More Info Please

2008-05-28 Thread Bob Mottram
2008/5/28 Mike Tintner [EMAIL PROTECTED]: Sounds interesting. Can you give us a little more detail (or link). What kind of robot, where? Doing what? Watching what movie? And how does it dream - optimise/correct actions? Link: http://code.google.com/p/sentience/ A picture of the robot:

Re: [agi] More Info Please

2008-05-26 Thread Bob Mottram
2008/5/26 J. Andrew Rogers [EMAIL PROTECTED]: Europe specifically excludes .NET as a development target for similar pragmatic reasons. And developing .NET is going to suck on a non-Windows workstation, eliminating one of the major advantages you tout. To be honest, I do not know of anyone that

Re: [agi] More Info Please

2008-05-25 Thread Bob Mottram
2008/5/25 Nathan Cravens [EMAIL PROTECTED]: yet AGI has potentially dramatic concrete consequences in one direction or another. Money will only be made from this in the short run, and if not, for those with a capacity to muster life, misery will prevail, unless you are the last one or ones

Re: [agi] Porting MindForth AI into JavaScript Mind.html

2008-05-17 Thread Bob Mottram
2008/5/17 Jey Kottalam [EMAIL PROTECTED]: Some of the recent discussion on this list is making me wonder whether a very similar AGI List FAQ is needed warning about the unscientific, wacky and crank-ish discussion that takes place here. I think Yudkowsky once said that AI remains at present

[agi] Graph mining

2008-05-07 Thread Bob Mottram
This might be of interest. http://blogs.zdnet.com/emergingtech/?p=911 The ability to discover patterns, especially from partial information, would seem to be a central concern of AGI.

Re: [agi] organising parallel processes

2008-05-06 Thread Bob Mottram
The blog entry is amusing. I started writing software at quite a young age (about 10), and I always assumed that it was an art rather like writing a novel or a musical composition. So when I grew older and became employed to write programs I was shocked in my early career to find that some people

Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-05 Thread Bob Mottram
2008/5/5 Richard Loosemore [EMAIL PROTECTED]: The goal of symbol grounding is not to guarantee uniqueness but to ensure that the connection between the symbols and the objects they are systematically interpretable as being about does not depend exclusively on an interpretation projected onto

Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-05 Thread Bob Mottram
2008/5/5 Richard Loosemore [EMAIL PROTECTED]: I was pointing out that the 'interpreter' (i.e. the programmer) could build mechanisms that are only meaningful if the symbols conform to their interpretation of what the symbols mean. But if the system itself then builds symbols and uses them

Re: [agi] AGI-08 videos

2008-05-05 Thread Bob Mottram
I was just watching Ben's AGI-08 presentation on neural nets (http://video.google.co.uk/videoplay?docid=8672459372566545966) and this does seem like an interesting and novel idea as far as I know. I hope Hugo de Garis was taking notes because that could be something which he might be able to

Re: [agi] AGI-08 videos

2008-05-04 Thread Bob Mottram
2008/5/4 Derek Zahn [EMAIL PROTECTED]: * Limiting people to 10-12 minutes makes it basically impossible to present the contents of a paper, so the talks turn into project overviews. Actually I found that to be a GOOD thing, and hope it continues that way (as long as we don't get the same

Re: [agi] AGI-08 videos

2008-05-04 Thread Bob Mottram
2008/5/4 Derek Zahn [EMAIL PROTECTED]: I have a suggestion for such a task: figuring out how to operate the buttons-and-light system that determines whose turn it is to talk during panel discussions. It may be too ambitious though, as clearly it requires superhuman intelligence (har har

Re: [agi] upcoming oral at Princeton

2008-05-02 Thread Bob Mottram
My guess would be that this kind of approach will only be partly successful, since fundamentally it's only based upon an elaborate kind of 2D template matching. I think what actually happens is that during early childhood experience we are able to statistically correlate certain types of geometry

Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Bob Mottram
2008/5/1 Ben Goertzel [EMAIL PROTECTED]: If you gathered data about how people move in a certain context, using motion capture, then you could use their GA/NN stuff to induce a program that would generate data similar to the motion-captured data. A system which could do this generally

Re: [agi] Deliberative vs Spatial intelligence

2008-04-30 Thread Bob Mottram
2008/4/30 J. Andrew Rogers [EMAIL PROTECTED]: One of the amusing and fruitless patterns of behavior in the AI community is the incessant categorization of various processes into nominally distinct buckets in the absence of a theoretically justifiable reason for doing so. The above is such an

Re: [agi] An interesting project on embodied AGI

2008-04-29 Thread Bob Mottram
2008/4/29 Ed Porter [EMAIL PROTECTED]: But I agree the project is really quite ambitious in that it is trying to create an embodied robot with a real AGI for a brain. It may well make major contributions to AGI. It sounds like a promising start, but it should also be noted that there have

Re: [agi] An interesting project on embodied AGI

2008-04-28 Thread Bob Mottram
2008/4/28 J Storrs Hall, PhD [EMAIL PROTECTED]: I drool over the physical robot -- it's built like a brick outhouse. It has 53 degrees of freedom, binocular vision, touch, audition, and inertial sensors, harmonic drives, top-grade aircraft aluminum members, the works. That doofy face

Re: [agi] An interesting project on embodied AGI

2008-04-28 Thread Bob Mottram
Incidentally this is also an open source robot. http://eris.liralab.it/wiki/RobotCubSoftware Mechanically sophisticated humanoids have a long history. What's interesting about these is not how much money is spent or how many axes are actuated but the sophistication of the software and

Re: [agi] Perception required for AGI? Problem solved

2008-04-25 Thread Bob Mottram
Personally I would like to see AIs which can operate in the real world outside of a factory environment, and so for me there is really no way to dodge the perception issue by confining my systems to a limited universe of discourse. However, I concede that animal-like or human-like intelligence

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Bob Mottram
2008/4/24 Mike Tintner [EMAIL PROTECTED]: Just to illustrate further, here's the opening lines of today's Times sports report on a football match.[Liverpool v Chelsea] How on earth could this be understood without massive imaginative simulation? [Stephen?] And without mainly imaginative

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Bob Mottram
A paper which may be of interest to the pure linguists, or anyone looking for information about cross modal referencing. http://www.sv.uit.no/seksjon/psyk/pdf/laeng/LaengTeodorescu.pdf It might be possible in theory to construct an intelligence comprised only of linguistic concepts, but such

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Bob Mottram
2008/4/24 Mike Tintner [EMAIL PROTECTED]: The recurrent, but underlying question in many related discussions here is whether you, (Bob and Linas), think a visual scene - let's say some people dancing, (real life or in a picture) - can be understood *geometrically/mathematically,* (by the

Re: [agi] Why Symbolic Representation P.S.

2008-04-24 Thread Bob Mottram
2008/4/24 Mike Tintner [EMAIL PROTECTED]: Thanks for reply, but you haven't quite answered. Geometry, in my definition, is a systematic set of regular forms, which can be, and are, used to deconstruct visual forms in various kinds of images. This implies that the image contains sufficient

Re: Open source (was Re: [agi] The Strange Loop of AGI Funding: now logically proved!)

2008-04-21 Thread Bob Mottram
On 21/04/2008, Steve Richfield [EMAIL PROTECTED] wrote: Of course, this constitutes a reductio ad absurdum situation establishing that the underlying assumption, that someone is going to build AGI, is very probably wrong. Whoever comes up with a working AGI may be the last person you expect

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread Bob Mottram
On 21/04/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: Problem is, in brains, there are actually more nerve fibers transmitting data from higher numbers to lower, i.e. backwards, than forwards. I think that the interpretation of sensory input is a much more active process than we AGIers

Re: Open source (was Re: [agi] The Strange Loop of AGI Funding: now logically proved!)

2008-04-20 Thread Bob Mottram
Until a true AGI is developed I think it will remain necessary to pay programmers to write programs, at least some of the time. You can't always rely upon voluntary effort, especially when the problem you want to solve is fairly obscure. On 19/04/2008, Ben Goertzel [EMAIL PROTECTED] wrote:

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Bob Mottram
Another problem is how to judge the impressiveness of a demo, especially if you're a non expert. It's relatively easy to come up with superficially impressive demos, which then turn out upon closer investigation to be fraught with problems or just not scalable. This seems to happen all the time

Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Bob Mottram
Good advice. There are of course sometimes people who are ahead of the field, but in conversation you'll usually find that the genuine innovators have a deep - bordering on obsessive - knowledge of the field that they're working in and are willing to demonstrate/test their claims to anyone even

Re: [agi] He Wrote 200,000 Books (but Computers Did Some of the Work)

2008-04-14 Thread Bob Mottram
This reminds me of Rod Brooks saying that AGI may already be here but nobody has noticed it yet. With an AGI running a nice little business for you there may be no great incentive to advertise the fact openly to the world. If done well with a suitably flexible AI this kind of automatic content

Re: [agi] Big Dog

2008-04-11 Thread Bob Mottram
On 11/04/2008, Brad Paulsen [EMAIL PROTECTED] wrote: What's really impressive is how natural the leg movements are. I was flashing to images of young horses navigating rough terrain. These only appear natural because what you're seeing is an example of convergent evolution. The robot has to

Re: [agi] Comments from a lurker...

2008-04-10 Thread Bob Mottram
Claims of having created an impressive AI - sans any credible evidence - are a dime a dozen. I've lost track of how many times I've read similar claims being made over the last decade or so, which often lead to a brief flap of excitement. However, I have a feeling that one of these days someone

Re: [agi] Nine Misunderstandings About AI

2008-04-08 Thread Bob Mottram
This made me chuckle: So after the first safe AI is built, the situation will stabilize completely and any further change will always occur in a controlled way that is consistent with the original design.

Re: [agi] Symbols

2008-04-01 Thread Bob Mottram
On 31/03/2008, Mark Waser [EMAIL PROTECTED] wrote: Did you get the fact that once you generalize your idea enough, we're all in complete agreement -- but that *a lot* of your specific facts are just plain wrong (to wit -- the phrase *vision isn't just saccade-ing. The retina does also

Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Bob Mottram
On 27/03/2008, Mike Tintner [EMAIL PROTECTED] wrote: 3. While philosophically, intellectually, most people dealing with this area may expect words to have precise meanings, they know practically and intuitively that this is impossible and work on the basis that words can have different

Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Bob Mottram
On 25/03/2008, Mark Waser [EMAIL PROTECTED] wrote: You're thinking too small. The AGI will distribute itself. And money is likely to be: - rapidly deflated, - then replaced with a new, alternate currency that truly values talent and effort (rather than just playing with the

Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Bob Mottram
On 25/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote: You can call future currency whatever you like. Yes, it is likely to change form - but certainly not purpose. And Marxism, where maybe AGI or the real deal will deflate currency, is an unlikely aftermath of the advent of AGI. I think the

Re: [agi] Microsoft Launches Singularity

2008-03-24 Thread Bob Mottram
A more likely scenario is that someone else creates an AGI and then Microsoft copies it some time later. But seriously, if someone does manage to produce a working AGI it's probably game over for software engineering and software companies as we know them today. On 24/03/2008, Aki Iskandar

Re: [agi] if yu cn rd tihs, u slhud tke a look

2008-03-13 Thread Bob Mottram
Interesting. I assume that OCR programmers already know about this. On 13/03/2008, Linas Vepstas [EMAIL PROTECTED] wrote: A bit of vision processing fun: http://www.friends.hosted.pl/redrim/Reading_Test.jpg --linas

Re: [agi] reasoning knowledge

2008-03-13 Thread Bob Mottram
On 13/03/2008, Linas Vepstas [EMAIL PROTECTED] wrote: object itself. How, say, do you get from a human face to the distorted portraits of Modigliani, Picasso, Francis Bacon, Scarfe, or any cartoonist? By logical or mathematical formulae? Actually, yes. Computer vision processing

Re: [agi] if yu cn rd tihs, u slhud tke a look

2008-03-13 Thread Bob Mottram
One thing worth noticing is that it looks like this effect only works provided that words with three letters or fewer are not garbled. I think what this shows is that there is a statistical element to reading. So provided that the beginning and ending characters are correct, and what's in
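The effect is easy to reproduce: the short Python sketch below (an illustration, not anything from the thread) shuffles only the interior letters of each word, leaving words of three letters or fewer untouched, exactly the constraint described above.

  # Illustrative only: garble interior letters, keep first/last letters and short words intact.
  import random

  def garble(text):
      out = []
      for word in text.split():
          if len(word) <= 3:
              out.append(word)
          else:
              inner = list(word[1:-1])
              random.shuffle(inner)
              out.append(word[0] + "".join(inner) + word[-1])
      return " ".join(out)

  print(garble("reading scrambled words is surprisingly easy"))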

Re: [agi] NewScientist piece on AGI-08

2008-03-12 Thread Bob Mottram
I agree with one of the comments: You show a video of 3 virtual characters and some dialog. There is no way to prove what is happening is due to anything other than predefined actions and a script. It's always possible to contrive situations which appear intelligent to a naive observer, but
