Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bryan Bishop
On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote: Note my prior posting explaining my inability even to find a source of used mice for kids to use in high-school anti-aging experiments, all while university labs are now killing their vast numbers of such mice. So long as things remain

Re: [agi] an advance in brain/computer interfaces

2008-11-21 Thread Bryan Bishop
On 11/21/08, Ed Porter wrote: For those of you who don't read Kurzweil's mailing list, here is a link to an article that describes progress being made in a type of brain/computer interface that may in the future have the potential of providing a high bandwidth communication

Re: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread Bryan Bishop
On Fri, Oct 17, 2008 at 4:10 AM, Bob Mottram wrote: Open source robotics may eventually occur, but I think it will require some common and relatively affordable platforms. It becomes much easier to usefully share code when you're dealing with the same hardware (or at least

Re: [agi] universal logical form for natural language

2008-09-30 Thread Bryan Bishop
On Tuesday 30 September 2008, YKY (Yan King Yin) wrote: Yeah, and I'm designing a voting system of virtual credits for working collaboratively on the project... Write a plugin for cvs, svn, git, or some other version-control system. - Bryan http://heybryan.org/ Engineers:
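A minimal sketch of such a plugin for git (hypothetical scheme: votes ride in a "Credit:" commit-message trailer that a script tallies; the trailer name and weighting are assumptions, not an existing convention):

    # tally_credits.py - toy virtual-credit tally from git history (sketch).
    # Assumes commits may carry trailers like "Credit: alice 3"; this
    # trailer convention is hypothetical, not a git or project standard.
    import subprocess
    from collections import defaultdict

    def tally_credits(repo="."):
        # NUL-separate commit bodies so multi-line messages stay intact.
        log = subprocess.run(
            ["git", "-C", repo, "log", "--format=%B%x00"],
            capture_output=True, text=True, check=True).stdout
        credits = defaultdict(int)
        for body in log.split("\x00"):
            for line in body.splitlines():
                if line.startswith("Credit:"):
                    try:
                        _, name, points = line.split()
                        credits[name] += int(points)
                    except ValueError:
                        pass  # malformed trailer; skip it
        return dict(credits)

    if __name__ == "__main__":
        for name, points in sorted(tally_credits().items()):
            print(name, points)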

Re: [agi] NLP? Nope. NLU? Yep!

2008-09-20 Thread Bryan Bishop
On Saturday 20 September 2008, Trent Waddington wrote: Hehe, indeed. Although I'm sure Powerset has some nice little relationship links between words, I'm a little skeptical about the claim to meaning. I don't mean that in a philosophical 'not grounded' sense... I'm of the belief that you

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Bryan Bishop
On Friday 19 September 2008, BillK wrote: Last I heard Peter Norvig was saying that Google had no interest in putting a natural language front-end on Google. http://slashdot.org/article.pl?sid=07/12/18/1530209 Arguably that's still natural language, even if it's just tags instead of

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Bryan Bishop
On Friday 19 September 2008, Mike Tintner wrote: Your unconscious keeps talking to you. It is precisely paper that mainly shapes your thinking about AI. Paper has been the defining medium of literate civilisation. And what characterises all literate forms is nice, discrete, static, fragmented,

Re: [agi] self organization

2008-09-18 Thread Bryan Bishop
On Wednesday 17 September 2008, Terren Suydam wrote: I think a similar case could be made for a lot of large open source projects such as Linux itself. However, in this case and others, the software itself is the result of a high-level super goal defined by one or more humans. Even if no

Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Bryan Bishop
On Thursday 18 September 2008, Mike Tintner wrote: In principle, I'm all for the idea that I think you (and perhaps Bryan) have expressed of a GI Assistant - some program that could be of general assistance to humans dealing with similar problems across many domains. A diagnostics expert,

Re: [agi] self organization

2008-09-17 Thread Bryan Bishop
On Wednesday 17 September 2008, Terren Suydam wrote: OK, how's that different from the collaboration inherent in any human project? Can you just explain your viewpoint? When you have something like 20,000+ contributors writing software that can very, very easily break, I think it's an

Re: [agi] self organization

2008-09-16 Thread Bryan Bishop
On Monday 15 September 2008, Terren Suydam wrote: I send this along because it's a great example of how systems that self-organize can result in structures and dynamics that are more complex and efficient than anything we can purposefully design. The applicability to the realm of designed

Re: [agi] self organization

2008-09-16 Thread Bryan Bishop
On Monday 15 September 2008, Terren Suydam wrote: By your argumentation, it would seem you won't find any argument about intelligence of worth unless it explains everything. I've never understood the strong resistance of many in the AI community to the concepts involved with complexity theory,

Re: [agi] self organization

2008-09-16 Thread Bryan Bishop
On Tuesday 16 September 2008, Terren Suydam wrote: Not really familiar with apt-get.  How is it a complex system?  It looks like it's just a software installation tool. How many people are writing the software? - Bryan http://heybryan.org/ Engineers:

Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Dimitry Volfson wrote: Well, then I don't understand what you're looking for. Brain chemistry is part of the model. Check out one of the sentences: "The thalamus in the limbic system ('leopard brain') converts the physical need into an urge within the cortex." So

Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Dimitry Volfson wrote: Actually, I remember reading something about scientists finding a list structure in the brain of a bird singing a song (a moving pointer to the next item in a list sort of thing). But whatever. That does sound interesting, yes, I'd like to

Re: [agi] A model for RSI

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Pei Wang wrote: There is no guaranteed improvement in an open system. On this note, somebody suggested yesterday that I reread Wolfram's NKS, around p. 340. It was around this section of his book that he mentions his lack of optimism that iteration can bring about 'improvement'

Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-13 Thread Bryan Bishop
On Saturday 13 September 2008, Dimitry Volfson wrote: Look at "The Brain's Urge System" at ChangingMinds.org (http://changingminds.org/explanations/brain/urge_system.htm). Notice that the stimulus can be pure thought. Meaning that a mental image of a goal-state can form the basis of

Re: [agi] Ability to improve ones own efficiency as a measure of intelligence

2008-09-12 Thread Bryan Bishop
On Wednesday 10 September 2008, Rene de Visser wrote: Any comments? Yes. Please look into computational complexity and Big O notation. http://en.wikipedia.org/wiki/Computational_complexity Computational complexity theory, as a branch of the theory of computation in computer science,
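For a concrete sense of what the notation buys you, a toy comparison (illustrative only, not tied to any AGI design): linear search does on the order of n comparisons, binary search on the order of log n.

    # Toy illustration of O(n) vs O(log n): count comparisons per search.
    def linear_search(xs, target):
        steps = 0
        for x in xs:
            steps += 1
            if x == target:
                break
        return steps

    def binary_search(xs, target):  # xs must be sorted
        steps, lo, hi = 0, 0, len(xs) - 1
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if xs[mid] == target:
                break
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    xs = list(range(1_000_000))
    print(linear_search(xs, 999_999))  # ~1,000,000 comparisons
    print(binary_search(xs, 999_999))  # ~20 comparisons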

Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Bryan Bishop
On Friday 12 September 2008, Mike Tintner wrote: to understand a piece of information and its information objects (e.g. words), is to realise (or know) how they refer to real objects in the real world, (and, ideally, and often necessarily, to be able to point to and engage with those real

Re: [agi] Non-evolutionary models of recursive self-improvement (was: Ability to improve ones own efficiency as a measure of intelligence)

2008-09-12 Thread Bryan Bishop
On Wednesday 10 September 2008, Matt Mahoney wrote: I have asked this list as well as the singularity and SL4 lists whether there are any non-evolutionary models (mathematical, software, physical, or biological) for recursive self improvement (RSI), i.e. where the parent and not the

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: You start v. constructively thinking how to test the non-programmed nature of - or simply record - the actual writing of programs, and then IMO fail to keep going. You could trace their keyboard presses back to the cerebellum and motor

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, William Pearson wrote: 2008/9/5 Mike Tintner: By contrast, all deterministic/programmed machines and computers are guaranteed to complete any task they begin. If only such could be guaranteed! We would never have system hangs, deadlocks. Even
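A small illustration of why completion can't be guaranteed in general: whether the loop below halts for every n is the Collatz conjecture, still an open problem, so nobody can promise this fully deterministic, programmed procedure finishes on all inputs.

    # Deterministic and fully programmed - yet nobody can prove it halts
    # for all n. Termination for every n is the (open) Collatz conjecture.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps; but for all n > 0? Open problem.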

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote: Were your computer like a human mind, it would have been able to say (as you/we all do) - well, if that part of the problem is going to be difficult, I'll ignore it... or I'll just make up an answer... or by God I'll keep trying other ways until

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote: fundamental programming problem, right?) A creative free machine, like a human, really can follow any of what may be a vast range of routes - and you really can't predict what it will do or, at a basic level, be surprised by it. What do you say

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, William Pearson wrote: I'm very interested in computers that self-maintain, that is reduce (or eliminate) the need for a human to be in the loop or know much about the internal workings of the computer. However it doesn't need a vastly different computing

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, Mike Tintner wrote: Our unreliability is the negative flip-side of our positive ability to stop an activity at any point, incl. the beginning, and completely change tack/course or whole approach, incl. the task itself, and even completely contradict ourselves. But

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Terren Suydam wrote: So, Mike, is free will: 1) an illusion based on some kind of unpredictable, complex but *deterministic* interaction of physical components 2) the result of probabilistic physics - a *non-deterministic* interaction described by something like

Re: [agi] Recursive self-change: some definitions

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: Bryan, How do you know the brain has a code? Why can't it be entirely impression-istic - a system for literally forming, storing and associating sensory impressions (including abstracted, simplified, hierarchical impressions of other

Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: Yes you do. Every time you make a decision, you are assigning a higher probability of a good outcome to your choice than to the alternative. You'll have to prove to me that I make decisions, whatever that means. - Bryan

Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Sunday 07 September 2008, Matt Mahoney wrote: Depends on what you mean by I. You started it - your first message had that dependency on identity. :-) - Bryan http://heybryan.org/ Engineers: http://heybryan.org/exp.html irc.freenode.net #hplusroadmap

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote: And as a matter of scientific, historical fact, computers are first and foremost keyboards - i.e. devices for CREATING programs on keyboards - and only then following them. [Remember how AI gets almost everything about intelligence back to

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Terren Suydam wrote: Thus is creativity possible while preserving determinism. Of course, you still need to have an explanation for how creativity emerges in either case, but in contrast to what you said before, some AI folks have indeed worked on this issue.

Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote: I think this is a good important point. I've been groping confusedly here. It seems to me computation necessarily involves the idea of using a code (?). But the nervous system seems to me something capable of functioning without a code -

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: And what I am asserting is a paradigm of a creative machine, which starts as, and is, NON-algorithmic and UNstructured in all its activities, albeit that it acquires and creates a multitude of algorithms, or routines/structures, for *parts*

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote: And how to produce creativity is the central problem of AGI - completely unsolved.  So maybe a new approach/paradigm is worth at least considering rather than more of the same? I'm not aware of a single idea from any AGI-er past or present

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote: Do you honestly think that you write programs in a programmed way? That it's not an *art*, pace Matt, full of hesitation, halts, meandering, twists and turns, dead ends, detours, etc.? If you have to have some sort of program to start with, how

Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Valentina Poletti wrote: When we want to step further and create an AGI I think we want to externalize the very ability to create technology - we want the environment to start adapting to us by itself, spontaneously by gaining our goals. There is a sense of

Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: A closed model is unrealistic, but an open model is even more unrealistic because you lack a means of assigning likelihoods to statements like the sun will rise tomorrow or the world will end tomorrow. You absolutely must have a means of
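One classical way to assign exactly such likelihoods is Laplace's rule of succession: after s successes in n trials, estimate P(next success) = (s+1)/(n+2). A sketch (the sunrise count is an arbitrary illustration):

    # Laplace's rule of succession: after s successes in n trials,
    # P(next success) = (s + 1) / (n + 2). Sunrise count is illustrative.
    def rule_of_succession(successes, trials):
        return (successes + 1) / (trials + 2)

    n = 365 * 10_000  # say, ~10,000 years of observed sunrises (assumption)
    print(rule_of_succession(n, n))      # ~0.99999973: sun rises tomorrow
    print(1 - rule_of_succession(n, n))  # small but nonzero: it doesn't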

Re: [agi] draft for comment

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote: Another aspect of embodiment (as the term is commonly used), is the false appearance of intelligence. We associate intelligence with humans, given that there are no other examples. So giving an AI a face or a robotic body modeled after a human

Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Abram Demski wrote: My intention here is that there is a basic level of well-defined, crisp models which probabilities act upon; so in actuality the system will never be using a single model, open or closed... I think Mike's model is one more of approach,

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Bryan Bishop
On Friday 27 June 2008, Richard Loosemore wrote: Pardon my fury, but the problem is understanding HOW TO DO IT, and HOW TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So long as some people on this list repeat this mistake, this list will degenerate even further into

Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Bryan Bishop
On Thu, May 8, 2008 at 3:21 PM, Mike Tintner wrote: It oughtn't to be all neuro- though. There is a need for some kind of corporate science - one that studies whole-body simulation and not just the cerebral end. After all, a lot of the simulations being talked about are v.

Re: [agi] Accidental Genius

2008-05-08 Thread Bryan Bishop
On Thu, May 8, 2008 at 10:02 AM, Richard Loosemore wrote: Anyhow it is very interesting. Perhaps savantism is an attention-mechanism disorder? Like, too much attention. Yes. Autism is a devastating neurodevelopmental disorder with a polygenetic predisposition that seems to

Re: [agi] BMI/BCI Growing Fast

2007-12-23 Thread Bryan Bishop
On Saturday 22 December 2007, Philip Goetz wrote: If we define mindreading as knowing whether someone is telling the truth, whether someone likes you, or is sexually attracted to you, or recognizes you; knowing whether someone is paying attention; knowing whether someone is reasoning logically

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Bryan Bishop
On Friday 14 December 2007, Benjamin Goertzel wrote: But, we're still quite clueless about how to, say, hook the brain up to a calculator or to Google in a useful way... due to having a vastly insufficiently detailed knowledge of how the brain carries out cognitive operations... While overall

Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Bryan Bishop
On Friday 14 December 2007, Mike Dougherty wrote: Are there any efforts at using Nootropic drugs in a 'brain enhancement race' ?  I haven't heard about it, but then I wouldn't because the program would be kept secret. There might be one behind the scenes. *cough* - Bryan - This list is

Re: [agi] CyberLover passing Turing Test

2007-12-12 Thread Bryan Bishop
On Wednesday 12 December 2007, Dennis Gorelik wrote: To my taste, testing with clueless judges is a more appropriate approach; it makes the test less biased. How can they judge when they don't know what they are judging? Surely, when they hang out for some cyberlovin', they are not scanning for

Re: [agi] Worst case scenario

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Matt Mahoney wrote: --- Bryan Bishop wrote: Re: how much computing power is needed for AI. My worst-case scenario accounts for nearly any finite computing power, via the production of semiconducting silicon wafer tech. A human brain sized

Re: [agi] CyberLover passing Turing Test

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Dennis Gorelik wrote: If CyberLover works as described, it will qualify as one of the first computer programs ever written that is actually passing the Turing Test. I thought the Turing Test involved fooling/convincing judges, not clueless men hoping to get some

Re: [agi] Worst case scenario

2007-12-10 Thread Bryan Bishop
On Monday 10 December 2007, Matt Mahoney wrote: The worst case scenario is that AI wipes out all life on earth, and then itself, although I believe at least the AI is likely to survive. http://lifeboat.com/ex/ai.shield Re: how much computing power is needed for AI. My worst-case scenario

Re: [agi] AGI and Deity

2007-12-09 Thread Bryan Bishop
On Sunday 09 December 2007, Mark Waser wrote: Pascal's wager starts with the false assumption that belief in a deity has no cost. Formally, yes. However, I think it's easy to imagine a Pascal's wager where we replace deity with anything Truly Objective, such as whatever it is that we hope the

[agi] Worst case scenario

2007-12-07 Thread Bryan Bishop
Here's the worst-case scenario I see for AI: that there has to be so much hardware complexity that, in general, nobody is going to be able to get the initial push. Indeed, there's Moore's law to take account of, but the economics might just prevent us from accumulating enough nodes, enough
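Rough numbers behind that worry, using commonly cited order-of-magnitude estimates (all of these figures are assumptions, not measurements):

    # Back-of-envelope for brain-scale compute, using commonly cited
    # order-of-magnitude estimates (assumptions, not measurements).
    neurons = 1e11       # ~10^11 neurons in a human brain
    synapses_per = 1e4   # ~10^4 synapses per neuron
    firing_rate = 1e2    # ~100 Hz peak firing, generously
    ops_per_sec = neurons * synapses_per * firing_rate  # ~10^17 events/s

    node_flops = 1e10    # ~10 GFLOPS per commodity node (2007-era guess)
    print(ops_per_sec / node_flops)  # ~10^7 nodes, at one FLOP per event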

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Bryan Bishop
On Friday 07 December 2007, Mike Tintner wrote: P.S. You also don't answer my question re: how many neurons in total *can* be activated within a half second, or given period, to work on a given problem - given their relative slowness of communication? Is it indeed possible for hundreds of

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Bryan Bishop
On Sunday 02 December 2007, John G. Rose wrote: Building up parse trees and word sense models, let's say that would be a first step. And then say after a while this was accomplished and running on some peers. What would the next theoretical step be? I am not sure what the next step would be.
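A minimal sketch of that first step - a toy parse tree built by a hand-rolled recursive-descent pass (the grammar and structures are illustrative, not any particular project's design):

    # Toy parse tree for "the dog sees the cat" under a tiny illustrative
    # grammar: S -> NP VP, NP -> Det N, VP -> V NP. Not a real NLP system.
    def parse_np(tokens):
        det, noun = tokens.pop(0), tokens.pop(0)
        return ("NP", ("Det", det), ("N", noun))

    def parse_sentence(text):
        tokens = text.split()
        np = parse_np(tokens)
        verb = tokens.pop(0)
        return ("S", np, ("VP", ("V", verb), parse_np(tokens)))

    print(parse_sentence("the dog sees the cat"))
    # -> ('S', ('NP', ('Det','the'), ('N','dog')),
    #          ('VP', ('V','sees'), ('NP', ('Det','the'), ('N','cat'))))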

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Bryan Bishop
On Thursday 29 November 2007, Ed Porter wrote: Somebody (I think it was David Hart) told me there is a shareware distributed web crawler already available, but I don't know the details, such as how good or fast it is. http://grub.org/ Previous owner went by the name of 'kordless'. I found him

Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Bryan Bishop
On Monday 03 December 2007, Mike Dougherty wrote: I believe the next step of such a system is to become an abstraction between the user and the network they're using.  So if you can hook into your P2P network via a firefox extension, (consider StumbleUpon or Greasemonkey) so it (the agent) can

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-16 Thread Bryan Bishop
On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote: non-brain-based AGI.  After all it's not like we know how real chemistry gives rise to real biology yet --- the dynamics underlying protein-folding remain ill-understood, etc. etc. Can anybody elaborate on the actual problems

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 02:30, Bob Mottram wrote: I think the main problem here is the low complexity of the environment Complex programs can only be written in an environment capable of bearing that complexity: http://sl4.org/archive/0710/16880.html - Bryan - This list is

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote: On Nov 15, 2007 8:57 PM, Bryan Bishop wrote: Can anybody elaborate on the actual problems remaining (beyond etc. etc. -- which is appropriate from Ben who is most notably not a biochemist/chemist/bioinformatician

Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 21:19, Benjamin Goertzel wrote: so we still don't know exactly how poor a model the formal neuron used in computer science is. Speaking of which: isn't this the age-old simple math function involving an integral or two and a summation over the inputs? I
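Essentially, yes: the formal (McCulloch-Pitts/perceptron-style) neuron is a weighted sum of inputs passed through an activation function, y = f(sum_i w_i x_i + b). A minimal sketch, with a sigmoid assumed for f and arbitrary illustrative values:

    # The formal neuron: y = f(sum_i w_i * x_i + b), here with sigmoid f.
    # Weights, inputs, and bias below are arbitrary illustrative values.
    import math

    def formal_neuron(inputs, weights, bias):
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

    print(formal_neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))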

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:55, Richard Loosemore wrote: I was really thinking of the data collection problem:  we cannot take one brain and get full information about all those things, down to a sufficient level of detail.  I do not see such a technology even over the horizon (short of

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote: The complaint is not 'your symbols are not connected to experience'. Everyone and their mother has an AI system that could be connected to real-world input. The simple act of connecting to the real world is NOT the core problem. Are

Re: [agi] Human uploading

2007-11-13 Thread Bryan Bishop
Ben, This is all very interesting work. I have heard of brain slicing before, as well as viral gene therapy to add a way for our neurons to debug themselves into the bloodstream, which is not yet technologically possible (or not here yet, rather), and the age-old concept of using MNT to signal

Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 09:11, Richard Loosemore wrote: This is the whole brain emulation approach, I guess (my previous comments were about evolution of brains rather than neural level duplication). Ah, you are right. But this too is an interesting topic. I think that the order of

Re: [agi] advice-level dev collaboration

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 17:12, Benjamin Johnston wrote: Why not try this list, and then move to the private discussion model (or start an [agi-developer] list) if there's a backlash? I'd certainly join. - Bryan - This list is sponsored by AGIRI: http://www.agiri.org/email To

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 15:56, Richard Loosemore wrote: You never know what new situation might arise that might be a problem, and you cannot market a driverless car on the understanding that IF it starts killing people under particular circumstances, THEN someone will follow that by adding

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 19:31, Richard Loosemore wrote: Yikes, no:  my strategy is to piggyback on all that work, not to try to duplicate it. Even the Genetic Algorithm people don't (I think) dream of evolution on that scale. Yudkowsky recently wrote an email on preservation of the

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 19:48, Richard Loosemore wrote: Even with everyone on the planet running evolutionary simulations, I do not believe we could reinvent an intelligent system by brute force. Of your message, this part is the most peculiar. Brute force is all that we have. - Bryan

Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 22:16, Richard Loosemore wrote: If anyone were to throw that quantity of resources at the AGI problem (recruiting all of the planet), heck, I could get it done in about 3 years. ;-) I have done some research on this topic in the last hour and have found that a

Re: [agi] Re: What best evidence for fast AI?

2007-11-11 Thread Bryan Bishop
Excellent post, and I hope that I may find enough time to give it a more thorough reading. Is it possible that at the moment our working with 'intelligence' is just like flapping in an attempt to fly? It seems like the concept of intelligence is a good way to preserve the nonabsurdity

Re: [agi] Upper Ontologies

2007-11-10 Thread Bryan Bishop
On Friday 09 November 2007 23:27, Benjamin Goertzel wrote: I would bet that merging two KB's obtained by mining natural language would work a lot better than merging two KB's like Cyc and SUMO that were artificially created by humans. Upon reading Waser's response I reread that segment to say

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 09:29, Derek Zahn wrote: On such a chart I think we're supposed to be at something like mouse level right now -- and in fact we have seen supercomputers beginning to take a shot at simulating mouse-brain-like structures. Ref? - Bryan - This list is sponsored

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 10:07, Kaj Sotala wrote: http://news.bbc.co.uk/2/hi/technology/6600965.stm The researchers say that although the simulation shared some similarities with a mouse's mental make-up in terms of nerves and connections it lacked the structures seen in real mice brains.

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 11:31, Derek Zahn wrote: Unfortunately, not enough is yet known about specific connectivity so the best that can be done is play with structures of similar scale in anticipation of further advances. What signs will tell us that we do know enough about the

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 12:52, Edward W. Porter wrote: In fact, if the ITRS roadmap projections continue to be met through What is the ITRS roadmap? Do you have a link? - Bryan - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please

Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 12:52, Edward W. Porter wrote: There is a small, but increasing number of people who pretty much understand how to build artificial brains I would be interested in learning who these people are and meeting them. Artificial brains are tough things to build. A sac of

Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 13:40, Charles D Hixson wrote: OTOH, to make a go of this would require several people willing to dedicate a lot of time consistently over a long duration. A good start might be a few bibliographies. http://liinwww.ira.uka.de/bibliography/ - Bryan - This list

Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 14:10, Charles D Hixson wrote: Bryan Bishop wrote: On Saturday 10 November 2007 13:40, Charles D Hixson wrote: OTOH, to make a go of this would require several people willing to dedicate a lot of time consistently over a long duration. A good start might

Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 14:17, I wrote: Bibliography + paper archive, then. http://arxiv.org/ (perhaps we need one for AGI) It has come to my attention that there is no open source software for (p)reprint archives. This is unacceptable - I was hoping to quickly download something from

Re: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread Bryan Bishop
On Friday 09 November 2007 20:01, John G. Rose wrote: I already have some basics of merging OOP and Group/Category Theory. Am working on some ideas on jamming, or I should say intertwining, automata in that. The complexity integration I'm still trying to figure out... trying to stay as far from

Re: [agi] can superintelligence-augmented humans compete

2007-11-04 Thread Bryan Bishop
On Saturday 03 November 2007 16:53, Edward W. Porter wrote: In my recent list below of ways to improve the power of human intelligence augmentation, I forgot to think about possible ways to actually increase the bandwidth of the top-level decision making of the brain, which I had listed as a

Re: [agi] can superintelligence-augmented humans compete

2007-11-04 Thread Bryan Bishop
On Sunday 04 November 2007 14:37, Edward W. Porter wrote: Re: augmenting/replacing the PFC. We can advance this field of knowledge by attempting to extend Dr. White's work on brain transplantation from monkeys to mice, in an attempt to keep brain regions of the mice on life