Re: [agi] A thought.

2003-02-05 Thread Brad Wyble
of creating an AI to study brain function is that the result might be even more inscrutable than our brain. -Brad Wyble

Re: [agi] A thought.

2003-02-06 Thread Brad Wyble
3) Any successful AGI system is also going to have components in two other categories: a) specialized-intelligence components that solve particular problems in ways having little or nothing to do with truly general intelligence capability b) specialized-intelligence components that are

Re: [agi] A thought.

2003-02-07 Thread Brad Wyble
Philip, I can understand how the brain structure we see in intelligent animals would emerge from a process of biological evolution where no conscious design is involved (i.e. specialised non-conscious functions emerge first, generalised processes emerge later), but why should AGI design

Re: [agi] AGI morality

2003-02-10 Thread Brad Wyble
There might even be a benefit to trying to develop an ethical system for the earliest possible AGIs - and that is that it forces everyone to strip the concept of an ethical system down to its absolute basics so that it can be made part of a not very intelligent system. That will

Re: [agi] Consciousness

2003-02-11 Thread Brad Wyble
A good, if somewhat lightweight, article on the nature of mind and whether silicon can eventually manifest consciousness. http://www.theage.com.au/articles/2003/02/09/1044725672185.html Kevin I don't know if consciousness debates are verboten here or not, but I will say that I

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. We are all kept in check by our lack of power, the competition of our fellow humans, the laws of society, and the instructions of our peers. Remove

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I am exceedingly glad that I do not share your opinion on this. Human altruism *is* possible, and indeed I observe myself possessing a significant measure of it. Anyone doubting their ability to 'resist corruption' should not IMO be working in AGI, but should be doing some serious

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I can't imagine the military would be interested in AGI, by its very definition. The military would want specialized AI's, constructed around a specific purpose and under their strict control. An AGI goes against everything the military wants from its weapons and agents. They train soldiers

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Brad Wyble
There are simple external conditions that provoke protective tendencies in humans following chains of logic that seem entirely natural to us. Our intuition that reproducing these simple external conditions serves to provoke protective tendencies in AIs is knowably wrong, failing an

Re: [agi] Breaking AIXI-tl

2003-02-15 Thread Brad Wyble
I guess that for AIXI to learn this sort of thing, it would have to be rewarded for understanding AIXI in general, for proving theorems about AIXI, etc. Once it had learned this, it might be able to apply this knowledge in the one-shot PD context But I am not sure. For those of us

Re: [agi] doubling time revisited.

2003-02-17 Thread Brad Wyble
Processing speed is a necessary but far from sufficient criterion of AGI design. The software engineering aspect is going to be the bigger limitation by far. It is common to speak of the brain as X neurons and Y synapses but the truth of it is that there are layers of complexity beneath the

Re: [agi] doubling time revisited.

2003-02-17 Thread Brad Wyble
Hmmm. I think the critical problem is neither processing speed, NOR software engineering per se -- it's having a mind design that's correct in all the details. Or is that what you meant by software engineering? To me, software engineering is about HOW you build it, not about WHAT you

Re: [agi] doubling time revisited.

2003-02-17 Thread Brad Wyble
It is obvious that no one on this list agrees with me. This does not mean that I am obviously wrong. The division is very simple. My position: the doubling time has been reducing and will continue to do so. Their position: the doubling time is constant. It is incredibly unlikely
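The two positions differ only in whether the doubling time itself shrinks. A minimal sketch (illustrative numbers only, not data from this thread) of how much that difference matters over a fixed horizon:

```python
# Hypothetical illustration: constant vs. shrinking doubling time.
# All numbers are placeholders, not measurements.

def project(years, initial_power=1.0, doubling_time=1.5, shrink=0.0):
    """Project relative compute power over `years`, doubling every
    `doubling_time` years; `shrink` cuts the doubling time by that
    fraction after each doubling (shrink=0.0 is the constant case)."""
    t, power, dt = 0.0, initial_power, doubling_time
    while t + dt <= years:
        t += dt
        power *= 2.0
        dt *= (1.0 - shrink)
    return power

if __name__ == "__main__":
    horizon = 15
    print("constant doubling time:", project(horizon))
    print("shrinking doubling time:", project(horizon, shrink=0.05))
```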

Re: [agi] doubling time revisited.

2003-02-17 Thread Brad Wyble
I know this topic is already beaten to death in previous discussions, but I'll throw out one more point after reading that we may already have the equivalent power of some 3000 minds in raw CPU available worldwide. The aggregate neural mass of the world's population of insects and animals are

Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
I would like to contribute new SPEC CINT 2000 results as they are posted to the SPEC benchmark list by semiconductor manufacturers. I expect to post perhaps 10 times per year with this news. This is the source data for my Human Equivalent Computing spreadsheet and regression line. I'm
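The spreadsheet-and-regression-line idea amounts to fitting a line to log benchmark score versus date. A hedged sketch, with made-up placeholder numbers standing in for the SPEC CINT2000 results and for the "human equivalent" target:

```python
# Sketch of the regression described above: fit a line to
# log2(benchmark score) vs. date and read off the implied doubling time.
# The (year, score) pairs and the target are placeholders, not real data.
import numpy as np

years  = np.array([1998.0, 1999.5, 2001.0, 2002.5])
scores = np.array([15.0,   30.0,   70.0,   140.0])   # hypothetical scores

slope, intercept = np.polyfit(years, np.log2(scores), 1)
print(f"implied doubling time: {1.0 / slope:.2f} years")

# Extrapolate to a hypothetical "human equivalent" score (placeholder value).
human_equivalent = 1e6
crossover_year = (np.log2(human_equivalent) - intercept) / slope
print(f"projected crossover year: {crossover_year:.1f}")
```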

Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
Brad writes, Might it not be a more accurate measure to chart mobo+CPU combo prices? Maybe. If you wanted to research and post this data I'm sure it would be helpful to have. Check out www.pricewatch.com. They have a search engine which ranks products by vendors. Using this,

Re: [agi] doubling time watcher.

2003-02-18 Thread Brad Wyble
I used the assumptions of Hans Moravec to arrive at Human Equivalent Computer processing power: http://www.frc.ri.cmu.edu/~hpm/ Of course as we get closer to AGI the error delta becomes smaller. I am comfortable with the name for now and will adjust the metric as more info

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
The brain is actually fantastically simple... It is nothing compared with the core of a linux operating system (kernel+glibc+gcc). Heck, even the underlying PC hardware is more complex in a number of ways than the brain, it seems... The brain is very RISCy... using a relatively simple

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
Well, we invented our own specialized database system (in effect) but not our own network protocol. In each case, it's a tough decision whether to reuse or reimplement. The right choice always comes down to the nasty little details... The biggest AI waste of time has probably been

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
[META: please turn line-wrap on, for each of these responses my own standards for outgoing mail necessitate that I go through each line and ensure all quotations are properly formatted...] I think we're suffering from emacs issues, I'm using elm. Iff the brain is not unique in its

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
Not exactly. It isn't that I think we should give up on AGI, but rather that we should be consciously planning for it to take several decades to get there. We should still tackle the problems in front of us, instead of giving up on real AI work altogether. But we need to get past the idea

Re: AGI Complexity (WAS: RE: [agi] doubling time watcher.)

2003-02-18 Thread Brad Wyble
The nature of neuroscience research doesn't really differentiate between the two at present. In order to understand WHAT a brain part does, we have to understand HOW it, and all structures connected to it, function. We need to understand the inputs and the outputs, and that's all HOW.

[agi] Goal systems designed to explore novelty

2003-02-19 Thread Brad Wyble
The AIXI would just construct some nano-bots to modify the reward-button so that it's stuck in the down position, plus some defenses to prevent the reward mechanism from being further modified. It might need to trick humans initially into allowing it the ability to construct such

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Brad Wyble
Now, there is no easy way to predict what strategy it will settle on, but build a modest bunker and ask to be left alone surely isn't it. At the very least it needs to become the strongest military power in the world, and stay that way. It might very well decide that exterminating the human

Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-20 Thread Brad Wyble
But anyway, using the weighted-averaging rule dynamically and iteratively can lead to problems in some cases. Maybe the mechanism you suggest -- a nonlinear average of some sort -- would have better behavior, I'll think about it. The part of the idea that guaranteed an eventual equilibrium
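For intuition, here is a generic sketch of dynamic, iterative weighted averaging (not the specific rule under discussion): with fixed, row-normalized positive weights the estimates settle to a single equilibrium value.

```python
# Generic sketch of iterated weighted averaging. Each estimate is
# repeatedly replaced by a weighted average of all estimates; with a
# fixed, row-stochastic, strictly positive weight matrix the values
# converge to one equilibrium. Weights and estimates are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(5)                        # initial probability estimates
W = rng.random((5, 5)) + 0.1             # arbitrary positive weights
W = W / W.sum(axis=1, keepdims=True)     # normalize each row to sum to 1

for step in range(50):
    x = W @ x                            # one round of weighted averaging

print("equilibrium estimates:", np.round(x, 4))
```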

Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble
1) Humans use special-case algorithms to solve these problems, a different algorithm for each domain 2) Humans have a generalized mental tool for solving these problems, but this tool can only be invoked when complemented by some domain-specific knowledge My intuitive inclination is

Re: Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble
Hi Ben, Thanks for the brain teaser! As a sometimes believer in Occam's Razor, I think it makes sense to assume that Xi and Xj are independent, unless we know otherwise. This simplifies things, and is the rational thing to do (for some definition of rational ;-). So why not construct a
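The independence assumption being proposed is just the product rule for the joint; a tiny sketch with placeholder marginals:

```python
# Minimal sketch of the independence assumption discussed above:
# if nothing is known about the relation between Xi and Xj, take
# P(xi, xj) = P(xi) * P(xj), which implies P(xi | xj) = P(xi).
p_xi = 0.3   # placeholder marginal probabilities
p_xj = 0.6

p_joint_independent = p_xi * p_xj
p_xi_given_xj = p_joint_independent / p_xj

print(p_joint_independent)   # 0.18
print(p_xi_given_xj)         # 0.3 -- knowing xj tells us nothing about xi
```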

Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble
This is also an example of how weird the brain can be from an algorithmic perspective. In designing an AI system, one tends to abstract cognitive processes and create specific processes based on these abstractions. (And this is true in NN type AI architectures, not just logicist ones.) But

Re: Re: Re: [agi] A probabilistic/algorithmic puzzle...

2003-02-21 Thread Brad Wyble
Brad wrote: I think this is a core principle of AGI design and that a system that only makes inferences it *knows* are correct would be fairly uninteresting and incapable of performing in the real world. The fact that the information in the P(xi|xj) list is very incomplete is what

Re: [agi] Goal systems designed to explore novelty

2003-02-22 Thread Brad Wyble
Novelty is recognized when a new PredicateNode (representing an observed pattern) is created, and it's assessed that prior to the analysis of the particular data the PredicateNode was just recognized in, the system would have assigned that PredicateNode a much lower truth value. (That is:
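PredicateNode and truth value are Novamente-specific terms; as a generic, hedged illustration of the underlying idea (not Novamente code), novelty can be scored as how much the observed frequency of a pattern exceeds the probability the system would have assigned to it beforehand:

```python
# Generic novelty-as-surprise sketch: a pattern is "novel" to the extent
# that its observed frequency in the new data exceeds the prior estimate
# the system would have given it. Values below are placeholders.
import math

def novelty(prior_estimate: float, observed_frequency: float) -> float:
    """Surprise of the observation relative to the prior estimate, in bits."""
    return math.log2(observed_frequency / prior_estimate)

print(novelty(prior_estimate=0.01, observed_frequency=0.25))  # ~4.6 bits: novel
print(novelty(prior_estimate=0.20, observed_frequency=0.25))  # ~0.3 bits: familiar
```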

[agi] the Place system of the rodent hippocampus

2003-02-24 Thread Brad Wyble
I whipped this up this afternoon in case any of you are interested. I tried to gear it towards functionally relevant features. Enjoy Reference document: The Hippocampal navigational system by Brad Wyble A primer of neurophysiological correlates of spatial navigation in the rodent

Re: [agi] the Place system of the rodent hippocampus

2003-02-24 Thread Brad Wyble
On the face of it, these place maps are very reminiscent of attractors as found in formal attractor neural networks. When multiple noncorrelated maps are stored in the same collection of neurons, this sounds like multiple attractors being stored in the same formal neural net. Yeap, there's

Re: [agi] the Place system of the rodent hippocampus

2003-02-24 Thread Brad Wyble
Yeap, there are well-developed theories about how an autoassociative network like CA3 could support multiple, uncorrelated attractor maps and sustain activity once one of them was activated. The big debate is about how they are formed. The standard way attractors are formed in formal
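The "standard way" referred to is the Hebbian outer-product rule of formal attractor (Hopfield-style) networks. A textbook sketch, not a model of CA3, storing several uncorrelated binary patterns and recalling one from a degraded cue:

```python
# Hopfield-style autoassociative net: store uncorrelated patterns with a
# Hebbian outer-product rule, then recall one from a partially corrupted cue.
import numpy as np

rng = np.random.default_rng(1)
N, n_patterns = 200, 3
patterns = rng.choice([-1, 1], size=(n_patterns, N))

# Hebbian weight matrix (outer-product rule), no self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# Cue: pattern 0 with 20% of its units flipped.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1

# Synchronous updates until the state settles into an attractor.
state = cue.astype(float)
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1

print("fraction of units matching the stored map:", np.mean(state == patterns[0]))
```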

Re: [agi] more interesting stuff

2003-02-25 Thread Brad Wyble
Cliff wrote: It's not a firm conclusion, but I'm basing it on information / complexity theory. This relates, in certain ways, to ideas about entropy -- and energy is negentropy. I.e. without the sun's input we would be nothing. I'm not convinced of this idea on an intuitive basis, but

Re: [agi] more interesting stuff

2003-02-25 Thread Brad Wyble
Kevin said: I would say that complex information about anything can be conveyed in ways outside of your current thinking, but if you ask me to prove it, I cannot. There is evidence of it in things like the ERP experiment which show the existence of a possible substrate that we have not yet

Re: [agi] really cool

2003-02-25 Thread Brad Wyble
They are not mapping to IP addresses, probably geography as Ben suggests. I went to the search window and intercepted searches done by other people. -Brad

[agi] seed AI vs Cyc, where does Novamente fall?

2003-02-25 Thread Brad Wyble
Ok, let's get rolling then. Ben, here's a question. To what extent are parts of Novamente hand built ala Cyc? I can easily imagine a dimension here. At one end is Cyc, which is carefully and meticulously constructed by people. The design work is twofold, creating the structure within which

Re: [agi] swarm intelligence

2003-02-26 Thread Brad Wyble
The limitation in multi-agent systems is usually the degree of interaction they can have. The bandwidth between ants, for example, is fairly low even when they are in direct contact, let alone 1 inch apart. This limitation keeps their behavior relatively simple, simple relative to what you

Re: [agi] swarm intelligence

2003-02-26 Thread Brad Wyble
But hopefully the bandwidth of communication is compensated by the power of parallel processing. So long as communication between ants or processing nodes is not completely blocked, some sort of intelligence should self-organize; then it's just a matter of time. As programmers or

Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-26 Thread Brad Wyble
Just to pick a point, Eliezer defines Seed AI as Artificial Intelligence designed for self-understanding, self-modification, and recursive self-enhancement. I do not agree with you that pure Seed AI is a know-nothing baby. I was perhaps a bit extreme in my word choice, but I do not believe

Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
Yep. Novamente contains particular mechanisms for converting between declarative and procedural knowledge... something that is learned procedurally can become declarative and vice versa. In fact, if all goes according to plan (a big if of course ;) Novamente should *eventually* be much

Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
Indeed, making the declarative knowledge derived from rattling off parameters describing procedures useful is a HARD problem... but at least Novamente can get the data, which as you have agreed, would seem to give AI systems an in-principle advantage over humans in this area... It's hard

Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
That was exactly my impression when I last looked seriously into neuroscience (1995-96). I wanted to understand cognitive dynamics, and I hoped that tech like PET and fMRI would do the trick. But nothing existed giving the combination of temporal and spatial acuity that you'd need to

Re: [agi] seed AI vs Cyc, where does Novamente fall?

2003-02-27 Thread Brad Wyble
I actually have a big MEG datafile on my hard drive, which I haven't gotten around to playing with. It consists of about 120 time series, each with about 100,000 points in it. It represents the magnetic field of someone's brain, measured through 120 sensors on their skull, while they
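For readers wanting a feel for data of that shape (120 sensors by roughly 100,000 samples), a sketch using synthetic noise as a stand-in for the real recording; the sampling rate here is a guess, not a property of the actual file:

```python
# Sketch of how one might start poking at a 120-channel MEG-like array.
# The array is synthetic noise standing in for the real recording; the
# sampling rate and frequency band are assumptions for illustration.
import numpy as np

fs = 250.0                                   # assumed samples per second
rng = np.random.default_rng(2)
meg = rng.standard_normal((120, 100_000))    # placeholder for the real file

# Per-sensor power spectrum, then averaged across sensors.
freqs = np.fft.rfftfreq(meg.shape[1], d=1.0 / fs)
power = np.abs(np.fft.rfft(meg, axis=1)) ** 2
mean_power = power.mean(axis=0)

# Crude alpha-band (8-12 Hz) power estimate.
alpha = mean_power[(freqs >= 8) & (freqs <= 12)].mean()
print("mean alpha-band power:", alpha)
```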

Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
Extra credit: I've just read the Crichton novel PREY. Totally transparent movie script but a perfect text book on how to screw up really badly. Basically the formula is 'let the military finance it'. The general public will see this inevitable movie and we will be drawn towards the moral

Re: [agi] Playing with fire

2003-03-03 Thread Brad Wyble
One thing I should add: It's the same hubris I mentioned in my previous message that prompted us to send out satellites effectively bearing our home address and basic physiology on a plaque in the hope that aliens would find it and come to us. Even NASA scientists seem to have no fear of

Re: [agi] Hard Wired Switch

2003-03-05 Thread Brad Wyble
An AGI system will turn against us probably if humans turn against it first. It's like raising a child, if you beat the child every day, they are not going to grow up very friendly. If you raise a child to co-operate and co-exist with its environment, what possible motivation is there for

Re: [agi] Dog-Level Intelligence

2003-03-21 Thread Brad Wyble
It might be easier to build a human intelligence than a dog intelligence simply because we don't have a dog's perspective and we can't ask them to reflect on it. Don't be quick to assume it would be easier just because they are less intelligent. -Brad

Re: [agi] Intelligence enhancement

2003-06-23 Thread Brad Wyble
What wasn't made very clear in that article is that the sole function of TMS is shutting down specific areas of the brain for a short while. So it's not that he's improving a given piece of brain tissue, he's shutting off certain areas which changes the balance of power in the mind, and

Re: [agi] Educating an AI in a simulated world

2003-07-14 Thread Brad Wyble
It's an interesting idea, to raise Novababy knowing that it can adopt different bodies at will. Clearly this will lead to a rather different psychology than we see among humans --- making the in-advance design of educational environments particularly tricky!! First of all, please read

Re: [agi] Robert Hecht-Nielsen's stuff

2003-07-19 Thread Brad Wyble
Well the short gist of this guy's spiel is that Lenat is on the right track. The key is to accumulate terabytes of stupid, temporally forward associations between elements. A little background check reveals that this guy isn't a complete nutcase. He's got some publications (but not many),

Re: [agi] Fw: Do This! Its hysterical!! It works!!

2003-07-20 Thread Brad Wyble
I use elm so I couldn't tell, was there a virus riding on that? Just curious.

[agi] sick of AI flim-flam

2003-08-14 Thread Brad Wyble
A tiny rant about bogus AI. I was depressed to find this site: http://www.intelagent.org/ and another by the same snakeoil salesman (Sol Endelman) http://hardwear.org/personal/PC/ See that pencil drawing of the wearable computer? Lifted straight from the MIT Mithril project website. I'm

Re: [agi] Perl AI Weblog

2003-08-14 Thread Brad Wyble
The open source concept applied to AI, which is essentially what you are doing here, is a very interesting one. However, the open source success stories have always involved lots of tiny achievable goals surrounding one mammoth success (the functional kernel). i.e. there were many stepping stones

Re: [agi] Web Consciousness and self consciousness

2003-08-24 Thread Brad Wyble
Just a word of advice, you'd get more and better feedback if your .htm didn't crash IE. If you've got some weird html in there, tone it down a bit.

RE: [agi] Early AGI training - multiple communications channels /multi-tasking

2003-09-03 Thread Brad Wyble
You're right, it's possible, but don't underestimate the problems of having multiple interaction channels. It's not almost a freebie (to heavily paraphrase your and Phil's comments). Multiple streams of interaction across broadly varying contexts would require some forms of independent

RE: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Brad Wyble
Good point Shane, I didn't even pay attention to the ludicrous size of the number, so keen was I to get my rant out.

RE: [agi] Discovering the Capacity of Human Memory

2003-09-16 Thread Brad Wyble
It's also disconcerting that something like this can make it through the review process. Transdisciplinary is oftentimes a euphemism for combining half-baked and ill-formed ideas from multiple domains into an incoherent mess. This paper is an excellent example. (bad math + bad neuroscience

Re: [agi] Evolution and complexity (Reply to Brad Wyble)

2003-10-09 Thread Brad Wyble
to shortcut the evolutionary process by a few (hundred) orders of magnitude, which is essentially the goal of AI. Seems pretty cut and dried. I think you're thinking too hard. Evolution is conceptually really simple. Take closed system, add energy, bake for 4 billion years, get complexity. -Brad

RE: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-24 Thread Brad Wyble
Guess I'm too used to more biophysical models in which that approach won't work. In the models I've used (which I understand aren't relevant to your approach) you can't afford to ignore a neuron or its synapses because they are under threshold. Interesting dynamics are occurring even when the
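A minimal leaky integrate-and-fire sketch of the point: the membrane potential is integrating inputs even while it stays below threshold, so subthreshold neurons and synapses can't simply be skipped. Parameters are generic textbook values, not taken from any model in this thread:

```python
# Leaky integrate-and-fire neuron: meaningful integration happens while
# the membrane potential is still below threshold. Generic parameters.
import numpy as np

dt, tau = 0.1, 10.0                      # ms per step, membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -54.0, -70.0   # mV
steps = 2000
current = np.zeros(steps)
current[500:1500] = 1.4                  # injected current (arbitrary units)

v = v_rest
spikes = []
for t in range(steps):
    dv = (-(v - v_rest) + current[t] * 12.0) / tau   # leaky subthreshold integration
    v += dv * dt
    if v >= v_thresh:                    # threshold crossing -> spike, then reset
        spikes.append(t * dt)
        v = v_reset

print("spike times (ms):", spikes)
```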

Re: [agi] The emergence of probabilistic inference from hebbian learning in neural nets

2003-12-25 Thread Brad Wyble
This is exactly backward, which makes using it as an unqualified presumption a little odd. Fetching an object from true RAM is substantially more expensive than executing an instruction in the CPU, and the gap has only gotten worse with time. That wasn't my point, which you may have

Re: [agi] Dr. Turing, I presume?

2004-01-10 Thread Brad Wyble
On Sat, 10 Jan 2004, deering wrote: Ben, you are absolutely correct. It was my intention to exaggerate the situation a bit without actually crossing the line. But I don't think it is much of an exaggeration to say that a 'baby' Novamente even with limited hardware and speed is a tremendous

RE: [agi] Dr. Turing, I presume?

2004-01-11 Thread Brad Wyble
I see your point, but I'm not so sure you're correct. If you're devoting resources specifically to getting some attention, you may indeed speed up the process. I wish you luck. However even if you do get such attention, it will still take quite a while for the repercussions to percolate

Re: [agi] Real world effects on society after development of AGI????

2004-01-13 Thread Brad Wyble
On Mon, 12 Jan 2004, deering wrote: Brad, you are correct. The definition of the Singularity cited by Vernor Vinge is the creation of greater-than-human intelligence. And his quite logical contention is that if this entity is more intelligent than us, we can't possibly predict what it will

Re: [agi] Real world effects on society after development of AGI????

2004-01-14 Thread Brad Wyble
On Tue, 13 Jan 2004, deering wrote: Brad, I completely agree with you that the computer/human crossover point is meaningless and all the marbles are in the software engineering not the hardware capability. I didn't emphasize this point in my argument because I considered it a side issue and

RE: [agi] Real world effects on society after development of AGI????

2004-01-15 Thread Brad Wyble
Ben wrote: 1) AI is a tool and we're the user, or 2) AI is our successor and we retire, or 3) The Friendliness scenario, if it's really feasible. This collapse of a huge spectrum of possibilities into three human-society-based categories isn't all that convincing to me... Yes, a list

Re: [agi] Within-cell computation in biological neural systems??

2004-02-10 Thread Brad Wyble
The jury is very much out, Philip. Eliezer goes too far in saying it's a myth perpetuated by computer scientists. They use the simplest representations they know to exist in their models for purposes of parsimony. It's hard to fault them for being rigorous in this respect. But neurons

Re: [agi] Within-cell computation in biological neural systems??

2004-02-23 Thread Brad Wyble
Nonlinear dendritic integration can be accurately captured by the compartmental model which divides dendrites into small sections with ion channels and other internal reaction mechanisms. This is the most accurate level of modeling. It may be possible to simplify this model with machine
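A toy two-compartment passive version of the idea (a dendrite coupled to a soma), with made-up parameters; real compartmental models use many compartments with active ion channels in each:

```python
# Two-compartment passive sketch: current injected into the dendritic
# compartment spreads to the soma through a coupling conductance and is
# attenuated along the way. All parameters are illustrative placeholders.
dt, steps = 0.01, 3000                 # ms per step, number of steps
tau, g_couple = 10.0, 0.3              # membrane time constant (ms), coupling
v_dend, v_soma = 0.0, 0.0              # mV relative to rest

for t in range(steps):
    i_inj = 1.0                        # steady current into the dendrite
    dv_dend = (-v_dend + 15.0 * i_inj + g_couple * (v_soma - v_dend)) / tau
    dv_soma = (-v_soma + g_couple * (v_dend - v_soma)) / tau
    v_dend += dv_dend * dt
    v_soma += dv_soma * dt

print(f"dendrite {v_dend:.1f} mV, soma {v_soma:.1f} mV above rest (attenuated)")
```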

RE: [agi] AGI's and emotions

2004-02-25 Thread Brad Wyble
On Wed, 25 Feb 2004, Ben Goertzel wrote: Emotions ARE thoughts but they differ from most thoughts in the extent to which they involve the primordial brain AND the non-neural physiology of the body as well. This non-brain-centricity means that emotions are more out of 'our' control than

RE: [agi] AGI's and emotions

2004-02-25 Thread Brad Wyble
I guess we call emotions 'feelings' because we feel them - ie. we can feel the effect they trigger in our whole body, detected via our internal monitoring of physical body condition. Given this, unless AGIs are also programmed for thoughts or goal satisfactions to trigger 'physical'

Re: [agi] AGI research consortium

2004-06-28 Thread Brad Wyble
On Mon, 28 Jun 2004, J. Andrew Rogers wrote: There is most certainly not an infinite range of solutions, and there is an extremely narrow range of economically viable solutions. There is certainly an infinite range of solutions in AI, even for a specific problem, let alone for a space of many

Re: [agi] AGI research consortium

2004-06-28 Thread Brad Wyble
Great stuff Andrew. I should have specified "extremely narrow" for implementations in our universe as we generally understand it. This is an old discussion, so I'm not going to rehash it. The enemy of implementation is *tractability*, not will this work in theory if I throw astronomical quantities

Re: [agi] Unlimited intelligence.

2004-10-24 Thread Brad Wyble
On Thu, 21 Oct 2004, deering wrote: True intelligence must be aware of the widest possible context and derive super-goals based on direct observation of that context, and then generate subgoals for subcontexts. Anything with preprogrammed goals is limited intelligence. You have pre-programmed

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
On Sun, 24 Oct 2004, Ben Goertzel wrote: One idea proposed by Minsky at that conference is something I disagree with pretty radically. He says that until we understand human-level intelligence, we should make our theories of mind as complex as possible, rather than simplifying them -- for fear of

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
Hi Brad, really excited about Novamente as an AGI system, we'll need splashy demos. They will come in time, don't worry ;-) We have specifically chosen to Looking forward to it as ever :) I can understand your frustration with this state of affairs. Getting people to buy into your

RE: [agi] Ben vs. the AI academics...

2004-10-24 Thread Brad Wyble
So much for getting work done today :) I noticed at this conference that different researchers were using basic words like knowledge and representation and learning and evolution in very different ways -- which makes communication tricky! Don't get me started on Working Memory. In an AI context,

Re: [agi] Model simplification and the kitchen sink

2004-10-24 Thread Brad Wyble
Another point to this discussion is that the problems of AI and cognitive science are unsolvable by a single person. 1 brain can't understand itself, but perhaps 10,000 brains can understand or design 1 brain. Therefore, these sciences depend on the interaction of communities of scientists in

Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
Intelligence is not necessary to create intelligence. Case in point: us. The evolutionary process is a simple algorithm. In the very text that you quoted, I didn't say intelligence was necessary, I said a resource pool far larger than that of the entity being designed/deconstructed is

RE: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
On Oct 24, 2004, at 2:14 PM, Brad Wyble wrote: Another point to this discussion is that the problems of AI and cognitive science are unsolvable by a single

Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
Yes, but if any kid can buy a system with an Avogadro number of switches, and large corporations 10^6 of that, no reason why we can't breed AI starting from an educated guess (a spiking network of automata controlling virtual co-evolving critters). That future is some 30-50 years remote. I think
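The "breed AI" proposal is, at bottom, an evolutionary loop; a bare-bones generic sketch with a placeholder fitness function (in a real run of the kind described, nearly all the compute would go into evaluating fitness by simulating the co-evolving critters):

```python
# Bare-bones evolutionary algorithm sketch. The fitness function is a
# stand-in: counting 1s replaces "how well did this critter's controller do".
import random

GENOME_LEN, POP, GENERATIONS = 32, 50, 100

def fitness(genome):
    # Placeholder objective standing in for a simulated evaluation.
    return sum(genome)

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 5]                      # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(POP)]

print("best fitness after evolution:", max(fitness(g) for g in population))
```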

Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
Engineering massively emergent systems is not something we're familiar with. But it doesn't mean it can't be done. You know the fitness function, let the system design itself. I'm not saying it can't be done. I'm saying it can't be done by one person. I'm saying the discipline requires the

RE: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
On Mon, 25 Oct 2004, Ben Goertzel wrote: Brad wrote: I know you are all probably getting sick of me talking about how all of this is complicated, but it really is. Hearing that inflating the cortex is a trivial parameter grates on me terribly. Brad, I agree with you re human brains, but of course

Re: [agi] Model simplification and the kitchen sink

2004-10-25 Thread Brad Wyble
The Godel statement represents itself, completely, via diagonalization. Unfortunately I'm not equipped to discuss Godel in depth. All I can do is argue by simple analogy, that is, it takes N > 1 neurons in the brain to mentally represent the idea of a neuron. Therefore the brain cannot represent

Re: [agi] 25000 rat brain cells in a dish can beat you at Flight Simulator

2004-10-31 Thread Brad Wyble
This research represents a major series of technical triumphs, but the lay press versions of the story are somewhat misleading. There is no real learning going on, at least in the sense of synaptic modification. This is not really a brain system which exhibits information processing, reward

Re: [agi] A theorem of change and persistence????

2004-12-20 Thread Brad Wyble
On Sun, 19 Dec 2004, Ben Goertzel wrote: Hmmm... Philip, I like your line of thinking, but I'm pretty reluctant to extend human logic into the wildly transhuman future... Ben, this isn't so much about logic as it is about thermodynamics and it's going to be a very long time indeed before we can

Re: [agi] A theorem of change and persistence????

2004-12-20 Thread Brad Wyble
The Robot's Rebellion : Finding Meaning in the Age of Darwin by Keith E. Stanovich University of Chicago Press (May 15, 2004) ISBN: 0226770893 Cheers, Philip I'm glad you looked this up and posted it, as there are two books titled The Robot's Rebellion, the other being a very controversial

Re: [agi] What are qualia...

2005-01-26 Thread Brad Wyble
On Sat, 22 Jan 2005, Philip Sutton wrote: Once complex-brained / complexly-motivated creatures start using qualia they could play into lifepatterns so profoundly that even obscure trends in the use of qualia for aesthetic purposes could actually affect reproductive prospects. For example, male

RE: [agi] What are qualia...

2005-01-26 Thread Brad Wyble
Yes, that's consistent with my line of thinking. Qualia are intensity of patterns ... in human brains these are mostly neural patterns ... and what we *call* qualia are qualia that are patterns closely associated with the part of the brain that deals with calling ... -- Ben I'd like to make a

Re: [agi] Cell

2005-02-09 Thread Brad Wyble
Hardware advancements are necessary, but I think you guys spend a lot of time chasing white elephants. AGIs are not going to magically appear just because hardware gets fast enough to run them, a myth that is strongly implied by some of the singularity sites I've read. The hardware is a moot

Re: [agi] Cell

2005-02-10 Thread Brad Wyble
On Wed, 9 Feb 2005, Martin Striz wrote: --- Brad Wyble [EMAIL PROTECTED] wrote: Hardware advancements are necessary, but I think you guys spend a lot of time chasing white elephants. AGIs are not going to magically appear just because hardware gets fast enough to run them, a myth that is strongly

Re: [agi] Cell

2005-02-10 Thread Brad Wyble
There are several major stepping stones with hardware speed. One is when you have enough for a nontrivial AI (price tag can be quite astronomic). Second, enough in an *affordable* installation. Third, enough crunch to map the parameter space/design by evolutionary algorithms. Fourth, the

Re: [agi] Cell

2005-02-10 Thread Brad Wyble
The brain is thoroughly riddled with such control architecture; starting at the retina and moving back, it's a constant process of throwing out information and compressing what's left into a more compact form. That's really all your brain is doing from the moment a photon hits your eye,

Re: [agi] Cell

2005-02-10 Thread Brad Wyble
I'd like to start off by saying that I have officially made the transition into old crank. It's a shame it's happened so early in my life, but it had to happen sometime. So take my comments in that context. If I've ever had a defined role on this list, it's in trying to keep the pies from

Re: [agi] Cell

2005-02-10 Thread Brad Wyble
I'm confused, all you want are Ants? Or did you mean AGI in ant-bodies? Social insects are a good model, actually. Yes, all I want is a framework flexible and efficient enough to produce social-insect-level intelligence on hardware of the next decades. If you can come that far, the rest is

Re: [agi] Cell

2005-02-11 Thread Brad Wyble
On Fri, 11 Feb 2005, Eugen Leitl wrote: Just want to be clear Eugen, when you talk about evolutionary simulations, you are talking about simulating the physical world, down to a cellular and perhaps even molecular level? -B

Re: [agi] AI Buzz in Mainstream

2005-02-18 Thread Brad Wyble
On Thu, 17 Feb 2005, JW Johnston wrote: Bob Cowell's At Random column in this month's IEEE Computer magazine was about his renewed excitement in AI given Jeff Hawkins' book and work: http://www.computer.org/computer/homepage/0105/random/index.htm Then today, found a similar article in Computer

RE: [agi] AI Buzz in Mainstream

2005-02-18 Thread Brad Wyble
Brad, I read Hawkins' book, and while I don't agree with his ideas about AI, I don't think he falls prey to any simple homunculus fallacy. Some of my thoughts on his book are at: http://www.goertzel.org/dynapsyc/2004/ProbabilisticVisionProcessing.htm (BTW, my site seems to be down today but it