of creating
an AI to study brain function is that the result might be even more inscrutable than
our brain.
-Brad Wyble
3)
Any successful AGI system is also going to have components in two other
categories:
a) specialized-intelligence components that solve particular problems in
ways having little or nothing to do with truly general intelligence
capability
b) specialized-intelligence components that are
Philip,
I can understand how the brain structure we see in intelligent animals would
emerge from a process of biological evolution where no conscious
design is involved (i.e. specialised non-conscious functions emerge first,
generalised processes emerge later), but why should AGI design
There might even be a benefit to trying to develop an ethical system for
the earliest possible AGIs - and that is that it forces everyone to strip
the concept of an ethical system down to its absolute basics so that it
can be made part of a not very intelligent system. That will
A good, if somewhat lightweight, article on the nature of mind and whether
silicon can eventually manifest consciousness:
http://www.theage.com.au/articles/2003/02/09/1044725672185.html
Kevin
I don't know if consciousness debates are verboten here or not, but I will say that I
I don't think any human alive has the moral and ethical underpinnings to allow them to
resist the corruption of absolute power in the long run. We are all kept in check by
our lack of power, the competition of our fellow humans, the laws of society, and the
instructions of our peers. Remove
I am exceedingly glad that I do not share your opinion on this. Human
altruism *is* possible, and indeed I observe myself possessing a
significant measure of it. Anyone doubting their ability to 'resist
corruption' should not IMO be working in AGI, but should be doing some
serious
I can't imagine the military would be interested in AGI, by its very definition. The
military would want specialized AIs, constructed around a specific purpose and under
their strict control. An AGI goes against everything the military wants from its
weapons and agents. They train soldiers
There are simple external conditions that provoke protective tendencies in
humans following chains of logic that seem entirely natural to us. Our
intuition that reproducing these simple external conditions serves to
provoke protective tendencies in AIs is knowably wrong, failing an
I guess that for AIXI to learn this sort of thing, it would have to be
rewarded for understanding AIXI in general, for proving theorems about AIXI,
etc. Once it had learned this, it might be able to apply this knowledge in
the one-shot PD context. But I am not sure.
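For anyone who wants that setup concrete, here is a minimal sketch of the
one-shot Prisoner's Dilemma being discussed, using the conventional textbook
payoffs (the numbers are illustrative, not anything specified in this thread):

# One-shot Prisoner's Dilemma with the standard textbook payoffs (T > R > P > S).
PAYOFF = {
    ('C', 'C'): (3, 3),  # mutual cooperation: reward R
    ('C', 'D'): (0, 5),  # sucker's payoff S vs. temptation T
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),  # mutual defection: punishment P
}

def play(a, b):
    """Return (payoff_a, payoff_b) for a single shot."""
    return PAYOFF[(a, b)]

# In a single shot, defection dominates whatever the opponent does,
# which is what makes the one-shot case hard for a pure reward-maximizer:
assert play('D', 'C')[0] > play('C', 'C')[0]
assert play('D', 'D')[0] > play('C', 'D')[0]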
For those of us
Processing speed is a necessary but far from sufficient criterion of AGI design. The
software engineering aspect is going to be the bigger limitation by far.
It is common to speak of the brain as X neurons and Y synapses, but the truth of it
is that there are layers of complexity beneath the
Hmmm. I think the critical problem is neither processing speed, NOR
software engineering per se -- it's having a mind design that's correct in
all the details.
Or is that what you meant by software engineering? To me, software
engineering is about HOW you build it, not about WHAT you
It is obvious that no one on this list agrees with me. This does not mean
that I am obviously wrong. The division is very simple.
My position: the doubling time has been reducing and will continue to do so.
Their position: the doubling time is constant.
It is incredibly unlikely
I know this topic is already beaten to death in previous discussions, but I'll throw
out one more point after reading that we may already have the equivalent power of some
3000 minds in raw CPU available worldwide.
The aggregate neural mass of the world's population of insects and animals is
I would like to contribute new SPEC CINT 2000 results as they are posted
to the SPEC benchmark list by semiconductor manufacturers. I expect
to post perhaps 10 times per year with this news. This is the source data
for my Human Equivalent Computing spreadsheet and regression line.
I'm
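A minimal sketch of the sort of regression described above, assuming the SPEC
results are kept as plain (year, score) pairs; the sample numbers below are
placeholders, not real SPEC CINT 2000 data:

import math

# Placeholder (year, SPEC CINT 2000 score) pairs -- illustrative only.
data = [(1996, 10.0), (1998, 25.0), (2000, 60.0), (2002, 150.0)]

# Least-squares fit of log2(score) against year; the slope is
# doublings per year, so its reciprocal is the doubling time.
n = len(data)
xs = [x for x, _ in data]
ys = [math.log2(y) for _, y in data]
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print("doubling time: %.2f years" % (1.0 / slope))

A constant doubling time shows up as a straight line on that log plot; a
shrinking doubling time shows up as upward curvature, which is one way to test
the two positions above against the data.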
Brad writes, Might it not be a more accurate measure to chart mobo+CPU combo prices?
Maybe. If you wanted to research and post this data I'm sure it would be helpful to have.
Check out www.pricewatch.com. They have a search engine which ranks products by
vendors. Using this,
I used the assumptions of Hans Moravec to arrive at Human Equivalent
Computer processing power:
http://www.frc.ri.cmu.edu/~hpm/
Of course as we get closer to AGI then the error delta becomes smaller. I
am comfortable with the name for now and will adjust the metric as more
info
The brain is actually fantastically simple...
It is nothing compared with the core of a linux operating system
(kernel+glibc+gcc).
Heck, even the underlying PC hardware is more complex in a number of
ways than the brain, it seems...
The brain is very RISCy... using a relatively simple
Well, we invented our own specialized database system (in effect) but not
our own network protocol.
In each case, it's a tough decision whether to reuse or reimplement. The
right choice always comes down to the nasty little details...
The biggest Ai waste of time has probably been
[META: please turn line-wrap on, for each of these responses my own
standards for outgoing mail necessitate that I go through each line and
ensure all quotations are properly formatted...]
I think we're suffering from emacs issues, I'm using elm.
Iff the brain is not unique in its
Not exactly. It isn't that I think we should give up on AGI, but rather that
we should be consciously planning for it to take several decades to get
there. We should still tackle the problems in front of us, instead of giving
up on real AI work altogether. But we need to get past the idea
The nature of neuroscience research doesn't really differentiate
between the two at present. In order to understand WHAT a brain part
does, we have to understand HOW it, and all structures connected to it
function. We need to understand the inputs and the outputs, and that's
all HOW.
The AIXI would just construct some nano-bots to modify the reward-button so
that it's stuck in the down position, plus some defenses to
prevent the reward mechanism from being further modified. It might need to
trick humans initially into allowing it the ability to construct such
Now, there is no easy way to predict what strategy it will settle on, but
'build a modest bunker and ask to be left alone' surely isn't it. At the
very least it needs to become the strongest military power in the world, and
stay that way. It might very well decide that exterminating the human
But anyway, using the weighted-averaging rule dynamically and iteratively
can lead to problems in some cases. Maybe the mechanism you suggest -- a
nonlinear average of some sort -- would have better behavior, I'll think
about it.
The part of the idea that guaranteed an eventual equilibrium
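To make the contrast concrete, here is a toy sketch (my own construction, not
Novamente's actual revision rule) of iterated linear weighted averaging next
to a nonlinear power-mean variant:

def linear_step(vals, weights):
    # every node moves halfway toward the weighted mean
    avg = sum(w * v for w, v in zip(weights, vals)) / sum(weights)
    return [0.5 * v + 0.5 * avg for v in vals]

def power_step(vals, weights, p=3.0):
    # nonlinear variant: a power mean, which emphasizes larger values
    avg = (sum(w * v ** p for w, v in zip(weights, vals))
           / sum(weights)) ** (1.0 / p)
    return [0.5 * v + 0.5 * avg for v in vals]

lin = non = [0.1, 0.5, 0.9]
w = [1.0, 1.0, 2.0]
for _ in range(30):
    lin, non = linear_step(lin, w), power_step(non, w)
print(lin)  # settles at the weighted mean, 0.6
print(non)  # settles higher: the power mean is pulled toward large values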
1) Humans use special-case algorithms to solve these problems, a different
algorithm for each domain
2) Humans have a generalized mental tool for solving these problems, but
this tool can only be invoked when complemented by some domain-specific
knowledge
My intuitive inclination is
Hi Ben,
Thanks for the brain teaser! As a sometimes believer in Occam's Razor, I
think it makes sense to assume that Xi and Xj are independent, unless we know
otherwise. This simplifies things, and is the rational thing to do (for
some definition of rational ;-). So why not construct a
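A one-line illustration of what the independence assumption buys (the numbers
are arbitrary):

# Under independence the joint factors into marginals, so no P(xi|xj)
# table is needed for the pair:  P(xi, xj) = P(xi) * P(xj).
p_xi, p_xj = 0.3, 0.6
p_joint = p_xi * p_xj            # 0.18 under independence
p_xi_given_xj = p_joint / p_xj   # equals p_xi: conditioning changes nothing
assert abs(p_xi_given_xj - p_xi) < 1e-12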
This is also an example of how weird the brain can be from an algorithmic
perspective. In designing an AI system, one tends to abstract cognitive
processes and create specific processes based on these abstractions. (And
this is true in NN type AI architectures, not just logicist ones.) But
Brad wrote:
I think this is a core principle of AGI design and that a system that
only makes inferences it *knows* are correct would be fairly
uninteresting and incapable of performing in the real world. The fact
that the information in the P(xi|xj) list is very incomplete is what
Novelty is recognized when a new PredicateNode (representing an observed
pattern) is created, and it's assessed that prior to the analysis of the
particular data the PredicateNode was just recognized in, the system would
have assigned that PredicateNode a much lower truth value. (That is:
I whipped this up this afternoon in case any of you are interested. I tried to gear
it towards functionally relevant features. Enjoy
Reference document: The Hippocampal navigational system
by Brad Wyble
A primer of neurophysiological correlates of spatial navigation in the rodent
On the face of it, these place maps are very reminiscent of attractors as
found in formal attractor neural networks. When multiple noncorrelated
maps are stored in the same collection of neurons, this sounds like multiple
attractors being stored in the same formal neural net.
Yeap, there are well-developed theories about how an autoassociative
network like CA3 could support multiple, uncorrelated attractor
maps and sustain activity once one of them was activated. The
big debate is about how they are formed.
The standard way attractors are formed in formal
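For concreteness, here is the standard textbook construction in miniature:
several uncorrelated random patterns stored as attractors in one
Hopfield-style net, with recall from a corrupted cue. This is the formal model
referred to above, not a claim about CA3 itself:

import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                        # neurons, stored patterns ('maps')
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product rule stores all patterns in one weight matrix.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

# Start from a corrupted copy of pattern 0 and let the net settle.
state = patterns[0].copy()
flip = rng.choice(N, size=30, replace=False)
state[flip] *= -1
for _ in range(10):                  # synchronous sign updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print(np.mean(state == patterns[0]))  # ~1.0: fell into the right attractor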
Cliff wrote:
It's not a firm conclusion, but I'm basing it on information /
complexity theory. This relates, in certain ways, to ideas about
entropy -- and energy is negentropy. I.e. without the sun's input we
would be nothing.
I'm not convinced of this idea on an intuitive basis, but
Kevin said:
I would say that complex information about anything can be conveyed in ways
outside of your current thinking, but if you ask me to prove it, I cannot.
There is evidence of it in things like the ERP experiment which show the
existence of a possible substrate that we have not yet
They are not mapping to IP addresses, probably geography as Ben suggests. I went to
the search window and intercepted searches done by other people.
-Brad
Ok, let's get rolling then.
Ben, here's a question. To what extent are parts of Novamente hand-built a la Cyc?
I can easily imagine a dimension here. At one end is Cyc, which is carefully and
meticulously constructed by people. The design work is twofold: creating the
structure within which
The limitation in multi-agent systems is usually the degree of interaction they can
have. The bandwidth between ants, for example, is fairly low even when they are in
direct contact, let alone 1 inch apart.
This limitation keeps their behavior relatively simple, simple relative to what you
But hopefully the bandwidth of communication is compensated by the power of
parallel processing. So long as communication between ants or processing nodes
is not completely blocked, some sort of intelligence should self-organize; then
it's just a matter of time. As programmers or
Just to pick a point, Eliezer defines Seed AI as "Artificial
Intelligence designed for self-understanding, self-modification, and
recursive self-enhancement." I do not agree with you that pure Seed AI
is a know-nothing baby.
I was perhaps a bit extreme in my word choice, but I do not believe
Yep. Novamente contains particular mechanisms for converting between
declarative and procedural knowledge... something that is learned
procedurally can become declarative and vice versa. In fact, if all goes
according to plan (a big if of course ;) Novamente should *eventually* be
much
Indeed, making the declarative knowledge derived from rattling off
parameters describing procedures useful is a HARD problem... but at least
Novamente can get the data, which, as you have agreed, would seem to give AI
systems an in-principle advantage over humans in this area...
It's hard
That was exactly my impression when I last looked seriously into
neuroscience (1995-96). I wanted to understand cognitive dynamics, and I
hoped that tech like PET and fMRI would do the trick. But nothing existing
gives the combination of temporal and spatial acuity that you'd need to
I actually have a big MEG datafile on my hard drive, which I haven't gotten
around to playing with.
It consists of about 120 time series, each with about 100,000 points in it.
It represents the magnetic field of someone's brain, measured through 120
sensors on their skull, while they
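A sketch of how one might hold and sanity-check data of that shape in NumPy;
the synthetic stand-in below just mimics the described dimensions, since the
file's actual on-disk format isn't specified here:

import numpy as np

# Stand-in with the described shape: 120 sensors x ~100,000 time points.
# Loading the real file would replace this line once its format is known.
meg = np.random.default_rng(0).normal(size=(120, 100_000)).astype(np.float32)

print(meg.shape)                 # (channels, samples)
print(meg.mean(axis=1)[:5])      # mean field at the first 5 sensors
power = (meg ** 2).mean(axis=1)  # crude per-channel power estimate
print(int(power.argmax()))       # strongest (or noisiest) channel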
Extra credit:
I've just read the Crichton novel PREY. Totally transparent movie script, but
a perfect textbook on how to screw up really badly. Basically the formula
is 'let the military finance it'. The general public will see this
inevitable movie and we will be drawn towards the moral
One thing I should add:
It's the same hubris I mentioned in my previous message that prompted us to send out
satellites effectively bearing our home address and basic physiology on a plaque in
the hope that aliens would find it and come to us. Even NASA scientists seem to have
no fear of
An AGI system will probably only turn against us if humans turn against it first.
It's like raising a child, if you beat the child every day, they are not going
to grow up very friendly. If you raise a child to co-operate and co-exist with
its environment, what possible motivation is there for
It might be easier to build a human intelligence than a dog intelligence simply
because we don't have a dog's perspective and we can't ask them to reflect on it.
Don't be quick to assume it would be easier just because they are less intelligent.
-Brad
What wasn't made very clear in that article is that the sole function of TMS is
shutting down specific areas of the brain for a short while.
So it's not that he's improving a given piece of brain tissue, he's shutting off
certain areas which changes the balance of power in the mind, and
It's an interesting idea, to raise Novababy knowing that it can adopt
different bodies at will. Clearly this will lead to a rather different
psychology than we see among humans --- making the in-advance design of
educational environments particularly tricky!!
First of all, please read
Well the short gist of this guy's spiel is that Lenat is on the right track. The key
is to accumulate terabytes of stupid, temporally forward associations between elements.
A little background check reveals that this guy isn't a complete nutcase. He's got
some publications (but not many),
I use elm so I couldn't tell, was there a virus riding on that?
Just curious.
A tiny rant about bogus AI.
I was depressed to find this site:
http://www.intelagent.org/
and another by the same snakeoil salesman (Sol Endelman)
http://hardwear.org/personal/PC/
See that pencil drawing of the wearable computer? Lifted straight from
the MIT Mithril project website. I'm
The open-source approach to AI, which is essentially what you are doing here,
is a very interesting one.
However, the open source success stories have always involved lots of tiny
achievable goals surrounding one mammoth success (the functional kernel).
i.e. there were many stepping stones
Just a word of advice, you'd get more and better feedback if your .htm
didn't crash IE.
If you've got some weird html in there, tone it down a bit.
You're right, it's possible, but don't underestimate the problems of
having multiple interaction channels. It's not almost a freebie (to
heavily paraphrase your and Phil's comments).
Multiple streams of interaction across broadly varying contexts would
require some forms of independent
Good point Shane, I didn't even pay attention to the ludicrous size of the
number, so keen was I to get my rant out.
It's also disconcerting that something like this can make it through the
review process.
Transdisciplinary is oftentimes a euphemism for combining half-baked and
ill-formed ideas from multiple domains into an incoherent mess.
This paper is an excellent example. (bad math + bad neuroscience
to shortcut the evolutionary process
by a few (hundred) orders of magnitude, which is essentially the goal of
AI.
Seems pretty cut and dried. I think you're thinking too hard. Evolution
is conceptually really simple. Take closed system, add energy, bake for 4
billion years, get complexity.
-Brad
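That 'really simple' reading translates almost directly into code. A minimal
evolutionary loop, with a toy bit-counting fitness standing in for four
billion years of physics:

import random

def fitness(genome):
    # Toy objective: count of 1-bits. Any scoring function slots in here.
    return sum(genome)

def evolve(pop_size=50, length=40, generations=200, mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(length)         # crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(fitness(evolve()))   # climbs toward the maximum of 40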
Guess I'm too used to more biophysical models in which that approach won't
work. In the models I've used (which I understand aren't relevant to your
approach) you can't afford to ignore a neuron or its synapses because they
are under threshold. Interesting dynamics are occurring even when the
This is exactly backward, which makes using it as an unqualified
presumption a little odd. Fetching an object from true RAM is substantially
more expensive than executing an instruction in the CPU, and the gap has
only gotten worse with time.
That wasn't my point, which you may have
On Sat, 10 Jan 2004, deering wrote:
Ben, you are absolutely correct. It was my intention to exaggerate the
situation a bit without actually crossing the line. But I don't think
it is much of an exaggeration to say that a 'baby' Novamente even with
limited hardware and speed is a tremendous
I see your point, but I'm not so sure you're correct.
If you're devoting resources specifically to getting some attention, you
may indeed speed up the process. I wish you luck.
However even if you do get such attention, it will still take quite a
while for the repercussions to percolate
On Mon, 12 Jan 2004, deering wrote:
Brad, you are correct. The definition of the Singularity cited by
Vernor Vinge is the creation of greater-than-human intelligence. And
his quite logical contention is that if this entity is more intelligent
than us, we can't possibly predict what it will
On Tue, 13 Jan 2004, deering wrote:
Brad, I completely agree with you that the computer/human crossover
point is meaningless and all the marbles are in the software engineering
not the hardware capability. I didn't emphasize this point in my
argument because I considered it a side issue and
Ben Wrote:
1) AI is a tool and we're the user, or
2) AI is our successor and we retire, or
3) The Friendliness scenario, if it's really feasible.
This collapse of a huge spectrum of possibilities into three
human-society-based categories isn't all that convincing to me...
Yes, a list
The jury is very much out, Philip. Eliezer goes too far in saying it's a
myth perpetuated by computer scientists. They use the simplest
representations they know to exist in their models for purposes of
parsimony. It's hard to fault them for being rigorous in this respect.
But neurons
Nonlinear dendritic integration can be accurately captured by the
compartmental model which divides dendrites into small sections
with ion channels and other internal reaction mechanisms. This
is the most accurate level of modeling. It may be possible to
simplify this model with machine
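A bare-bones sketch of what 'compartmental' means in practice: the dendrite
discretized into passive RC compartments coupled by an axial conductance. The
parameters are illustrative toys; a real compartmental model adds the
per-compartment ion channels and internal reaction mechanisms described above:

import numpy as np

n_comp, dt, steps = 20, 0.01, 1000      # compartments, ms per step, steps
C, g_leak, g_axial = 1.0, 0.1, 2.0      # toy capacitance and conductances
E_leak = -65.0                          # resting potential, mV
V = np.full(n_comp, E_leak)

for t in range(steps):
    I_inj = np.zeros(n_comp)
    I_inj[0] = 0.5                      # steady current into the distal tip
    I_ax = np.zeros(n_comp)
    I_ax[1:] += g_axial * (V[:-1] - V[1:])    # inflow from the left neighbour
    I_ax[:-1] += g_axial * (V[1:] - V[:-1])   # inflow from the right neighbour
    V = V + dt * (I_inj + I_ax - g_leak * (V - E_leak)) / C

print(V[0], V[-1])   # the depolarization attenuates along the cable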
On Wed, 25 Feb 2004, Ben Goertzel wrote:
Emotions ARE thoughts but they differ from most thoughts in the extent to
which they involve the primordial brain AND the non-neural physiology of
the body as well. This non-brain-centricity means that emotions are more
out of 'our' control than
I guess we call emotions 'feelings' because we feel them - ie. we can
feel the effect they trigger in our whole body, detected via our internal
monitoring of physical body condition.
Given this, unless AGIs are also programmed for thoughts or goal
satisfactions to trigger 'physical'
On Mon, 28 Jun 2004, J. Andrew Rogers wrote:
There is most certainly not an infinite range of solutions, and there is
an extremely narrow range of economically viable solutions.
There is certainly an infinite range of solutions in AI,
even for a specific problem, let alone for a space of many
Great stuff Andrew.
I should have specified extremely narrow for implementations in our
universe as we generally understand it.
This is an old discussion, so I'm not going to rehash it. The enemy of
implementation is *tractability*, not will this work in theory if I
throw astronomical quantities
On Thu, 21 Oct 2004, deering wrote:
True intelligence must be aware of the widest possible context and derive super-goals based on direct observation of that context, and then generate subgoals for subcontexts. Anything with preprogrammed goals is limited intelligence.
You have pre-programmed
On Sun, 24 Oct 2004, Ben Goertzel wrote:
One idea proposed by Minsky at that conference is something I disagree with
pretty radically. He says that until we understand human-level
intelligence, we should make our theories of mind as complex as possible,
rather than simplifying them -- for fear of
Hi Brad,
really excited about Novamente as an AGI system, we'll need splashy demos.
They will come in time, don't worry ;-) We have specifically chosen to
Looking forward to it as ever :) I can understand your frustration with
this state of affairs. Getting people to buy into your
So much for getting work done today :)
I noticed at this conference that different researchers were using basic
words like "knowledge" and "representation" and "learning" and "evolution"
in very different ways -- which makes communication tricky!
Don't get me started on Working Memory.
In an AI context,
Another point to this discussion is that the problems of AI and cognitive
science are unsolvable by a single person. 1 brain can't understand
itself, but perhaps 10,000 brains can understand or design 1 brain.
Therefore, these sciences depend on the interaction of communities of
scientists in
Intelligence is not necessary to create intelligence. Case in point: us.
The evolutionary process is a simple algorithm.
In the very text that you quoted, I didn't say intelligence was necessary,
I said a resource pool far larger than that of the entity being
designed/deconstructed is
From: J. Andrew Rogers
Sent: Sunday, October 24, 2004 11:19 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Model simplification and the kitchen sink
On Oct 24, 2004, at 2:14 PM, Brad Wyble wrote:
Another point to this discussion is that the problems of AI and
cognitive science are unsolvable by a single
Yes, but if any kid can buy a system with an Avogadro number of switches, and
large corporations 10^6 times that, there's no reason why we can't breed AI starting
from an educated guess (a spiking network of automata controlling virtual
co-evolving critters). That future is some 30-50 years remote.
I think
Engineering massively emergent systems is not something we're familiar with.
But it doesn't mean it can't be done. You know the fitness function, let the
system design itself.
I'm not saying it can't be done. I'm saying it can't be done by one
person. I'm saying the discipline requires the
On Mon, 25 Oct 2004, Ben Goertzel wrote:
Brad wrote:
I know you are all probably getting sick of me talking about how all of
this is complicated, but it really is. Hearing that inflating the cortex
is a trivial parameter grates on me terribly.
Brad, I agree with you re human brains, but of course
The Godel statement represents itself, completely, via diagonalization.
Unfortunately I'm not equipped to discuss Godel in depth.
All I can do is argue by simple analogy, that is, it takes N1 neurons in
the brain to mentally represent the idea of a neuron. Therefore the brain
cannot represent
This research represents a major series of technical triumphs, but the lay
press versions of the story are somewhat misleading. There is no real
learning going on, at least in the sense of synaptic modification. This
is not really a brain system which exhibits information processing, reward
On Sun, 19 Dec 2004, Ben Goertzel wrote:
Hmmm...
Philip, I like your line of thinking, but I'm pretty reluctant to extend
human logic into the wildly transhuman future...
Ben, this isn't so much about logic as it is about thermodynamics and it's
going to be a very long time indeed before we can
The Robot's Rebellion : Finding Meaning in the Age of Darwin
by Keith E. Stanovich
University of Chicago Press (May 15, 2004)
ISBN: 0226770893
Cheers, Philip
I'm glad you looked this up and posted it, as there are two books titled
The Robot's Rebellion, the other being a very controversial
On Sat, 22 Jan 2005, Philip Sutton wrote:
Once complex-brained / complexly-motivated creatures start using qualia they
could play into life patterns so profoundly that even obscure trends in the use
of qualia for aesthetic purposes could actually affect reproductive prospects.
For example, male
Yes, that's consistent with my line of thinking.
Qualia are intensity of patterns ... in human brains these are mostly neural
patterns ...
and what we *call* qualia are qualia that are patterns closely associated
with the part of the brain that deals with calling ...
-- Ben
I'd like to make a
Hardware advancements are necessary, but I think you guys spend a lot of
time chasing white elephants. AGIs are not going to magically appear
just because hardware gets fast enough to run them, a myth that is
strongly implied by some of the singularity sites I've read.
The hardware is a moot
On Wed, 9 Feb 2005, Martin Striz wrote:
--- Brad Wyble [EMAIL PROTECTED] wrote:
Hardware advancements are necessary, but I think you guys spend a lot of
time chasing white elephants. AGIs are not going to magically appear
just because hardware gets fast enough to run them, a myth that is
strongly
There are several major stepping stones with hardware speed. One is when you have
enough for a nontrivial AI (price tag can be quite astronomic). Second,
enough in an *affordable* installation. Third, enough crunch to map the
parameter space/design by evolutionary algorithms. Fourth, the
The brain is thoroughly riddled with such control architecture, starting
at the retina and moving back, it's a constant process of throwing out
information and compressing what's left into a more compact form. That's
really all your brain is doing from the moment a photon hits your eye,
I'd like to start off by saying that I have officially made the transition
into old crank. It's a shame it's happened so early in my life, but it
had to happen sometime. So take my comments in that context. If I've
ever had a defined role on this list, it's in trying to keep the pies from
I'm confused, all you want are Ants?
Or did you mean AGI in ant-bodies?
Social insects are a good model, actually. Yes, all I want is a framework
flexible and efficient enough to produce social-insect-level intelligence
on the hardware of the coming decades.
If you can come that far, the rest is
On Fri, 11 Feb 2005, Eugen Leitl wrote:
Just want to be clear Eugen, when you talk about evolutionary simulations,
you are talking about simulating the physical world, down to a
cellular and perhaps even molecular level?
-B
On Thu, 17 Feb 2005, JW Johnston wrote:
Bob Colwell's At Random column in this month's IEEE Computer magazine was
about his renewed excitement in AI given Jeff Hawkins' book and work:
http://www.computer.org/computer/homepage/0105/random/index.htm
Then today, found a similar article in Computer
Brad,
I read Hawkins' book, and while I don't agree with his ideas about AI, I
don't think he falls prey to any simple homunculus fallacy.
Some of my thoughts on his book are at:
http://www.goertzel.org/dynapsyc/2004/ProbabilisticVisionProcessing.htm
(BTW, my site seems to be down today but it