http://www.tyan.com/PRODUCTS/html/typhoon_b2881.html
Notice the Direct Connect Architecture part. Online
pricing looks very reasonable.
--
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820
On Fri, Jun 09, 2006 at 01:12:49AM -0400, Philip Goetz wrote:
Does anyone know how to compute how much information, in bits, arrives
at the frontal lobes from the environment per second in a human?
Most of the information is visual, and the retina purportedly compresses it 1:126
(obviously, some of it
I had similar feelings about William Pearson's recent message about systems that use reinforcement learning. A reinforcement scenario, from Wikipedia, is defined as: "Formally, the basic reinforcement learning model consists of: 1. a set of environment states S; 2. a set of actions A; and 3. a
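The formal model quoted above can be sketched as a tiny loop. Everything concrete here (the states, actions, transition rule, and reward) is a toy assumption of mine for illustration, not anything proposed in the thread:

```python
import random

# Minimal sketch of the basic RL model: a set of environment states S,
# a set of actions A, and scalar rewards. The environment below is a
# toy 1-D world invented for illustration.

S = ["cold", "warm", "hot"]          # environment states
A = ["move_left", "move_right"]      # actions

def step(state, action):
    """Toy transition: slide along S; reward +1.0 for reaching 'hot'."""
    i = S.index(state)
    i = max(0, i - 1) if action == "move_left" else min(len(S) - 1, i + 1)
    next_state = S[i]
    reward = 1.0 if next_state == "hot" else 0.0
    return next_state, reward

# An agent that picks actions at random still fits the formal model:
state = "cold"
total_reward = 0.0
for _ in range(10):
    action = random.choice(A)
    state, reward = step(state, action)
    total_reward += reward
```

The point of the critique in this thread is precisely that this triple (S, A, scalar reward) is a very narrow frame, which the sketch makes visible: everything the agent can ever "care about" must be squeezed into one real number per step.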
Richard, can you explain the second part of this post differently, in other words? I am very interested in this as a large part of an AI system. I believe there needs to be, in some fashion, a controlling algorithm that tells the AI that it is doing "right", be it either an internal or external human
On 6/9/06, Eugen Leitl [EMAIL PROTECTED] wrote:
Most of the information is visual, and the retina purportedly compresses it 1:126
(obviously, some of it lossy).
http://www.4colorvision.com/dynamics/mechanism.htm
claims 23000 receptor cells on the foveola, so I would just
do a rough calculation of some 50
James,
It is a little hard to know where to start, to be honest. Do you have a
background in any particular area already, or are you pre-college? If
the latter, and if you are interested in the field in a serious way, I
would recommend that you hunt down a good programme in cognitive
Phil,
But the visual calculations below only give you the flow going back to
the visual cortex: didn't you specifically want the frontal lobes?
Given that you could go into a sensory-deprivation tank and yet still
get a good flow into your frontal lobes, how would you know what
proportion
On 09/06/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Likewise, an artificial general
intelligence is not a set of environment states S, a set of actions A,
and a set of scalar rewards in the Reals.)
Watching history repeat itself is pretty damned annoying.
While I would agree with you
Ben: It's a little more than that (more than just speed optimization), because the declarative knowledge may be uncertain, but the procedure derived from it will often be more determinate... [...] Well, we are trying to make Novamente actually do stuff (and succeeding, to a limited but
It IS my contention that there is a relatively simple, inductively-robust (in a mathematical proof sense) formulation of friendliness that will guarantee that there won't be effects that *I* consider undesirable, horrible, or immoral. It will, of course/however, produce a number of effects
William,
It is very simple and I wouldn't apply it to everything that
behaviourists would (we don't get direct rewards for solving crossword
puzzles).
How do you know that we don't get direct rewards on solving crossword
puzzles (or any other mental task)?
Chances are that under certain
Richard, I am a grad student and have studied this for a number of years already. I have dabbled in a few of the areas, but have been unhappy in general with most people's approaches, which are generally either too specific (expert systems) or study fringe problems of AI. I have been spending all my time reading
I definitely get pleasure out of doing them; that appears to be a direct feedback that is easily seen. Another, harder case I saw the other day is long-term gains, which seem to be much harder to visualize. Take, for instance, flossing your teeth: it hurts sometimes, and could make your mouth bleed, not
On 09/06/06, Dennis Gorelik [EMAIL PROTECTED] wrote:
William,
It is very simple and I wouldn't apply it to everything that
behaviourists would (we don't get direct rewards for solving crossword
puzzles).
How do you know that we don't get direct rewards on solving crossword
puzzles (or any
William,
1) I agree that direct reward has to be in-built
(into brain / AI system).
2) I don't see why direct reward cannot be used for rewarding mental
achievements. I think that this direct rewarding mechanism is
preprogrammed in genes and cannot be used directly by the mind.
This mechanism
Various people have the notion that events, concepts, etc., are
represented in the brain as a combination of various sensory percepts,
contexts, subconcepts, etc. This leads to a representational scheme
in which some associational cortex links together the sub-parts making
up a concept or a
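The representational scheme described above can be sketched as a simple data structure: a concept node that links together its sensory sub-parts. The class name, the recursive traversal, and the "cup" example are all my own illustrative assumptions, not a model anyone in the thread proposed:

```python
# Toy sketch of a concept represented as a combination of sub-parts:
# a node in "associational cortex" that links percepts, contexts, and
# subconcepts. Purely illustrative structure.

class Concept:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # links to percepts / subconcepts

    def grounded_percepts(self):
        """Recursively collect the leaf sensory percepts this concept binds."""
        if not self.parts:
            return {self.name}
        leaves = set()
        for part in self.parts:
            leaves |= part.grounded_percepts()
        return leaves

# Hypothetical example: "cup" as a combination of visual and tactile parts.
handle = Concept("curved-edge-percept")
rim = Concept("circular-contour-percept")
heft = Concept("weight-in-hand-percept")
cup = Concept("cup", [Concept("cup-shape", [handle, rim]), heft])

percepts = cup.grounded_percepts()
```

The design choice worth noting is that only the leaves are sensory; everything above them is pure association, which is exactly the feature (and, arguably, the limitation) of the scheme being described.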
Why do we have both an email list and a forum?
Seems they both serve the same purpose.
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]