Matt: It is like the way evolution works, except that there is a human in the
loop to make the process a little more intelligent.
IOW this is like AGI, except that it's narrow AI. That's the whole point - you
have to remove the human from the loop. In fact, it also sounds like a
misconceived
Mike Tintner wrote:
Matt: It is like the way evolution works, except that there is a human in the
loop to make the process a little more intelligent.
IOW this is like AGI, except that it's narrow AI. That's the whole point -
you have to remove the human from the loop. In fact, it also
Ignoring Steve because we are simply going to have to agree to disagree...
And I don't see enough value in trying to understand his paper. I said the
math was overly complex, but what I really meant is that the approach is
overly complex and so filled with research-specific jargon that I don't care to
Your computer monitor flashes 75 frames per second, but you don't notice any
flicker because light-sensing neurons have a response delay of about 100 ms.
Motion detection begins in the retina by cells that respond to contrast between
light and dark moving in specific directions computed by
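Matt's point can be sketched numerically. A first-order low-pass filter with a 100 ms time constant (a crude stand-in for photoreceptor response, not a real retina model) passes almost none of a 75 Hz flicker but a large fraction of a 10 Hz one:

```python
import math

def ripple_fraction(freq_hz, tau_s):
    # Peak-to-peak ripple remaining after a 50% duty square wave
    # (full-on/full-off flicker) passes through a first-order
    # low-pass filter with time constant tau_s.
    half_period = 1.0 / (2 * freq_hz)      # duration of each on/off phase
    x = math.exp(-half_period / tau_s)
    return (1 - x) / (1 + x)               # steady-state ripple, 0..1

print(f"75 Hz monitor: {ripple_fraction(75, 0.100):.1%} residual flicker")
print(f"10 Hz strobe:  {ripple_fraction(10, 0.100):.1%} residual flicker")
```

With a 100 ms time constant the 75 Hz ripple comes out around 3%, versus roughly 25% at 10 Hz, which is one crude way to see why the monitor looks steady.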
Thank you Matt. That's very useful input.
On Mon, Jun 21, 2010 at 9:57 AM, Matt Mahoney matmaho...@yahoo.com wrote:
Your computer monitor flashes 75 frames per second, but you don't notice
any flicker because light sensing neurons have a response delay of about 100
ms. Motion detection begins
The brain does not get the high frame rate signals, as the eye itself
only gives the brain images at 24 frames per second. Otherwise you wouldn't be
able to watch a movie.
Any comments?
On 6/21/10, Matt Mahoney matmaho...@yahoo.com wrote:
Your computer monitor flashes 75 frames per second, but you don't
I'm reading about the retina motion processing. Maybe the brain does only
get 24 frames per second, but the retina may send it information about
hypothesized or likely movements. The brain can then do some further
processing, such as incorporating kinesthetic feedback that tells the
brain about
Steve,
You didn't mention this, so I guess I will: larger animals do generally have
larger brains, coming close to a fixed brain/body ratio. Smarter animals
appear to be the ones with a higher brain/body ratio rather than simply a
larger brain. This to me suggests that the amount of sensory
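The brain/body ratio Abram describes is usually quantified as Jerison's encephalization quotient, EQ = brain mass / (0.12 x body mass^(2/3)), with masses in grams. A rough sketch (the masses below are approximate textbook values, used only for illustration):

```python
def eq(brain_g, body_g):
    # Jerison's encephalization quotient: observed brain mass over
    # the mass expected for a typical mammal of that body size.
    return brain_g / (0.12 * body_g ** (2.0 / 3.0))

# Approximate masses in grams, for illustration only.
for name, brain_g, body_g in [("human", 1350, 65_000),
                              ("elephant", 4700, 5_000_000),
                              ("cat", 30, 4_000)]:
    print(f"{name:8s} EQ ~ {eq(brain_g, body_g):.1f}")
```

On these numbers the elephant's much larger brain still yields an EQ near 1, while the human comes out near 7, matching the point that the ratio, not the absolute brain size, tracks apparent smartness.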
Hi
I'm new to this list, but I've been thinking about consciousness, cognition
and AI for about half of my life (I'm 32 years old). As is probably the
case for many of us here, my interests began with direct recognition of the
depth and wonder of varieties of phenomenological experiences-- and
(I'm a little late in this conversation. I tried to send this message the
other day but I had my list membership configured wrong. -Rob)
-- Forwarded message --
From: rob levy r.p.l...@gmail.com
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
My underlying thought here is that we may all be working on the wrong
problems. Instead of working on the particular analysis methods (AGI) or
self-organization theory (NN), perhaps if someone found a
Abram,
On Mon, Jun 21, 2010 at 8:38 AM, Abram Demski abramdem...@gmail.com wrote:
Steve,
You didn't mention this, so I guess I will: larger animals do generally
have larger brains, coming close to a fixed brain/body ratio. Smarter
animals appear to be the ones with a higher brain/body ratio
I think a real world solution to grid stability would require greater use of
sensory devices (and some sensory-feedback devices). I really don't know
for sure, but my assumption is that electrical grid management has relied
mostly on the electrical reactions of the grid itself, and here you are
John,
Your comments appear to be addressing reliability, rather than stability...
On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com wrote:
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
My underlying thought here is that we may all
rob levy wrote:
I am secondarily motivated by the fact that (considerations of morality or
amorality aside) AGI is inevitable, though it is far from being a foregone
conclusion that powerful general thinking machines will have a first-hand
subjective relationship to a world, as living
Jim,
Yours is the prevailing view in the industry. However, it doesn't seem to
work. Even given months of time to analyze past failures, they are often
unable to divine rules that would have reliably avoided the problems. In
short, until you adequately understand the system that your sensors are
rob levy wrote:
On a related note, what is everyone's opinion on why evolutionary algorithms
are such a miserable failure as creative machines, despite their successes
in narrow optimization problems?
Lack of computing power. How much computation would you need to simulate the 3
billion
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
John,
Your comments appear to be addressing reliability, rather than
stability...
Both can be very interrelated. It can be an oversimplification to separate
them, or too impractical/theoretical.
On Mon,
One constant in ALL proposed methods leading to computational intelligence
is formulaic operation, where agents, elements, neurons, etc., process
inputs to produce outputs. There is scant biological evidence for this,
and plenty of evidence for a balanced equation operation. Note that
unbalancing
John,
On Mon, Jun 21, 2010 at 10:06 AM, John G. Rose johnr...@polyplexic.com wrote:
Solutions for large-scale network stabilities would vary per network
topology, function, etc..
However, there ARE some universal rules, like the 12 dB/octave
requirement.
Really? Do networks such as
Steve: For example, based on ability to follow instructions, cats must be REALLY
stupid.
Either that or really smart. Who wants to obey some dumb human's instructions?
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed:
Isn't this the argument for GAs running on multicore processors? Now each
organism has one core, or a fraction of a core. The brain will then evaluate
*fitness* against a fitness criterion.
The fact they can be run efficiently in parallel is one of the advantages of
GAs.
Let us look at this another
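The one-organism-per-core idea is easy to sketch, since each fitness evaluation is independent of the others. This is a minimal Python illustration; `fitness` here is a made-up bit-counting objective, not anyone's actual GA:

```python
from multiprocessing import Pool

def fitness(genome):
    # Hypothetical objective: count the 1-bits in the genome.
    return sum(genome)

def evaluate_population(population, workers=4):
    # Fitness evaluations don't interact, so they map cleanly
    # onto separate cores, one organism (genome) per worker task.
    with Pool(workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    population = [[(n + i) % 2 for i in range(8)] for n in range(6)]
    print(evaluate_population(population))
```

Selection and crossover still serialize between generations, but the evaluation step, which usually dominates the cost, scales with the number of cores.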
My comment is this. The brain in fact takes whatever speed it needs: for
simple processing it runs at full speed, while more complex processing does
not require the same speed and so proceeds more slowly. This is really an
extension of what DESTIN does spatially.
- Ian Parker
On 21 June 2010
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
Really? Do networks such as botnets really care about this? Or does it
apply?
Anytime negative feedback can become positive feedback because of delays
or phase shifts, this becomes an issue. Many
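A minimal numerical sketch of that mechanism: a proportional controller pushes x toward zero, but sees its measurement several samples late. The gain and delay values are arbitrary, chosen only to straddle the stability boundary:

```python
def settle(gain, delay_steps, steps=200):
    # x[t] = x[t-1] - gain * (measurement of x, delay_steps samples old).
    # Returns the peak amplitude over the last 20 samples.
    x = [1.0] + [0.0] * steps
    for t in range(1, steps + 1):
        stale = x[max(t - 1 - delay_steps, 0)]   # delayed measurement
        x[t] = x[t - 1] - gain * stale
    return max(abs(v) for v in x[-20:])

print(settle(0.5, 0))   # no delay: the loop damps out
print(settle(0.5, 4))   # same gain with delay: the correction arrives out
                        # of phase, and the loop oscillates and grows
```

Same sign of feedback, same gain; only the phase lag differs, yet one loop settles and the other diverges.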
Matt,
I'm not sure I buy that argument for the simple reason that we have massive
cheap processing now and pretty good knowledge of the initial conditions of
life on our planet (if we are going literal here and not EC in the
abstract), but it's definitely a possible answer. Perhaps not enough
Rob,
Real evolution had full freedom to evolve. Genetic algorithms usually don't.
If they did, the number of calculations needed to really simulate evolution
on the scale that created us would be so astronomical that it would not be
possible. So, what Matt said is absolutely correct.
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
That being the case, why don't elephants and other large creatures have
really gigantic brains? This seems to be SUCH an obvious evolutionary step.
Personally I've always wondered how elephants managed to evolve
http://www.zerohedge.com/article/fast-reading-computers-are-about-drink-your-trading-milkshake
Russell,
On Mon, Jun 21, 2010 at 1:29 PM, Russell Wallace
russell.wall...@gmail.comwrote:
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
That being the case, why don't elephants and other large creatures have
really gigantic brains? This seems to be SUCH
On Mon, Jun 21, 2010 at 11:05 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Another pet peeve of mine. They could/should do MUCH more fault tolerance
than they now do. Present puny efforts are completely ignorant of past
developments, e.g. Tandem Nonstop computers.
Or perhaps they
John,
Hmmm, I thought that with your EE background, the 12 dB/octave would
bring back old sophomore-level coursework. OK, so you were sick that day.
I'll try to fill in the blanks here...
On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com wrote:
Of course, there is the
My view on the efficiency of the brain's learning has to do with low latency
communications in general, which is a similar concept to high frame rates,
but not limited to the visual senses. Low latency produces rapid feedback.
Rapid feedback produces rapid adaptation, and the reduced weight of
That should be a reduction of the penalty caused by short-term memory
loss.
On Mon, Jun 21, 2010 at 4:20 PM, Mark Nuzzolilo nuzz...@gmail.com wrote:
My view on the efficiency of the brain's learning has to do with low
latency communications in general, which is a similar concept to high
Hi,
* AGI should be scalable - more data just means the potential for more
accurate results.
* More data can chew up more computation time without a benefit, i.e. if
all you want to do is identify a bird, it's still a bird at 1 fps and at
1000 fps.
* Don't aim for precision, aim for generality. E.g. AGI
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
John,
Hmmm, I thought that with your EE background, the 12 dB/octave would
bring back old sophomore-level coursework. OK, so you were sick that day.
I'll try to fill in the blanks here...
Thanks man.