2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
I'm getting several replies to this that indicate that people don't understand
what a utility function is.
If you are an AI (or a person) there will be occasions where you have to make
choices. In fact, pretty much everything you do involves
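A bare-bones illustration of the definition (my sketch, with toy numbers, not
anyone's actual design): a utility function just maps outcomes to real
numbers, and "choice" means taking the action whose predicted outcome scores
highest.

    # Sketch: choice under a utility function (illustrative only).
    # utility() maps an outcome to a real number; the agent picks the
    # action whose predicted outcome has the highest utility.

    def utility(outcome):
        # toy scoring: net reward
        return outcome["reward"] - outcome["cost"]

    def choose(actions, predict):
        # predict(action) -> expected outcome of taking that action
        return max(actions, key=lambda a: utility(predict(a)))

    actions = ["walk", "drive"]
    predict = lambda a: {"walk":  {"reward": 3, "cost": 1},
                         "drive": {"reward": 5, "cost": 4}}[a]
    print(choose(actions, predict))   # -> "walk" (utility 2 beats 1)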
Richard,
On 6/11/08, Richard Loosemore [EMAIL PROTECTED] wrote:
I am using cognitive science as a basis for AGI development,
If my fear of paradigm shifting proves to be unfounded, then you may well
be right. However, I would be surprised if there weren't a LOT of paradigm
shifting going on.
Jiri, Josh, et al,
On 6/11/08, Jiri Jelinek [EMAIL PROTECTED] wrote:
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
If you can modify your mind, what is the shortest path to satisfying all
your
goals? Yep, you got it: delete the goals.
We can set whatever
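Josh's shortcut above can be shown in a few lines (an illustrative sketch,
not anyone's actual goal system): if satisfaction is measured as the fraction
of goals met, a self-modifying optimizer that can edit its own goal list
finds the trivial maximum immediately.

    # Toy illustration of the "delete the goals" shortcut.
    def satisfaction(goals, world):
        # fraction of goals satisfied; vacuously 1.0 when no goals remain
        if not goals:
            return 1.0
        return sum(g(world) for g in goals) / len(goals)

    goals = [lambda w: w["fed"], lambda w: w["rich"]]
    world = {"fed": True, "rich": False}
    print(satisfaction(goals, world))   # 0.5

    # An agent allowed to modify its own goal list finds the cheapest
    # "improvement" at once:
    goals.clear()
    print(satisfaction(goals, world))   # 1.0 -- Nirvana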
If anyone is interested, I have some additional information on the C870
NVIDIA Tesla card. I'll be happy to send it to you off-list. Just
contact me directly.
Cheers,
Brad
--- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Hmmph. I offer to build anyone who wants one a
human-capacity machine for
$100K, using currently available stock parts, in one rack.
Approx 10 teraflops, using Teslas.
(http://www.nvidia.com/object/tesla_c870.html)
The
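For the record, the arithmetic behind the quoted offer (rough figures; the
card price is my assumption, not from the message):

    # Back-of-envelope check on the 10-teraflop rack (illustrative figures;
    # the card price is an assumption, not a quote).
    peak_tflops_per_card = 0.5    # approx. peak single precision, Tesla C870
    card_price_usd = 1300         # rough 2008 list price (assumption)

    cards = 10 / peak_tflops_per_card      # ~20 cards for ~10 teraflops
    print(cards, cards * card_price_usd)   # 20.0 cards, ~$26,000 in GPUs;
                                           # hosts, RAM, and chassis take
                                           # the rest of the $100K budget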
If you have a program structure that can make decisions that would otherwise
be vetoed by the utility function, but get through because it isn't executed
at the right time, to me that's just a bug.
Josh
On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
If you have a fixed-priority
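A toy rendering of the bug Josh describes above (my sketch, not any real
architecture): if some code path issues actions without routing them through
the utility check, the veto is a dead letter.

    # Sketch of the "bug": an execution path that skips the utility veto.
    def vetoed(action):
        # pre-choice check enforcing the top-priority constraint
        return action == "abandon_top_goal"

    def deliberate_and_act(action, execute):
        if vetoed(action):               # the intended gate
            return "refused"
        return execute(action)

    def hasty_act(action, execute):
        # BUG: fires before/without the veto ever running
        return execute(action)

    execute = lambda a: "did " + a
    print(deliberate_and_act("abandon_top_goal", execute))  # refused
    print(hasty_act("abandon_top_goal", execute))           # did abandon_top_goal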
Isn't your Nirvana trap exactly equivalent to Pascal's Wager? Or am I
missing something?
- Original Message -
From: J Storrs Hall, PhD [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, June 11, 2008 10:54 PM
Subject: Re: [agi] Nirvana
On Wednesday 11 June 2008 06:18:03 pm,
--- On Thu, 6/12/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
But it doesn't work for full-fledged AGI. Suppose you
are a young man who's always been taught not to get yourself killed, and
not to kill people (as top
priorities). You are confronted with your country being invaded and faced
Right. You're talking Kurzweil HEPP and I'm talking Moravec HEPP (and shading
that a little).
I may want your gadget when I go to upload, though.
Josh
On Thursday 12 June 2008 10:59:51 am, Matt Mahoney wrote:
--- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Hmmph. I offer to build anyone who wants one a human-capacity machine for
$100K, using currently available stock parts, in one rack. Approx 10
teraflops, using Teslas.

Two things I think are interesting about these trends in
high-performance commodity hardware:
1) The flops/bit ratio (processing power vs memory) is skyrocketing. The
move to parallel architectures makes the number of high-level operations per
transistor go up, but bits of memory per
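To put rough numbers on the flops/bit trend (my figures, era-appropriate
estimates, all approximate): compare a 2008 CPU box with a Tesla C870.

    # Rough flops-per-bit comparison, circa 2008 (illustrative figures).
    cpu_flops, cpu_bits = 10e9, 4 * 2**30 * 8     # ~10 Gflops, 4 GB RAM
    gpu_flops, gpu_bits = 500e9, 1.5 * 2**30 * 8  # ~0.5 Tflops, 1.5 GB on-card

    print(cpu_flops / cpu_bits)   # ~0.3 flops/s per bit of memory
    print(gpu_flops / gpu_bits)   # ~39  flops/s per bit -- two orders higher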
You're missing the *major* distinction between a program structure that can
make decisions that would otherwise be vetoed by the utility function and a
program that can't even THINK ABOUT a choice (both phrases are yours).
Among other things not being able to even think about a choice
Andrew, Vladimir, Mark, et al,
This discussion is parallel to an ongoing discussion I had with several
neuroscientists back in the 1970s-1980s. My assertion was that once you
figure out just what it is that the neurons are doing, the difference
between neural operation and optimal operation
I think the ratio of processing power to memory to bandwidth is just about
right for AGI. Processing power and memory increase at about the same rate
under Moore's Law. The time it takes a modern computer to clear all of its
memory is on the same order as the response time of a neuron, and this
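Matt's claim reduces to one division (illustrative 2008-era figures; both
numbers are my assumptions): time to sweep all of memory is capacity over
bandwidth, which lands within roughly an order of magnitude of the
tens-to-hundreds of milliseconds over which a neuron integrates and fires.

    # Time to touch all of main memory vs. neural response time (rough).
    memory_bytes  = 4 * 2**30    # ~4 GB of RAM (assumption)
    bandwidth_Bps = 10e9         # ~10 GB/s memory bandwidth (assumption)

    t = memory_bytes / bandwidth_Bps
    print(t)   # ~0.43 s; neuron integrate-and-fire times are ~0.01-0.1 s,
               # i.e., within roughly an order of magnitude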
On Thu, Jun 12, 2008 at 3:36 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
... and here we have the makings of AGI run amok...
My point: it is usually possible to make EVERYONE happy with the results,
but only with a process that roots out the commonly held invalid assumptions.
Like Gort
Mike, et al,
There are several interesting neural situations in nature. Indeed, much of
what we know about synapses comes from the lobster stomatogastric ganglion -
that twenty-some neuron structure that controls the manufacture of lobster
poop. The thing that is so special here is that the
On Thu, Jun 12, 2008 at 6:44 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
If you have a fixed-priority utility function, you can't even THINK ABOUT the
choice. Your pre-choice function will always say "Nope, that's bad" and
you'll be unable to change. (This effect is intended in all the RSI
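A sketch of the structural point Josh is making (illustrative only): with a
fixed top priority applied as a pre-choice filter, the forbidden option is
pruned before deliberation, so the agent never weighs it at all.

    # Fixed-priority pre-choice filter: forbidden options are pruned
    # before deliberation ever sees them.
    TOP_PRIORITY = lambda option: option != "kill"   # immutable constraint

    def thinkable(options):
        # anything failing the top priority never enters deliberation
        return [o for o in options if TOP_PRIORITY(o)]

    def deliberate(options):
        # whatever scoring happens here, "kill" was never a candidate
        return max(options, key=len)

    print(thinkable(["flee", "negotiate", "kill"]))   # ['flee', 'negotiate']
    print(deliberate(thinkable(["flee", "negotiate", "kill"])))  # 'negotiate'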
--- On Thu, 6/12/08, Mike Tintner [EMAIL PROTECTED] wrote:
Matt: I think the ratio of processing power to memory to
bandwidth is just about right for AGI.
All these calculations (which are very interesting) presume
that all computing
is done in the brain. They ignore the possibility (well,
Jiri,
The point that you apparently missed is that substantially all problems fall
cleanly into two categories:
1. The solution is known (somewhere in the world and hopefully to the AGI),
in which case, as far as the user is concerned, this is an issue of
ignorance that is best cured by
On Jun 12, 2008, at 9:25 AM, Steve Richfield wrote:
My assertion was that once you figure out just what it is that the
neurons are doing, the difference between neural operation and
optimal operation will be negligible. This is because of the 200
million years they have had to refine
--- On Wed, 6/11/08, Jey Kottalam [EMAIL PROTECTED] wrote:
On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD
[EMAIL PROTECTED] wrote:
The real problem with a self-improving AGI, it seems
to me, is not going to be
that it gets too smart and powerful and takes over the
world. Indeed, it
On Thu, Jun 12, 2008 at 10:23 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Huh? I used those phrases to describe two completely different things: a
program that CAN change its highest priorities (due to what I called a bug),
and one that CAN'T. How does it follow that I'm missing a
Josh,
You said - If you have a fixed-priority utility function, you can't even
THINK ABOUT the choice. Your pre-choice function will always say "Nope,
that's bad" and you'll be unable to change. (This effect is intended in all
the RSI stability arguments.)
I replied - Doesn't that depend upon
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
On Thursday 12 June 2008 02:48:19 am, William Pearson wrote:
The kinds of choices I am interested in designing for at the moment
are: should program X or program Y get control of this bit of memory or
IRQ for the next time period. X and Y can
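In miniature, that kind of choice might look like this (my sketch; the
scores are assumed placeholders for whatever fitness measure the system
actually uses):

    # Toy arbiter: grant a resource (memory region / IRQ) for the next
    # time period to whichever program earned the better score so far.
    def arbitrate(resource, candidates, scores):
        winner = max(candidates, key=lambda p: scores[p])
        return {resource: winner}

    scores = {"X": 0.72, "Y": 0.65}   # e.g., measured usefulness last period
    print(arbitrate("IRQ7", ["X", "Y"], scores))   # {'IRQ7': 'X'}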
I think processor-to-memory and inter-processor communications are
currently far short
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 12, 2008 12:33 PM
To: agi@v2.listbox.com
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
As far as I know, GPUs are far from optimal for neural net calculation. For
some applications, speedup factors run in the 1000x range, but for NNs I
have only seen speedups of one order of magnitude (10x).
For example, see attached paper
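One plausible account of that ~10x ceiling, sketched with a roofline-style
estimate (my figures, all approximate): neural net inner loops fetch a
4-byte weight for every ~2 flops, so they are bound by memory bandwidth, not
peak flops, on both CPU and GPU.

    # Roofline-style estimate: why NNs see ~10x on a GPU, not ~1000x.
    # Matrix-vector multiply does ~2 flops per 4-byte weight fetched:
    intensity = 2 / 4.0                  # flops per byte

    def attainable(peak_flops, bw_Bps):
        # achievable rate is capped by compute OR by bandwidth * intensity
        return min(peak_flops, bw_Bps * intensity)

    gpu = attainable(500e9, 77e9)   # Tesla-class: ~0.5 Tflops, ~77 GB/s
    cpu = attainable(10e9, 6e9)     # 2008 CPU: ~10 Gflops, ~6 GB/s
    print(gpu / cpu)                # ~13x -- bandwidth-bound on both sides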
On Thu, Jun 12, 2008 at 4:59 PM, Matt Mahoney [EMAIL
--- On Thu, 6/12/08, Ed Porter [EMAIL PROTECTED] wrote:
I think processor-to-memory and inter-processor
communications are currently far short
Your concern is over the added cost of implementing a sparsely connected
network, which slows memory access and requires more memory for
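The overhead being referred to can be made concrete (a sketch, with assumed
sizes): storing an index alongside each weight roughly doubles the bytes per
connection, and the indirection defeats sequential memory access.

    # Memory cost of a sparsely connected net vs. dense (rough sketch).
    n, k = 10**6, 10**3     # n neurons, k synapses each (assumed sizes)
    weight_bytes = 4        # 32-bit weight
    index_bytes  = 4        # 32-bit target index, needed only when sparse

    dense_equiv = n * n * weight_bytes                  # full n x n matrix
    sparse      = n * k * (weight_bytes + index_bytes)  # weights + indices
    print(dense_equiv, sparse)  # sparse wins on total size, but each synapse
                                # now costs an extra index and a random access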
J Storrs Hall, PhD wrote:
The real problem with a self-improving AGI, it seems to me, is not going to be
that it gets too smart and powerful and takes over the world. Indeed, it
seems likely that it will be exactly the opposite.
If you can modify your mind, what is the shortest path to