From: [EMAIL PROTECTED]
Sent: Friday, June 13, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
With regard to representing different types of synapses (various time
delays, strength bounds, learning rates, etc.), this information can be
recorded.
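As a rough sketch (C, with made-up field names and widths; none of this is
from an actual design), per-synapse parameters like these could sit directly
in each matrix element:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-synapse record: one element of the connection
 * matrix.  Field names and widths are illustrative assumptions. */
typedef struct {
    uint8_t weight;        /* synaptic strength, e.g. 6 bits used    */
    uint8_t delay_ms;      /* axonal/synaptic time delay             */
    uint8_t w_min, w_max;  /* strength bounds for this synapse type  */
    uint8_t lrate_log2;    /* learning rate as a power-of-two shift  */
} Synapse;

int main(void) {
    Synapse s = { .weight = 32, .delay_ms = 5,
                  .w_min = 0, .w_max = 63, .lrate_log2 = 4 };
    printf("record size: %zu bytes\n", sizeof(Synapse));
    printf("w=%u delay=%ums bounds=[%u,%u]\n",
           s.weight, s.delay_ms, s.w_min, s.w_max);
    return 0;
}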
--- On Sat, 6/14/08, Ed Porter [EMAIL PROTECTED] wrote:
[Ed Porter] I still think you are going to need multi-bit weights at each
row-column element in the matrix -- since almost all representations of
synapses I have seen have assumed a weight having at least 6 bits of
information.
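To make the 6-bit point concrete, here is a hedged illustration (my own
packing scheme, not from any real system) of storing 6-bit weights
end-to-end, so a row costs 6 bits per synapse rather than a full byte or word:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack/unpack 6-bit weights at arbitrary positions in a byte buffer.
 * Purely illustrative; the buffer must be zeroed before packing. */
static void put6(uint8_t *buf, size_t i, uint8_t w) {
    size_t bit = i * 6;
    uint16_t v = (uint16_t)(w & 0x3F) << (bit % 8);
    buf[bit / 8]     |= (uint8_t)v;
    buf[bit / 8 + 1] |= (uint8_t)(v >> 8);
}

static uint8_t get6(const uint8_t *buf, size_t i) {
    size_t bit = i * 6;
    uint16_t v = buf[bit / 8] | ((uint16_t)buf[bit / 8 + 1] << 8);
    return (v >> (bit % 8)) & 0x3F;
}

int main(void) {
    uint8_t row[8];     /* 8 weights * 6 bits = 6 bytes, plus padding */
    memset(row, 0, sizeof row);
    for (size_t i = 0; i < 8; i++) put6(row, i, (uint8_t)(i * 9 % 64));
    for (size_t i = 0; i < 8; i++) printf("%zu: %u\n", i, get6(row, i));
    return 0;
}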
Matt,
Thank you for your reply; I find it very thought-provoking.
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 12, 2008 7:23 PM
To: agi@v2.listbox.com
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
--- On Fri, 6/13/08, Ed Porter [EMAIL PROTECTED] wrote:
[Ed Porter] -- Why couldn't each of the 10^6 fibers have multiple
connections along its length within the cm^3? (It could still be
represented as one row in the matrix, with the individual connections
represented as elements in that row.)
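A minimal sketch of that layout (assumed structure, in C): one sparse row
per fiber, one element per synapse the fiber makes inside the volume:

#include <stdio.h>
#include <stdlib.h>

/* One sparse row per axonal fiber; each element records one synapse
 * made along the fiber's length (target cell + weight).  All names
 * and numbers here are hypothetical. */
typedef struct { int target; unsigned char weight; } Conn;
typedef struct { int n; Conn *conns; } FiberRow;

int main(void) {
    FiberRow row;                        /* one of ~10^6 such rows  */
    row.n = 3;                           /* this fiber synapses 3x  */
    row.conns = malloc(row.n * sizeof *row.conns);
    row.conns[0] = (Conn){ 1042,   17 };
    row.conns[1] = (Conn){ 77315,   4 };
    row.conns[2] = (Conn){ 260008, 52 };
    for (int i = 0; i < row.n; i++)
        printf("fiber -> cell %d (w=%u)\n",
               row.conns[i].target, row.conns[i].weight);
    free(row.conns);
    return 0;
}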
If anyone is interested, I have some additional information on the C870
NVIDIA Tesla card. I'll be happy to send it to you off-list. Just
contact me directly.
Cheers,
Brad
--- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Hmmph. I offer to build anyone who wants one a
human-capacity machine for
$100K, using currently available stock parts, in one rack.
Approx 10 teraflops, using Teslas.
(http://www.nvidia.com/object/tesla_c870.html)
The software needs a little work...
Right. You're talking Kurzweil HEPP and I'm talking Moravec HEPP (and shading
that a little).
I may want your gadget when I go to upload, though.
Josh
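For scale, a back-of-envelope check on the one-rack offer; the per-card
throughput and price below are my assumptions, not quoted specs:

#include <stdio.h>

/* Sanity-check the 10-Tflops-in-one-rack figure.  Per-card flops
 * and street price are rough 2008-era guesses. */
int main(void) {
    double target   = 10e12;    /* 10 Tflops goal                  */
    double per_card = 0.5e12;   /* ~0.5 Tflops per Tesla (assumed) */
    double card_usd = 1500.0;   /* price per card (assumed)        */
    double cards = target / per_card;
    printf("cards needed: %.0f\n", cards);
    printf("card cost:    $%.0fK\n", cards * card_usd / 1e3);
    /* the rest of the $100K: host nodes, rack, interconnect, power */
    return 0;
}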
On Thursday 12 June 2008 10:59:51 am, Matt Mahoney wrote:
--- On Wed, 6/11/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Hmmph. I
Two things I think are interesting about these trends in
high-performance commodity hardware:
1) The flops/bit ratio (processing power vs. memory) is skyrocketing. The
move to parallel architectures makes the number of high-level operations per
transistor go up, but bits of memory per transistor stay roughly flat.
-- Matt Mahoney, [EMAIL PROTECTED]
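To put numbers on that ratio, a rough flops/bit comparison; both sets of
figures are order-of-magnitude assumptions, not exact product specs:

#include <stdio.h>

/* Flops-per-bit for a GPU card vs. a desktop PC, using assumed
 * round numbers.  The point is the widening ratio, not the specs. */
int main(void) {
    double gpu_flops = 0.5e12;        /* ~0.5 Tflops (assumed)   */
    double gpu_bits  = 1.5e9 * 8;     /* ~1.5 GB on-card memory  */
    double cpu_flops = 1e10;          /* ~10 Gflops (assumed)    */
    double cpu_bits  = 1e9 * 8;       /* ~1 GB main memory       */
    printf("GPU card: %6.2f flops/bit\n", gpu_flops / gpu_bits);
    printf("desktop:  %6.2f flops/bit\n", cpu_flops / cpu_bits);
    return 0;
}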
--- On Thu, 6/12/08, Derek Zahn [EMAIL PROTECTED] wrote:
From: Derek Zahn [EMAIL PROTECTED]
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
To: agi@v2.listbox.com
Date: Thursday, June 12, 2008, 11:36 AM
Two things I think
--- On Thu, 6/12/08, Mike Tintner [EMAIL PROTECTED] wrote:
Matt: I think the ratio of processing power to memory to bandwidth is
just about right for AGI.
All these calculations (which are very interesting) presume that all
computing is done in the brain. They ignore the possibility (well,
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 12, 2008 12:33 PM
To: agi@v2.listbox.com
Subject: RE: [agi] IBM, Los Alamos scientists claim fastest computer
As far as I know, GPUs are not well suited to neural net calculation. For
some applications the speedup factors are in the 1000x range, but for NNs I
have only seen speedups of about one order of magnitude (10x).
For example, see the attached paper.
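One plausible reason, sketched with assumed bandwidth figures: the core NN
kernel (weight matrix times activation vector) does only about 2 flops per
4-byte weight fetched, so it is bandwidth-bound, and GPU memory bandwidth
exceeds CPU bandwidth by roughly 10x even where raw flops differ by 100x
or more:

#include <stdio.h>

/* Roofline-style estimate for matrix-vector product, the inner loop
 * of most NN code: 1 multiply + 1 add per 4-byte weight loaded.
 * Bandwidth numbers are assumptions, not measurements. */
int main(void) {
    double flops_per_byte = 2.0 / 4.0;
    double gpu_bw = 75e9;             /* ~75 GB/s GPU memory (assumed) */
    double cpu_bw = 8e9;              /* ~8 GB/s CPU memory (assumed)  */
    printf("matvec ceiling, GPU: %.1f Gflops\n", flops_per_byte * gpu_bw / 1e9);
    printf("matvec ceiling, CPU: %.1f Gflops\n", flops_per_byte * cpu_bw / 1e9);
    printf("bandwidth-bound speedup: ~%.0fx\n", gpu_bw / cpu_bw);
    return 0;
}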
On Thu, Jun 12, 2008 at 4:59 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Thu, 6/12/08, Ed Porter [EMAIL PROTECTED] wrote:
I think processor-to-memory and inter-processor
communications are currently far short of what is needed.
Your concern is over the added cost of implementing a sparsely connected
network, which slows memory access and requires more memory for each
connection, since indices must be stored along with the weights.
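Back-of-envelope, with assumed cell and synapse counts: the sparse form pays
an index alongside every weight (at 6-bit weights, roughly 4x the bits per
stored synapse), yet still beats dense storage by orders of magnitude overall:

#include <stdio.h>

/* Dense vs. sparse storage for n cells with k synapses each.
 * All four parameters are assumptions for illustration. */
int main(void) {
    double n     = 1e6;   /* cells                          */
    double k     = 1e4;   /* synapses per cell              */
    double wbits = 6;     /* bits per weight                */
    double ibits = 20;    /* bits per index: ceil(log2(n))  */
    double dense  = n * n * wbits / 8 / 1e9;            /* GB */
    double sparse = n * k * (wbits + ibits) / 8 / 1e9;  /* GB */
    printf("dense : %8.1f GB\n", dense);
    printf("sparse: %8.1f GB\n", sparse);
    return 0;
}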