Hi,
as I want to buy a graphics card for CNN: do I need double precision
performance? I am giving caffe (http://caffe.berkeleyvision.org/) a try, and
as far as I understood, most of it is done in single precision?!
You get comparable single precision performance from NVIDIA (as caffe uses
CUDA, I am looking at NVIDIA) for
No, you don't need double precision at all.
Álvaro.
On Thu, Dec 25, 2014 at 5:00 AM, Detlef Schmicker d...@physik.de wrote:
Hi,
as I want to buy a graphics card for CNN: do I need double precision
performance? I am giving caffe (http://caffe.berkeleyvision.org/) a try, and
as far as I understood
Personally, I was thinking of experimenting with ints, bytes, and shorts, even
less precise than singles :-)

Computer-go mailing list
Computer-go@computer-go.org
You are going to be computing gradients of functions, and most people find
it easier to think about these things using a type that roughly corresponds
to the notion of real number. You can use a fixed-point representation of
reals, which uses ints in the end, but then you have to worry about what
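As a rough illustration of the bookkeeping that fixed-point arithmetic entails (this sketch, and its choice of 16 fractional bits, are my own illustration and not anything from caffe):

```python
# Minimal fixed-point sketch: reals stored as scaled integers.
# FRAC_BITS is an arbitrary choice; picking it is part of what
# you "have to worry about" with fixed point.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    """Encode a real as a scaled integer."""
    return int(round(x * SCALE))

def to_real(q: int) -> float:
    """Decode a scaled integer back to a float."""
    return q / SCALE

def fx_mul(a: int, b: int) -> int:
    # Multiplying two scaled values doubles the scale,
    # so we must shift back down and accept the lost bits.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(0.5), to_fixed(0.25)
print(to_real(fx_mul(a, b)))  # 0.125
```

Addition works directly on the scaled values; it is multiplication (and overflow, and choosing the scale to cover the dynamic range of your gradients) that makes this more fiddly than just using floats.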
You can do some GPU experiments on Amazon AWS before you buy, at 65 cents per hour.
David
http://aws.amazon.com/ec2/instance-types/
G2
This family includes G2 instances intended for graphics and general purpose GPU
compute applications.
Features:
High Frequency Intel Xeon E5-2670 (Sandy Bridge)
Hi Aja,
Couple of questions:
1. connectivity, number of parameters
Just to check: each filter connects to all the feature maps below it,
is that right? I tried to check that by ball-park estimating the number
of parameters in that case, and comparing to the corresponding paragraph
in your section 4.
This is my guess as to what the number of parameters actually is:
First layer: 128 * (5*5*36 + 19*19) (128 filters of size 5x5 on 36 layers
of input, position-dependent biases)
11 hidden layers: 11 * 128 * (3*3*128 + 19*19) (128 filters of size 3x3 on
128 layers of input, position-dependent biases)
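The arithmetic in that guess is easy to check directly (the layer sizes below are taken from the estimate in this message; whether they match the paper is exactly the open question):

```python
# Parameter-count check for the guessed architecture:
# 36 input planes, 19x19 board, position-dependent biases per filter.
board = 19 * 19  # one bias per board position, per filter

# First layer: 128 filters of size 5x5 over 36 input planes
first = 128 * (5 * 5 * 36 + board)

# 11 hidden layers: 128 filters of size 3x3 over 128 planes each
hidden = 11 * 128 * (3 * 3 * 128 + board)

print(first, hidden, first + hidden)  # 161408 2130304 2291712
```

So this guess gives roughly 2.3M parameters, which is the number to compare against section 4.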