Steven wrote:
> http://arxiv.org/abs/1412.6564 (nvidia gtx titan black)
> http://arxiv.org/abs/1412.3409 (nvidia gtx 780)

Thanks - I had read those papers but hadn't realized the neural nets
were run on GPUs.

Nikos wrote:
>> https://timdettmers.wordpress.com/2015/03/09/deep-learning-hardware-guide/

This was very useful, thanks!

>> As for hardware breakthroughs, Nvidia has announced that its next
>> generation GPUs (codenamed Pascal) will offer 10x the performance in 2016,
>> so you might want to wait a little more.

One of the comments on the above blog questions that 10x speed-up:

https://timdettmers.wordpress.com/2015/03/09/deep-learning-hardware-guide/comment-page-1/#comment-336

Confirmed here:
  http://blogs.nvidia.com/blog/2015/03/17/pascal/

So, currently they use 32-bit floats, rather than 64-bit doubles, and
will reduce that to 16 bits to get a 2x speed-up. Assuming they've
been listening to their customers properly, that must mean 16-bit
floats are good enough for neural nets?
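For what it's worth, a quick NumPy sketch (my own toy illustration, not
from the papers above) shows how little a single dense-layer forward
pass loses when the weights and activations are cast to float16:

```python
import numpy as np

# Toy illustration: forward pass of one dense layer in float32 vs
# float16, to get a rough feel for the precision lost at 16 bits.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)        # activations
w = (rng.standard_normal((256, 128)) * 0.05).astype(np.float32)  # weights

y32 = x @ w                                                  # float32 reference
y16 = (x.astype(np.float16) @ w.astype(np.float16)).astype(np.float32)

# Normalised error of the half-precision result vs the reference.
rel_err = np.linalg.norm(y32 - y16) / np.linalg.norm(y32)
print(f"relative error of float16 pass: {rel_err:.4f}")
```

On toy numbers like these the relative error stays well under a percent,
which is consistent with the idea that inference, at least, tolerates
16-bit floats; whether training does too is a separate question.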

Darren

_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go