Body of the Japanese text says "TPU is an ASIC developed for Deep Learning
with 'ten' times better performance per Watt compared to other technologies
such as GPU or FPGA, according to Pichai."
So maybe it is easier to read it as "10 times over Stratix(1) or
Virtex-7/Xilinx(2)" rather than 13 times.
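For what it's worth, the 10x-vs-13x question is just a ratio of two
performance-per-Watt figures. A minimal sketch with made-up throughput and
power numbers (the article reports only the final ratio, not the underlying
figures, so everything below is a placeholder):

```python
# Illustrative sketch: a performance-per-Watt comparison is
# (throughput / power) for one chip divided by the same for another.
# All numbers here are hypothetical placeholders, not from the article.

def perf_per_watt(ops_per_sec: float, watts: float) -> float:
    """Operations per second delivered per Watt of power draw."""
    return ops_per_sec / watts

tpu_ppw = perf_per_watt(ops_per_sec=2.6e12, watts=200)  # hypothetical TPU
gpu_ppw = perf_per_watt(ops_per_sec=2.5e11, watts=250)  # hypothetical GPU

print(f"ratio: {tpu_ppw / gpu_ppw:.1f}x")  # 13.0x with these placeholders
```

So whether the headline number comes out as 10x or 13x depends entirely on
which chip is in the denominator.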
> http://itpro.nikkeibp.co.jp/atcl/column/15/061500148/051900060/
> (in Japanese). The performance/watt is about 13 times better,
> as a photo in the article shows.
Has anyone found out exactly what the "Other" in the photo is? The
Google blog was also rather vague on this.
(If you didn't click through:) Some photos here.
http://itpro.nikkeibp.co.jp/atcl/column/15/061500148/051900060/
(in Japanese). The performance/watt is about 13 times better,
as a photo in the article shows.
Hideki
Petr Baudis: <20160519105443.go22...@machine.or.cz>:
> Hi,
>
> it seems that Google in fact used TPUs for AlphaGo rather than GPUs:

The DGX-1 does not look to have shipped yet, while TPUs have been in use for
more than a year. This could be a big advantage for Google.
Hideki
Darren Cook: <573da3db.8000...@dcook.org>:
>> It'd be interesting to know what the speedup factor against, say,
>> Tesla K40 is.
>
> Or against the P100 chip [1], which claims
Hi,
it seems that Google in fact used TPUs for AlphaGo rather than GPUs:
https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html
It'd be interesting to know what the speedup factor against, say, a
Tesla K40 is. In the Nature