The body of the Japanese article says: "According to Pichai, the TPU is an
ASIC developed for deep learning, with 'ten' times better performance per
watt than other technologies such as GPUs or FPGAs."

So maybe it is easier to read this as "10 times over Stratix(1) or
Virtex-7/Xilinx(2)", rather than 13 times over some unnamed 'other' hardware?

*1 http://research.microsoft.com/pubs/240715/CNN%20Whitepaper.pdf  (see
Table 1)
*2 http://cadlab.cs.ucla.edu/~cong/slides/fpga2015_chen.pdf

Tokumoto

On Sat, May 21, 2016 at 4:17 PM, Darren Cook <[email protected]> wrote:

> > http://itpro.nikkeibp.co.jp/atcl/column/15/061500148/051900060/
> > (in Japanese).  The performance/watt is about 13 times better,
> > a photo in the article shows.
>
> Has anyone found out exactly what the "Other" in the photo is? The
> Google blog was also rather vague on this.
>
> (If you didn't click through, the chart just say "Relative TPU
> Performance/Watt", with Other being between 0 and 4, and TPU being
> between 11 and 14.)
>
> Darren
>
> _______________________________________________
> Computer-go mailing list
> [email protected]
> http://computer-go.org/mailman/listinfo/computer-go
>
_______________________________________________
Computer-go mailing list
[email protected]
http://computer-go.org/mailman/listinfo/computer-go