On Thu, Jul 28, 2016 at 11:40 AM, C Bergström <[email protected]> wrote:
> The original quote of 2x was using a comparison against a GTX1080 -

I'm not the one who mentioned GP100, am I?

> Quote some numbers for that.

Sure: 8.88 TFLOPS FP32. FP64 and FP16 are pretty much nonexistent
(something like 0.1-0.2 TFLOPS each), which is understandable given the
market it's aimed at (who needs anything other than FP32 in gaming?). So
regarding FP32 we're not at 2.0x, granted, more like 1.75x.

> The GP100 isn't really fair because mere mortals don't have access to
> it.. I'd like to see a show of hands for people outside of a tiny
> circle who do..

Sure, that's very true.

> AMD's s9300 is meant to compete against the Pascal class hardware, but
> they haven't released the single asic part yet. Lets see those
> numbers..

In March, AMD mentioned 13.9 TFLOPS FP32 for the S9300 x2, which would
certainly be interesting if it comes to that. But only 0.8 TFLOPS FP64, so
it wouldn't be a great alternative to the GP100 for HPC (not talking MD or
deep learning here, more traditional CFD-like workloads). Especially if
you consider that it's a dual-chip card.

Cheers,
--
Kilian

_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
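For what it's worth, the 8.88 TFLOPS FP32 figure quoted above can be reproduced from the usual peak-throughput formula (cores x clock x 2 FLOPs per cycle for FMA). A minimal sketch, assuming the commonly listed GTX 1080 specs of 2560 CUDA cores and a 1733 MHz boost clock (neither number appears in this thread):

```python
# Theoretical peak FP32 throughput: cores x clock x 2 FLOPs/cycle (FMA).
# GTX 1080 specs below are assumptions (publicly listed), not from this thread.

def peak_fp32_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

gtx1080 = peak_fp32_tflops(cores=2560, clock_ghz=1.733)
print(f"GTX 1080 peak FP32: {gtx1080:.2f} TFLOPS")  # ~8.87, matching the ~8.88 quoted
```

The same formula applied to a card with roughly half the shader throughput is where a ~1.75-2x ratio comes from; peak numbers like these are of course upper bounds, not sustained performance.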
