Everything you mentioned, apart from computational savings AFAIK. (Half is used extensively in the visual effects industry.)
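To make the size/precision trade-off concrete, here is a minimal sketch of the float32 → binary16 bit conversion and back (hypothetical helpers, not part of any proposal; rounding is truncation rather than round-to-nearest-even, and encoding of subnormals is omitted for brevity):

```javascript
// Pack a JS number into IEEE 754 half-precision (binary16) bits.
function toHalfBits(value) {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = value;
  const bits = u32[0];
  const sign = (bits >>> 16) & 0x8000;
  let exp = (bits >>> 23) & 0xff;
  const mant = bits & 0x7fffff;
  if (exp === 0xff) return sign | 0x7c00 | (mant ? 0x200 : 0); // Inf/NaN
  exp = exp - 127 + 15;                   // rebias exponent: float32 -> float16
  if (exp >= 0x1f) return sign | 0x7c00;  // overflow -> +/-Infinity
  if (exp <= 0) return sign;              // underflow -> +/-0 (subnormals dropped)
  return sign | (exp << 10) | (mant >> 13);
}

// Unpack binary16 bits back into a JS number.
function fromHalfBits(h) {
  const sign = (h & 0x8000) ? -1 : 1;
  const exp = (h >>> 10) & 0x1f;
  const mant = h & 0x3ff;
  if (exp === 0) return sign * mant * Math.pow(2, -24);    // subnormal
  if (exp === 0x1f) return mant ? NaN : sign * Infinity;   // Inf/NaN
  return sign * (1 + mant / 1024) * Math.pow(2, exp - 15);
}
```

Values like 1.0 and 0.5 round-trip exactly, while anything at or above 65536 saturates to Infinity, which is exactly the range/precision many texture and VFX workloads are happy to give up for half the bytes.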
On Thursday, 30 July 2015, Sebastian Markbåge <[email protected]> wrote:

> On Wed, Jul 29, 2015 at 4:05 PM, Alexander Jones <[email protected]> wrote:
>
>> In case it's not obvious, faster DMA and larger buffer/texture capacity
>> vs. float32. Many applications benefit hugely from having floating point
>> data but certainly do not need float32's range and precision - for those,
>> half/float16 is a great choice.
>
> It is not obvious exactly what part you're targeting. E.g. is it texture
> transfer over the network that is the biggest saving? Is it
> transfer/conversion cost to GPU memory? Is it computational complexity in
> the shader? Is it the fact that you can fit more data into GPU memory,
> i.e. you run out of space later?
>
> Just because it is in the OpenGL spec doesn't mean that it is actually
> useful on modern hardware, since implementations are theoretically free
> to expand it to full float. Maybe they don't; I don't know. That's my
> question.
>
> There is also the open question of whether this is used in practice.
> E.g. float64 for SIMD was deemed not important.
>
> So, no, it is not obvious to some of us who are not continuously in this
> world, and my question hasn't been answered. I think it is plausible and
> I would like to help out, but we need to clarify exactly where this is a
> benefit and why that benefit isn't going away.
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss

