Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-28 Thread minh thu
2009/9/28 Casey Hawthorne cas...@istar.ca:
 On Mon, 28 Sep 2009 12:06:47 +1300, you wrote:


On Sep 28, 2009, at 9:40 AM, Olex P wrote:

 Hi,

 Yes, I mean sizeOf 2. It's useful not only on GPUs but also in
 normal software. Think of huge data sets in computer graphics
 (particle clouds, volumetric data, images etc.) Some data (normals,
 density, temperature and so on) can be easily represented as float
 16 making files 200 GB instead of 300 GB. Good benefits.

 From the OpenEXR technical introduction:

   half numbers have 1 sign bit, 5 exponent bits,
   and 10 mantissa bits.  The interpretation of
   the sign, exponent and mantissa is analogous
   to IEEE-754 floating-point numbers.  half
   supports normalized and denormalized numbers,
   infinities and NANs (Not A Number).  The range
   of representable numbers is roughly 6.0E-8 to 6.5E4;
   numbers smaller than 6.1E-5 are denormalized.

Single-precision floats are already dangerously short for
many computations.  (Oh the dear old B6700 with 39 bits of
precision in single-precision floats...)  Half-precision
floats actually have less than half the precision of singles
(11 bits instead of 23).  It's probably best to think of
binary 16 as a form of compression for Float, and to write
stuff that will read half-precision from a binary stream as
single-precision, and conversely stuff that will accept
single-precision values and write them to a binary stream in
half-precision form.


 I agree with the above.

 I hadn't realized how dangerously short for many computations
 single-precision is.

 So, as he says, for computing, you do want to convert half-precision
 to single-precision, if not double-precision.

 If you want to save storage space, then some sort of compression
 scheme might be better on secondary storage.

 As for the video card, some sort of fast decompression scheme would be
 necessary for the half-precision numbers coming in.

'Halfs', as they are called, are supported on GPUs; half-precision
floating point is a core feature of OpenGL 3.0.

As said above, they are merely a data storage format and should be
converted to floats or doubles before any computation.

Cheers,
Thu


Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-28 Thread Henning Thielemann


On Sun, 27 Sep 2009, Olex P wrote:


Hi guys,

Do we have anything like half-precision floats in Haskell? Maybe in some
non-standard libraries? Or do I have to use the FFI + the OpenEXR library
to achieve this?


If you only want to save storage, you may define

newtype Float16 = Float16 Int16

and write Num, Fractional and Floating instances that convert the operands to
Float, perform the operations on Float and pack the results back into the Int16.
With some fusion you might save conversions, but you would also get different
results due to the higher intermediate precision.
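
For instance, a rough sketch of that shape could look as follows (toFloat and
fromFloat are left undefined here; they would do the actual binary16 unpacking
and packing, e.g. via the bit-level or FFI routines discussed elsewhere in this
thread):

import Data.Int (Int16)

newtype Float16 = Float16 Int16
  deriving (Eq, Show)   -- a real implementation would compare and show via Float

-- Placeholders for the actual binary16 unpacking/packing.
toFloat :: Float16 -> Float
toFloat = undefined

fromFloat :: Float -> Float16
fromFloat = undefined

-- Each operation converts to Float, computes there, and packs the result back.
instance Num Float16 where
  a + b       = fromFloat (toFloat a + toFloat b)
  a - b       = fromFloat (toFloat a - toFloat b)
  a * b       = fromFloat (toFloat a * toFloat b)
  abs         = fromFloat . abs . toFloat
  signum      = fromFloat . signum . toFloat
  fromInteger = fromFloat . fromInteger

instance Fractional Float16 where
  a / b        = fromFloat (toFloat a / toFloat b)
  fromRational = fromFloat . fromRational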



Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Ross Mellgren

What about the built-in Float type?

Prelude Foreign.Storable> sizeOf (undefined :: Float)
4
Prelude Foreign.Storable> sizeOf (undefined :: Double)
8

Or maybe you mean something that can be used with FFI calls to C, in
which case Foreign.C.Types provides CFloat.

Both are instances of the Floating, RealFloat, RealFrac, etc. classes, so
they should behave largely the same as a Double (modulo precision).


-Ross

On Sep 27, 2009, at 2:42 PM, Olex P wrote:


Hi guys,

Do we have anything like half-precision floats in Haskell? Maybe in
some non-standard libraries? Or do I have to use the FFI + the OpenEXR
library to achieve this?


Cheers,
Oleksandr.


Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Peter Verswyvelen
He meant 16-bit floats, which have sizeOf 2.
On GPUs this is common and implemented in hardware (at least on the old
GPUs).

On DSPs you commonly had 24-bit floats too.

But these days I guess 32-bit is the minimum one would want to use? Most of
the time I just use double anyway :)

On Sun, Sep 27, 2009 at 9:47 PM, Ross Mellgren rmm-hask...@z.odi.ac wrote:

 What about the built-in Float type?

 Prelude Foreign.Storable> sizeOf (undefined :: Float)
 4
 Prelude Foreign.Storable> sizeOf (undefined :: Double)
 8

 Or maybe you mean something that can be used with FFI calls to C, in which
 case Foreign.C.Types (CFloat).

 Both instance the Floating, RealFloat, RealFrac, etc, classes so should
 operate largely the same as (modulo precision) a Double.

 -Ross


 On Sep 27, 2009, at 2:42 PM, Olex P wrote:

  Hi guys,

 Do we have anything like half-precision floats in Haskell? Maybe in some
 non-standard libraries? Or do I have to use the FFI + the OpenEXR library to
 achieve this?

 Cheers,
 Oleksandr.


Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Ross Mellgren

Oh sorry, I misread the original question. I take it all back!

-Ross

On Sep 27, 2009, at 4:19 PM, Peter Verswyvelen wrote:


He meant 16-bit floats, which have sizeOf 2

On GPUs this is common and implemented in hardware (at least on the  
old GPUs).


On DSPs you commonly had 24-bit floats too.

But these days I guess 32-bit is the minimum one would want to use?  
Most of the time I just use double anyway :)


On Sun, Sep 27, 2009 at 9:47 PM, Ross Mellgren rmm- 
hask...@z.odi.ac wrote:

What about the built-in Float type?

Prelude Foreign.Storable> sizeOf (undefined :: Float)
4
Prelude Foreign.Storable> sizeOf (undefined :: Double)
8

Or maybe you mean something that can be used with FFI calls to C, in  
which case Foreign.C.Types (CFloat).


Both instance the Floating, RealFloat, RealFrac, etc, classes so  
should operate largely the same as (modulo precision) a Double.


-Ross


On Sep 27, 2009, at 2:42 PM, Olex P wrote:

Hi guys,

Do we have anything like half-precision floats in Haskell? Maybe in
some non-standard libraries? Or do I have to use the FFI + the OpenEXR
library to achieve this?


Cheers,
Oleksandr.


Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Olex P
Hi,

Yes, I mean sizeOf 2. It's useful not only on GPUs but also in normal
software. Think of huge data sets in computer graphics (particle clouds,
volumetric data, images etc.). Some data (normals, density, temperature and
so on) can easily be represented as 16-bit floats, making files 200 GB instead of
300 GB. Good benefits.

Cheers,
Oleksandr.


On Sun, Sep 27, 2009 at 9:19 PM, Peter Verswyvelen bugf...@gmail.com wrote:

 He meant 16-bit floats, which have sizeOf 2
 On GPUs this is common and implemented in hardware (at least on the old
 GPUs).

 On DSPs you commonly had 24-bit floats too.

 But these days I guess 32-bit is the minimum one would want to use? Most of
 the time I just use double anyway :)

 On Sun, Sep 27, 2009 at 9:47 PM, Ross Mellgren rmm-hask...@z.odi.ac wrote:

 What about the built-in Float type?

 Prelude Foreign.Storable> sizeOf (undefined :: Float)
 4
 Prelude Foreign.Storable> sizeOf (undefined :: Double)
 8

 Or maybe you mean something that can be used with FFI calls to C, in which
 case Foreign.C.Types (CFloat).

 Both instance the Floating, RealFloat, RealFrac, etc, classes so should
 operate largely the same as (modulo precision) a Double.

 -Ross


 On Sep 27, 2009, at 2:42 PM, Olex P wrote:

  Hi guys,

 Do we have anything like half-precision floats in Haskell? Maybe in some
 non-standard libraries? Or do I have to use the FFI + the OpenEXR library to
 achieve this?

 Cheers,
 Oleksandr.



Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Casey Hawthorne
I think a 16-bit float type would require compiler revisions as
opposed to doing something within the present type classes.

This is similar to how Java would benefit from an unsigned byte
primitive type for processing images, etc., whereas Haskell already
has Word8.
--
Regards,
Casey


Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread wren ng thornton

Olex P wrote:

Hi,

Yes, I mean sizeOf 2. It's useful not only on GPUs but also in normal
software. Think of huge data sets in computer graphics (particle clouds,
volumetric data, images etc.) Some data (normals, density, temperature and
so on) can be easily represented as float 16 making files 200 GB instead of
300 GB. Good benefits.


I think, if you're going to want any kind of performance and
portability, then you'll have to use the FFI to wrap some C code that
performs the primops. From there you can define instances for
Floating, RealFloat, etc. so the type can be used like a normal type in Haskell.

There are a number of embedded systems that still use 24-bit floating-point
registers, so it'd be nice to provide both Float16 and Float24. But
since these aren't natively supported in C, it's not clear how best to
write the primops so they're portable across GPUs and embedded systems.
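
A minimal sketch of what such an FFI binding might look like, assuming two
small C helpers written by hand or as thin wrappers around the OpenEXR half
class (the symbol names hs_half_to_float and hs_float_to_half are made up for
illustration and do not refer to an existing library):

{-# LANGUAGE ForeignFunctionInterface #-}
module Half (Half(..), toFloat, fromFloat) where

import Data.Word (Word16)

-- Hypothetical C helpers performing the actual binary16 conversion.
foreign import ccall unsafe "hs_half_to_float"
  c_halfToFloat :: Word16 -> Float

foreign import ccall unsafe "hs_float_to_half"
  c_floatToHalf :: Float -> Word16

-- The Haskell side only stores the raw 16-bit pattern.
newtype Half = Half Word16

toFloat :: Half -> Float
toFloat (Half w) = c_halfToFloat w

fromFloat :: Float -> Half
fromFloat = Half . c_floatToHalf

The Floating, RealFloat, etc. instances would then be written in terms of
toFloat and fromFloat, as in the newtype sketch earlier in the thread.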


--
Live well,
~wren


Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Olex P
Okay looks like FFI is the only way to go, Thanks.

Cheers,
Oleksandr.

On Sun, Sep 27, 2009 at 9:50 PM, wren ng thornton w...@freegeek.org wrote:

 Olex P wrote:

 Hi,

 Yes, I mean sizeOf 2. It's useful not only on GPUs but also in normal
 software. Think of huge data sets in computer graphics (particle clouds,
 volumetric data, images etc.) Some data (normals, density, temperature and
 so on) can be easily represented as float 16 making files 200 GB instead
 of
 300 GB. Good benefits.


 I think, if you're going to want any kind of performance and portability,
 then you'll have to use the FFI to wrap some C code that performs the
 primops. From there you can define the instances for Floating, RealFloat,
 etc. to use them like normal types in Haskell.

 There are a number of embedded systems that still use 24-bit floating
 registers, so it'd be nice to provide both Float16 and Float24. But since
 these aren't natively supported in C, it's not clear how best to write the
 primops so they're portable across GPUs and embedded systems.

 --
 Live well,
 ~wren



Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Richard O'Keefe


On Sep 28, 2009, at 9:40 AM, Olex P wrote:


Hi,

Yes, I mean sizeOf 2. It's useful not only on GPUs but also in  
normal software. Think of huge data sets in computer graphics  
(particle clouds, volumetric data, images etc.) Some data (normals,  
density, temperature and so on) can be easily represented as float  
16 making files 200 GB instead of 300 GB. Good benefits.


From the OpenEXR technical introduction:

half numbers have 1 sign bit, 5 exponent bits,
and 10 mantissa bits.  The interpretation of
the sign, exponent and mantissa is analogous
to IEEE-754 floating-point numbers.  half
supports normalized and denormalized numbers,
infinities and NANs (Not A Number).  The range
of representable numbers is roughly 6.0E-8 to 6.5E4;
numbers smaller than 6.1E-5 are denormalized.

Single-precision floats are already dangerously short for
many computations.  (Oh the dear old B6700 with 39 bits of
precision in single-precision floats...)  Half-precision
floats actually have less than half the precision of singles
(11 bits instead of 23).  It's probably best to think of
binary 16 as a form of compression for Float, and to write
stuff that will read half-precision from a binary stream as
single-precision, and conversely stuff that will accept
single-precision values and write them to a binary stream in
half-precision form.
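
As a concrete illustration of the read-half-as-single direction, a pure
bit-level decoder might look roughly like this (halfToFloat is a made-up name;
it assumes the 16 bits arrive as a Word16 and makes no attempt to preserve NaN
payloads):

import Data.Bits ((.&.), shiftR, testBit)
import Data.Word (Word16)

-- Decode an IEEE-754 binary16 bit pattern into a Float.
halfToFloat :: Word16 -> Float
halfToFloat h
  | e == 0    = sign * m * 2 ** (-24)                     -- zero or denormal
  | e == 31   = if m == 0 then sign * (1 / 0) else 0 / 0  -- infinity or NaN
  | otherwise = sign * (1 + m / 1024) * 2 ** fromIntegral (e - 15)
  where
    sign = if testBit h 15 then -1 else 1
    e    = fromIntegral ((h `shiftR` 10) .&. 0x1F) :: Int
    m    = fromIntegral (h .&. 0x3FF) :: Float

The reverse direction (Float to binary16) is similar in spirit but has to
choose a rounding mode and handle overflow to infinity, so it is omitted here.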




Re: [Haskell-cafe] 16 bit floating point data in Haskell?

2009-09-27 Thread Casey Hawthorne
On Mon, 28 Sep 2009 12:06:47 +1300, you wrote:


On Sep 28, 2009, at 9:40 AM, Olex P wrote:

 Hi,

 Yes, I mean sizeOf 2. It's useful not only on GPUs but also in  
 normal software. Think of huge data sets in computer graphics  
 (particle clouds, volumetric data, images etc.) Some data (normals,  
 density, temperature and so on) can be easily represented as float  
 16 making files 200 GB instead of 300 GB. Good benefits.

 From the OpenEXR technical introduction:

   half numbers have 1 sign bit, 5 exponent bits,
   and 10 mantissa bits.  The interpretation of
   the sign, exponent and mantissa is analogous
   to IEEE-754 floating-point numbers.  half
   supports normalized and denormalized numbers,
   infinities and NANs (Not A Number).  The range
   of representable numbers is roughly 6.0E-8 to 6.5E4;
   numbers smaller than 6.1E-5 are denormalized.

Single-precision floats are already dangerously short for
many computations.  (Oh the dear old B6700 with 39 bits of
precision in single-precision floats...)  Half-precision
floats actually have less than half the precision of singles
(11 bits instead of 23).  It's probably best to think of
binary 16 as a form of compression for Float, and to write
stuff that will read half-precision from a binary stream as
single-precision, and conversely stuff that will accept
single-precision values and write them to a binary stream in
half-precision form.


I agree with the above.

I hadn't realized how dangerously short single-precision is for many
computations.

So, as he says, for computing, you do want to convert half-precision
to single-precision, if not double-precision.

If you want to save storage space, then some sort of compression
scheme might be better on secondary storage.

As for the video card, some sort of fast decompression scheme would be
necessary for the half-precision numbers coming in.

You are probably in the realm of DSP.

--
Regards,
Casey