I think all of the code uses double-precision floats. I imagine much of it
would work just as well with single-precision floats.
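
For example, the math layer is typed around double throughout. A quick
sketch (just an illustration, assuming the org.apache.mahout.math classes
from a 0.7/0.8-era Mahout, not code from the k-means job itself):

    import org.apache.mahout.math.DenseVector;
    import org.apache.mahout.math.Vector;

    public class PrecisionCheck {
        public static void main(String[] args) {
            // DenseVector is backed by a double[] and Vector.get/dot
            // are defined in terms of double, so the precision is fixed
            // by the API, not by the individual algorithms.
            Vector v = new DenseVector(new double[] {1.0, 2.0, 3.0});
            double dot = v.dot(v);      // returns double: 14.0
            System.out.println(dot);
        }
    }

So moving to single precision on a GPU would mean converting at the
boundary rather than switching a flag.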

MapReduce and a GPU are very different things though, and I'm not sure how
you would use both together effectively.


On Wed, Feb 20, 2013 at 7:10 AM, shruti ranade <[email protected]> wrote:

> Hi,
>
> I am a beginner in Mahout. I am working on the k-means MR implementation and
> trying to run it on a GPGPU. *I wanted to know if Mahout computations are
> all double precision or single precision.*
>
> Please suggest any documentation that I should refer to.
>
> Thanks,
> Shruti
>
