(I agree, it's quite a useful approach -- I was answering the question
about whether there was any such thing in Mahout. This all assumes you
can fit the data in GPU memory, but that is true for moderately
large data sets.)
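(For a rough sense of what "fits in GPU memory" means here, a back-of-the-envelope sketch; all sizes and the 4-byte-float assumption below are illustrative, not from the thread:)

```python
# Back-of-the-envelope check: does a factorization model plus its
# ratings data fit in GPU memory? Sizes below are made-up examples.

def factor_model_bytes(n_users, n_items, k, bytes_per_float=4):
    """Memory for the two dense factor matrices (users x k and items x k)."""
    return (n_users + n_items) * k * bytes_per_float

def ratings_bytes(n_ratings, bytes_per_entry=12):
    """Sparse ratings stored as (user id, item id, value) triples: 4+4+4 bytes."""
    return n_ratings * bytes_per_entry

# A "moderately large" data set: 10M users, 1M items, 100M ratings, rank 50.
model = factor_model_bytes(10_000_000, 1_000_000, 50)
data = ratings_bytes(100_000_000)
total_gb = (model + data) / 1e9
print(f"model: {model/1e9:.1f} GB, ratings: {data/1e9:.1f} GB, total: {total_gb:.1f} GB")
```

At these (assumed) sizes the whole problem is a few gigabytes, which is why it can fit on a single card even though the raw user base sounds large.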

On Mon, Jul 9, 2012 at 9:04 AM, Manuel Blechschmidt
<[email protected]> wrote:
> Hi Mohsen, hello Sean,
> there is already a lot of research going on into doing recommendations,
> especially matrix factorization, on GPUs:
>
> e.g.
> http://www.slideshare.net/NVIDIA/1034-gtc09
> 20x - 300x faster
> or
> http://www.multicoreinfo.com/research/papers/2009/ipdps09-lahabar.pdf
> 60x faster than MATLAB
> 1.41x - 17x faster than Intel MKL
>
> So basically it has already been proven that, from a number-crunching
> perspective, GPUs are the way to go. Nevertheless there are a lot of other
> factors that have to be incorporated, e.g. graphics memory. The main trend for
> doing real-time recommendations is currently to put all the data into memory
> (http://notes.matthiasb.com/post/7423754826/hunch-graph-database) and then
> use it directly from there. I don't know if there are already graphics cards
> with 1 terabyte of graphics memory.
>
> So in the end, a semi-real-time approach combining batch and real-time
> processing parts is currently deployed, e.g. at Yahoo:
> http://users.cis.fiu.edu/~lzhen001/activities/KDD_USB_key_2010/docs/p703.pdf
>
> Have a great day
>     Manuel
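(The matrix factorization being discussed can be sketched on the CPU in a few lines; the GPU work linked above runs essentially these same SGD updates in parallel on device memory. All sizes, hyperparameters, and the toy data below are illustrative assumptions, not from any of the cited papers:)

```python
import numpy as np

# Minimal SGD matrix factorization sketch: learn user factors P and
# item factors Q so that P[u] . Q[i] approximates rating r(u, i).
rng = np.random.default_rng(42)
n_users, n_items, k = 50, 40, 5

# Toy ratings as (user, item, value) triples in the 1..5 range.
ratings = [(int(rng.integers(n_users)), int(rng.integers(n_items)),
            float(rng.integers(1, 6))) for _ in range(500)]

P = 0.1 * rng.standard_normal((n_users, k))  # user factor matrix
Q = 0.1 * rng.standard_normal((n_items, k))  # item factor matrix
lr, reg = 0.01, 0.02  # learning rate and L2 regularization (assumed values)

def rmse():
    """Root-mean-square error of the current factors on the training triples."""
    err = [(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]
    return (sum(err) / len(err)) ** 0.5

before = rmse()
for _ in range(50):  # epochs
    for u, i, r in ratings:
        e = r - P[u] @ Q[i]                  # prediction error for this rating
        P[u] += lr * (e * Q[i] - reg * P[u])  # gradient step on user factors
        Q[i] += lr * (e * P[u] - reg * Q[i])  # gradient step on item factors
after = rmse()
print(f"training RMSE before: {before:.2f}, after: {after:.2f}")
```

A GPU implementation keeps P, Q, and the ratings in device memory and batches these updates, which is exactly where the memory-capacity concern in the email comes from.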
