[ https://issues.apache.org/jira/browse/MAHOUT-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884697#comment-15884697 ]
ASF GitHub Bot commented on MAHOUT-1903:
----------------------------------------

Github user andrewpalumbo commented on the issue:

    https://github.com/apache/mahout/pull/286

    There seems to be some loss of precision when performing matrix %*% vector
    multiplication on GPU. For every few orders of magnitude that I raised the
    number of elements, I had to relax epsilon by an order of magnitude. We know
    that dense algebra is not where OpenCL shines, but I am wondering if it is
    possible that vectors are being converted from fp64 to fp32. The same test,
    with the same values, when run in main memory on OpenMP gives the same
    precision as the Mahout JVM with an epsilon of 1e-16. Not a blocker IMO.
    As well, GPU vectors give us access to an entire library of native iterative
    solvers.

> Fix VCL vector %*% vector implementation
> ----------------------------------------
>
>                 Key: MAHOUT-1903
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1903
>             Project: Mahout
>          Issue Type: Bug
>    Affects Versions: 0.12.2
>            Reporter: Andrew Palumbo
>            Assignee: Andrew Palumbo
>             Fix For: 0.13.0
>
>
> Vector %*% vector and vector %*% Matrix need to have memory allocation
> re-written. Currently they are commented out in tests.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
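Not part of the original thread: a minimal, self-contained Scala sketch (hypothetical object name and sizes) of the fp32-vs-fp64 accumulation effect the comment above suspects. If the VCL path were downcasting fp64 data to fp32, the relative error of a dot product would grow with the number of elements, which is consistent with having to relax epsilon as the element count increases.

    // Hypothetical illustration, not Mahout or ViennaCL code.
    object Fp32VsFp64DotProduct {
      def main(args: Array[String]): Unit = {
        val rnd = new scala.util.Random(1234)
        for (n <- Seq(1000, 100000, 10000000)) {
          val a = Array.fill(n)(rnd.nextDouble())
          val b = Array.fill(n)(rnd.nextDouble())

          // Reference: accumulate the dot product in double precision (fp64).
          var ref = 0.0
          var i = 0
          while (i < n) { ref += a(i) * b(i); i += 1 }

          // Simulated downcast path: inputs and accumulator in fp32.
          var f: Float = 0.0f
          i = 0
          while (i < n) { f += a(i).toFloat * b(i).toFloat; i += 1 }

          // Relative error grows with n, roughly an order of magnitude
          // for every few orders of magnitude more elements.
          val relErr = math.abs(ref - f) / math.abs(ref)
          println(f"n=$n%8d  relative error = $relErr%.3e")
        }
      }
    }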