[ https://issues.apache.org/jira/browse/MAHOUT-1786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994051#comment-14994051 ]

ASF GitHub Bot commented on MAHOUT-1786:
----------------------------------------

Github user michellemay commented on the pull request:

    https://github.com/apache/mahout/pull/174#issuecomment-154481034
  
    Forcing the use of Kryo might not be a valid status quo for Spark 1.5+,
though.
    
    For reference, here is project Tungsten:
    
https://databricks.com/blog/2015/04/28/project-tungsten-bringing-spark-closer-to-bare-metal.html
    
    "The above chart compares the performance of shuffling 8 million complex 
rows in one thread using the Kryo serializer and a code generated custom 
serializer. The code generated serializer exploits the fact that all rows in a 
single shuffle have the same schema and generates specialized code for that. 
This made the generated version over 2X faster to shuffle than the Kryo 
version."
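
    The Kryo workaround being discussed is the standard Spark serializer
setting. A minimal sketch of what users currently have to add (e.g. in
conf/spark-defaults.conf) to run Mahout jobs, assuming no custom registrator:

    ```properties
    # Required today because some Mahout math classes are not
    # java.io.Serializable; Spark's default JavaSerializer would throw
    # NotSerializableException when shuffling them.
    spark.serializer  org.apache.spark.serializer.KryoSerializer
    ```

    Dropping this requirement is what would let users benefit from the
Tungsten code-generated paths described above.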


> Make classes implements Serializable for Spark 1.5+
> ---------------------------------------------------
>
>                 Key: MAHOUT-1786
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1786
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Math
>    Affects Versions: 0.11.0
>            Reporter: Michel Lemay
>            Priority: Minor
>              Labels: performance
>
> Spark 1.5 comes with a new, very efficient serializer that uses code 
> generation.  It is twice as fast as Kryo.  When using Mahout, we have to set 
> KryoSerializer because some classes aren't otherwise serializable.  
> I suggest declaring Math classes as "implements Serializable" where needed.  
> For instance, to use the cooccurrence package in Spark 1.5, we had to modify 
> AbstractMatrix, AbstractVector, DenseVector and SparseRowMatrix to make them 
> work without Kryo.
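
The proposed change amounts to adding the java.io.Serializable marker to the
affected classes so plain Java serialization can round-trip them without Kryo.
A self-contained sketch (DemoVector is a hypothetical stand-in for a class
like DenseVector, not Mahout's actual code):

```java
import java.io.*;

// Hypothetical stand-in for a Mahout math class such as DenseVector.
// The proposed fix is just the "implements Serializable" declaration.
class DemoVector implements Serializable {
    private static final long serialVersionUID = 1L;
    final double[] values;
    DemoVector(double[] values) { this.values = values; }
}

class SerializableDemo {
    public static void main(String[] args) throws Exception {
        DemoVector original = new DemoVector(new double[] {1.0, 2.0, 3.0});

        // Round-trip through plain Java serialization -- this is what fails
        // at runtime (NotSerializableException) when a class lacks the
        // Serializable marker and KryoSerializer is not configured.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            DemoVector copy = (DemoVector) in.readObject();
            System.out.println(copy.values[2]); // prints 3.0
        }
    }
}
```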



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
