Hi Patrick,
Thanks for your reply.
I am guessing that even array types will be registered automatically. Is this
correct?
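As a side note, a plain-Scala check (no Spark needed) of why array types are worth asking about: each `Array[T]` is its own runtime class, separate from `T`, so it is a separate candidate for serializer registration. The object name below is illustrative only.

```scala
// Plain-Scala sketch: an array type has its own JVM class, distinct from
// its element type, so registering the element class does not obviously
// cover the array class. Object name is hypothetical.
object ArrayClassCheck {
  def main(args: Array[String]): Unit = {
    println(classOf[Array[Long]].getName)          // prints "[J" (JVM name for long[])
    println(classOf[Long].getName)                 // prints "long"
    println(classOf[Array[Long]] == classOf[Long]) // prints "false"
  }
}
```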
Thanks,
Pradeep
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Kryo-serialization-does-not-compress-tp2042p2400.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
row._1._3), row._2))
>
> var dataRdd: RDD[((Array[Long], String, Array[String]), (String, Array[Float]))] =
>   allArrays.map(row => ((row._1._1, row._1._2,
>     row._1._3.map(x => x match {
>       case str: String => str
>       case _ => println("unknown data type " + x + " : "); new String("")
>     }).toArray), row._2))
>
> dataRdd = dataRdd
>   .partitionBy(new HashPartitioner(64))
>   .persist(StorageLevel.MEMORY_ONLY_SER)
>
> dataRdd.count()
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Kryo-serialization-does-not-compress-tp2042p2347.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
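Regarding the array-registration question above, a configuration sketch (my assumption, not something confirmed in this thread): the array classes used by the quoted RDD could be registered explicitly through a custom KryoRegistrator. The class name here is hypothetical, and Spark plus Kryo must be on the classpath.

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical registrator: registers each array type appearing in the
// RDD element type explicitly, in case array classes are not picked up
// automatically.
class ExplicitArrayRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[Array[Long]])
    kryo.register(classOf[Array[String]])
    kryo.register(classOf[Array[Float]])
  }
}
```

It would be enabled by setting `spark.kryo.registrator` to the registrator's fully qualified class name.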
…working for MEMORY_ONLY?
Is there anything else we need to do for MEMORY_ONLY to get it compressed?
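For reference, a configuration sketch (my assumption, not confirmed in this thread): `spark.rdd.compress` controls compression of serialized RDD partitions, so it takes effect with the `*_SER` storage levels; MEMORY_ONLY stores deserialized objects on the heap, which would explain why it is not compressed.

```scala
import org.apache.spark.SparkConf

// Sketch: enable Kryo serialization and compression of serialized RDD
// partitions. spark.rdd.compress applies to serialized storage levels
// such as MEMORY_ONLY_SER, not to MEMORY_ONLY.
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.rdd.compress", "true")
```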
Thanks,
Pradeep
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Kryo-serialization-does-not-compress-tp2042.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.