On Fri, Jan 3, 2014 at 8:25 PM, Guillaume Pitel
<[email protected]>wrote:

>  Have you tried the mapred.* properties? If saveAsObjectFile uses
> saveAsSequenceFile, maybe it uses the old API?
>

Neither the spark.hadoop.mapred.* nor the spark.hadoop.mapreduce.* properties
enable compression with saveAsObjectFile. (Using Spark 0.8.1.)
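
For what it's worth, here is a rough workaround sketch (mine, untested; the
output path and codec are placeholders, and it assumes the 0.8.x
saveAsHadoopFile overload that takes a JobConf): saveAsObjectFile batches
records, Java-serializes each batch, and writes (NullWritable, BytesWritable)
pairs to a SequenceFile, so you can reproduce that layout yourself and hand an
explicitly configured JobConf to the old-API saveAsHadoopFile, which is the
path where compression reportedly does work.

import java.io.{ByteArrayOutputStream, ObjectOutputStream}

import org.apache.hadoop.io.{BytesWritable, NullWritable, SequenceFile}
import org.apache.hadoop.io.compress.GzipCodec
import org.apache.hadoop.mapred.{FileOutputFormat, JobConf, SequenceFileOutputFormat}
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object CompressedObjectDump {

  // Plain Java serialization of one batch of records, mirroring what
  // saveAsObjectFile does internally.
  private def serialize(obj: AnyRef): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val oos = new ObjectOutputStream(bos)
    oos.writeObject(obj)
    oos.close()
    bos.toByteArray
  }

  def main(args: Array[String]) {
    val sc = new SparkContext("local", "compressed-object-dump")
    val rdd = sc.parallelize(1 to 10000)

    // Configure compression explicitly on the JobConf handed to the old API.
    val conf = new JobConf(sc.hadoopConfiguration)
    FileOutputFormat.setCompressOutput(conf, true)
    FileOutputFormat.setOutputCompressorClass(conf, classOf[GzipCodec])
    SequenceFileOutputFormat.setOutputCompressionType(conf,
      SequenceFile.CompressionType.BLOCK)

    // Batch, serialize, and write (NullWritable, BytesWritable) pairs,
    // the same layout saveAsObjectFile produces.
    rdd.mapPartitions(_.grouped(10).map(_.toArray))
      .map(x => (NullWritable.get(), new BytesWritable(serialize(x))))
      .saveAsHadoopFile(
        "/tmp/compressed-objects",   // placeholder output path
        classOf[NullWritable],
        classOf[BytesWritable],
        classOf[SequenceFileOutputFormat[NullWritable, BytesWritable]],
        conf)

    sc.stop()
  }
}

If that layout really matches, sc.objectFile should still be able to read the
result back, since SequenceFile readers decompress transparently.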


>
> Guillaume
>
>   But why is it that Hadoop compression doesn't work for saveAsObjectFile(),
> but it does work (according to Guillaume) for saveAsHadoopFile()?
>
>
> --
>  *Guillaume PITEL, Président*
> +33(0)6 25 48 86 80 / +33(0)9 70 44 67 53
>
>  eXenSa S.A.S. <http://www.exensa.com/>
>  41, rue Périer - 92120 Montrouge - FRANCE
> Tel +33(0)1 84 16 36 77 / Fax +33(0)9 72 28 37 05
>
