It looks like you hit https://issues.apache.org/jira/browse/SPARK-7837.
As I understand it, this occurs when there is skew in unpartitioned data.

Can you try repartitioning the model before saving it?
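Something along these lines might work as a stopgap (a rough Scala sketch;
numPartitions is a placeholder you would need to tune, and I haven't verified
this against 1.4.1). It repartitions the factor RDDs and rebuilds the model
before calling save:

    import org.apache.spark.mllib.recommendation.MatrixFactorizationModel

    // Repartition the user/product factor RDDs so no partition is empty or
    // heavily skewed, then rebuild the model and save the rebuilt copy.
    val numPartitions = 100  // placeholder: choose based on your data size
    val repartitionedModel = new MatrixFactorizationModel(
      model.rank,
      model.userFeatures.repartition(numPartitions),
      model.productFeatures.repartition(numPartitions))
    repartitionedModel.save(sc, outPath)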

On Sat, Sep 5, 2015 at 11:16 PM, Madawa Soysa <madawa...@cse.mrt.ac.lk>
wrote:

> outPath is correct. In that path there are two directories, data and
> metadata. The data directory contains the following structure:
>
> |-data
> |----user
> |--------_temporary
> |------------ 0
> |----------------_temporary
>
> But nothing is written inside the folders. I'm using Spark 1.4.1.
>
> On 6 September 2015 at 08:53, Yanbo Liang <yblia...@gmail.com> wrote:
>
>> Please check the "outPath" and verify whether the save succeeded.
>> Which version did you use?
>> You may have hit this issue
>> <https://issues.apache.org/jira/browse/SPARK-7837>, which is resolved in
>> version 1.5.
>>
>> 2015-09-05 21:47 GMT+08:00 Madawa Soysa <madawa...@cse.mrt.ac.lk>:
>>
>>> Hi All,
>>>
>>> I'm getting an error when trying to save an ALS MatrixFactorizationModel.
>>> I'm using the following method to save the model.
>>>
>>> *model.save(sc, outPath)*
>>>
>>> I'm getting the following exception when saving the model. I have
>>> attached the full stack trace. Any help resolving this issue would be
>>> appreciated.
>>>
>>> org.apache.spark.SparkException: Job aborted.
>>>         at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.insert(commands.scala:166)
>>>         at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.run(commands.scala:139)
>>>         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
>>>         at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
>>>         at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
>>>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>         at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>         at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>>>         at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>>>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
>>>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
>>>         at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:336)
>>>         at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:144)
>>>         at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:135)
>>>         at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:281)
>>>         at org.apache.spark.mllib.recommendation.MatrixFactorizationModel$SaveLoadV1_0$.save(MatrixFactorizationModel.scala:284)
>>>         at org.apache.spark.mllib.recommendation.MatrixFactorizationModel.save(MatrixFactorizationModel.scala:141)
>>>
>>>
>>> Thanks,
>>> Madawa
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>