> java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>
> It is really strange, because the count at ALS.scala:314 is already outside
> the loop of iterations. Any idea?
> Thanks a lot in advance.
>
> FYI: I used Spark 1.1.0, and ALS.train() works pretty well for all the
> cases.
> org.apache.spark.rdd.CoGroupedRDD.compute(CoGroupedRDD.scala:158)
> org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$5.apply(CoGroupedRDD.scala:159)
> org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$5.apply(CoGroupedRDD.scala:158)
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> org.apache.spark.rdd.MappedValuesRDD.compute(MappedValuesRDD.scala:31)
> org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> org.apache.spark.scheduler.Task.run(Task.scala:54)
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/ALS-implicit-error-pyspark-tp16595.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.