Re: SparkContext with error from PySpark

2014-12-30 Thread Josh Rosen
…r(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

14/12/31 04:49:58 INFO TaskSetMana…

Re: SparkContext with error from PySpark

2014-12-30 Thread JAGANADH G
…java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

14/12/31 04:49:58 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, aster4, NODE_LOCAL, 1321 bytes)…

Re: SparkContext with error from PySpark

2014-12-30 Thread Eric Friedman
…INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on aster4:43309 (size: 3.8 KB, free: 265.0 MB)
14/12/31 04:49:59 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on executor aster4: org.apache.spark.SparkException (…

SparkContext with error from PySpark

2014-12-30 Thread Jaggu
…TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on executor aster4: org.apache.spark.SparkException (…

Any clue how to resolve the same?

Best regards,
Jagan

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/SparkContext-with-error-from-PySpark-tp20907.html
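The TaskSetManager lines in this thread show Spark retrying a failed task: "task 0.1" is the first retry of task 0 in stage 0, and the stage aborts once a task has failed too many times. A minimal illustrative sketch of that retry policy in plain Python (not Spark's actual code), assuming the default spark.task.maxFailures value of 4:

```python
# Illustrative sketch of Spark's task-retry behavior, as reflected in the
# "Starting task 0.1" / "Lost task 0.1 in stage 0.0" log lines above.
# This is NOT Spark's implementation, just the retry semantics.

MAX_FAILURES = 4  # default value of the real spark.task.maxFailures setting


def run_with_retries(task, max_failures=MAX_FAILURES):
    """Run task(attempt) up to max_failures times; re-raise-style abort after that."""
    for attempt in range(max_failures):
        try:
            return task(attempt)  # attempt N corresponds to "task 0.N" in the logs
        except Exception as exc:
            print(f"Lost task 0.{attempt}: {exc}")
    raise RuntimeError(f"Task failed {max_failures} times; aborting stage")


def flaky_task(attempt):
    """Hypothetical task: fails on the first attempt, succeeds on the retry."""
    if attempt == 0:
        raise ValueError("boom")  # like task 0.0 failing above
    return 42


result = run_with_retries(flaky_task)  # succeeds on attempt 1 ("task 0.1")
```

If every attempt fails, as appears to be happening in this thread, the loop exhausts its attempts and the whole stage is aborted with a final exception.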