It should be fixed in 1.1+.

Do you have a script that reproduces it?
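
If it helps, here is a rough, untested sketch of the kind of self-contained
script I have in mind (the app name and sizes are arbitrary); the idea is
simply to request far more partitions than there are keys, which seems to be
where the sample-based bounds computation in sortByKey runs short on 1.0.x:

    from pyspark import SparkContext

    # Rough sketch: spread a tiny key/value RDD across many partitions so
    # the range-partitioning sample in sortByKey can come up short of the
    # requested number of bounds.
    sc = SparkContext(appName="SortByKeyRepro")
    rdd = sc.parallelize([(i, i) for i in range(10)], 100)
    print(rdd.sortByKey().collect())
    sc.stop()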

On Thu, Nov 6, 2014 at 10:39 AM, skane <sk...@websense.com> wrote:
> I don't have any insight into this bug, but on Spark 1.0.0 I ran into the
> same issue running the 'sort.py' example. On a smaller data set it worked
> fine; on a larger data set I got this error:
>
> Traceback (most recent call last):
>   File "/home/skane/spark/examples/src/main/python/sort.py", line 30, in
> <module>
>     .sortByKey(lambda x: x)
>   File "/usr/lib/spark/python/pyspark/rdd.py", line 480, in sortByKey
>     bounds.append(samples[index])
> IndexError: list index out of range
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
