[ https://issues.apache.org/jira/browse/SPARK-12100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15557219#comment-15557219 ]
holdenk commented on SPARK-12100:
---------------------------------

Just noting related progress in https://github.com/apache/spark/pull/11211 / SPARK-13330

> bug in spark/python/pyspark/rdd.py portable_hash()
> --------------------------------------------------
>
>                 Key: SPARK-12100
>                 URL: https://issues.apache.org/jira/browse/SPARK-12100
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.5.1
>            Reporter: Andrew Davidson
>            Priority: Minor
>              Labels: hashing, pyspark
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> I am using spark-1.5.1-bin-hadoop2.6. I used spark-1.5.1-bin-hadoop2.6/ec2/spark-ec2 to create a cluster and configured spark-env to use python3. I get an exception 'Randomness of hash of string should be disabled via PYTHONHASHSEED'. Is there any reason rdd.py should not just set PYTHONHASHSEED itself?
> Should I file a bug?
> Kind regards
> Andy
>
> Details:
> http://spark.apache.org/docs/latest/api/python/pyspark.html?highlight=subtract#pyspark.RDD.subtract
>
> The example from the documentation does not work out of the box:
>
> subtract(other, numPartitions=None)
>     Return each value in self that is not contained in other.
>
>     >>> x = sc.parallelize([("a", 1), ("b", 4), ("b", 5), ("a", 3)])
>     >>> y = sc.parallelize([("a", 3), ("c", None)])
>     >>> sorted(x.subtract(y).collect())
>     [('a', 1), ('b', 4), ('b', 5)]
>
> It raises:
>
>     if sys.version >= '3.3' and 'PYTHONHASHSEED' not in os.environ:
>         raise Exception("Randomness of hash of string should be disabled via PYTHONHASHSEED")
>
> The following script fixes the problem:
>
>     sudo printf "\n# set PYTHONHASHSEED so python3 will not raise 'Randomness of hash of string should be disabled via PYTHONHASHSEED'\nexport PYTHONHASHSEED=123\n" >> /root/spark/conf/spark-env.sh
>     sudo pssh -i -h /root/spark-ec2/slaves cp /root/spark/conf/spark-env.sh /root/spark/conf/spark-env.sh-`date "+%Y-%m-%d:%H:%M"`
>     for i in `cat slaves`; do scp spark-env.sh root@$i:/root/spark/conf/spark-env.sh; done
>
> This is how I am starting Spark:
>
>     export PYSPARK_PYTHON=python3.4
>     export PYSPARK_DRIVER_PYTHON=python3.4
>     export IPYTHON_OPTS="notebook --no-browser --port=7000 --log-level=WARN"
>     $SPARK_ROOT/bin/pyspark \
>         --master $MASTER_URL \
>         --total-executor-cores $numCores \
>         --driver-memory 2G \
>         --executor-memory 2G \
>         $extraPkgs \
>         $*
>
> See the email thread "possible bug spark/python/pyspark/rdd.py portable_hash()" on user@spark for more info.
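
To make the failure mode concrete: Python 3.3+ randomizes string hashing per interpreter process, so two executors can disagree on hash() of the same key and route it to different partitions, which would silently corrupt hash-partitioned operations like subtract(), join(), and distinct(). The check in rdd.py quoted above exists to fail loudly instead. A minimal demonstration in plain Python 3 (a sketch, no Spark needed, not from the issue itself):

    import os
    import subprocess
    import sys

    # Each Python 3 process picks its own random string-hash seed by default,
    # so the two child interpreters below usually print different values:
    for _ in range(2):
        print(subprocess.check_output(
            [sys.executable, "-c", "print(hash('a'))"]).decode().strip())

    # Pinning PYTHONHASHSEED in the environment makes every process agree,
    # which is what the spark-env.sh edit in the report accomplishes:
    env = dict(os.environ, PYTHONHASHSEED="123")
    for _ in range(2):
        print(subprocess.check_output(
            [sys.executable, "-c", "print(hash('a'))"], env=env).decode().strip())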
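
On the fix referenced above: a less invasive alternative to rewriting spark-env.sh on every node is forwarding the seed through Spark's per-executor environment settings. This is a hedged sketch only: the spark.executorEnv.* property is standard, but whether the value actually reaches the Python worker processes on 1.5.x is precisely what SPARK-13330 / PR 11211 addresses, so treat it as version-dependent.

    from pyspark import SparkConf, SparkContext

    # Assumption: a Spark version in which spark.executorEnv.* values are
    # propagated all the way to the Python workers (see SPARK-13330).
    conf = (SparkConf()
            .setAppName("pythonhashseed-workaround")
            .set("spark.executorEnv.PYTHONHASHSEED", "123"))
    sc = SparkContext(conf=conf)

    # Note: the driver's own interpreter still needs PYTHONHASHSEED exported
    # in the shell before pyspark starts, because hash randomization is fixed
    # at interpreter startup and cannot be changed from inside the process.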
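
Once the seed is supposedly distributed, a quick sanity check from the pyspark shell confirms every worker sees the same value before relying on hash-partitioned operations (sc is the usual shell SparkContext; the helper name is made up for illustration; collect() plus a local set() is used deliberately so the check itself needs no hash-based shuffle and still works on a misconfigured cluster):

    import os

    def seed_seen_by_worker(_):
        # Runs on the executors; reports what each Python worker sees.
        return os.environ.get("PYTHONHASHSEED", "<unset>")

    seeds = set(sc.parallelize(range(100), 20)
                  .map(seed_seen_by_worker)
                  .collect())
    print(seeds)  # a healthy cluster prints exactly one value, e.g. {'123'}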