[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...
Github user GregBowyer commented on a diff in the pull request: https://github.com/apache/spark/pull/14517#discussion_r81665506

--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
     except py4j.protocol.Py4JError:
        spark = SparkSession(sc)
+    seed = int(time() * 1000)
--- End diff --

I have been really busy with work of late, but I will try to sort this out today.

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.

---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
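The added line seeds the doctest randomness from the wall clock. A minimal sketch of the idea (variable names beyond `seed` are illustrative, not from the patch): recording the millisecond seed lets a failing run be replayed with the exact same random choices.

```python
import random
from time import time

# Millisecond-resolution seed, as in the diff; a test harness would log
# this value alongside any failure.
seed = int(time() * 1000)
rng = random.Random(seed)

# A second generator built from the same recorded seed replays the
# identical sequence of choices.
replay = random.Random(seed)
```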
[GitHub] spark issue #9766: [SPARK-11775][PYSPARK][SQL] Allow PySpark to register Jav...
Github user GregBowyer commented on the issue: https://github.com/apache/spark/pull/9766

Where do we stand on this? I just reapplied this patch to a Spark 2.1-xxx build to get the same behaviour.
[GitHub] spark issue #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy and sort...
Github user GregBowyer commented on the issue: https://github.com/apache/spark/pull/14517

What thoughts do people have about merging this in?
[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...
Github user GregBowyer commented on a diff in the pull request: https://github.com/apache/spark/pull/14517#discussion_r75011876

--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None, compression=None):
         This will override ``orc.compress``. If None is set, it uses the
         default value, ``snappy``.

-        >>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --

I actually have some small changes for ORC that relate to a previous pull request I cleaned up.
[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...
Github user GregBowyer commented on a diff in the pull request: https://github.com/apache/spark/pull/14517#discussion_r75011758

--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None, compression=None):
         This will override ``orc.compress``. If None is set, it uses the
         default value, ``snappy``.

-        >>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --

Ah, sorry. I was going to look into making the test do the Lucene-style random-testing thing of switching between the data formats provided for `df` at random. I was going to change the runner to use `random.choice` to pick between ORC and Parquet (and, you know, one day Arrow, HDF5, whatever).
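A rough sketch of the random-format idea described above, under the assumption of a hypothetical `pick_format` helper; seeding the generator means a failing pick can be reproduced exactly on a re-run.

```python
import random

# Formats the doctests could exercise; Arrow, HDF5, etc. could be
# appended here later without touching the runner.
FORMATS = ["orc", "parquet"]

def pick_format(seed):
    """Pick one on-disk format at random, Lucene-test style.

    The caller logs ``seed`` with any failure; passing the same seed
    back in reproduces the same pick.
    """
    return random.Random(seed).choice(FORMATS)
```

A test runner would then read the DataFrame with the chosen format's reader and run the same assertions regardless of which one was picked.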
[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...
Github user GregBowyer commented on a diff in the pull request: https://github.com/apache/spark/pull/14517#discussion_r75011462

--- Diff: python/pyspark/sql/readwriter.py ---
@@ -733,11 +774,19 @@ def _test():
     import os
     import tempfile
     import py4j
+    import shutil
     from pyspark.context import SparkContext
     from pyspark.sql import SparkSession, Row
     import pyspark.sql.readwriter
-    os.chdir(os.environ["SPARK_HOME"])
+    spark_home = os.path.realpath(os.environ["SPARK_HOME"])
+
+    test_dir = tempfile.mkdtemp()
+    os.chdir(test_dir)
+
+    path = lambda x, y, z: os.path.join(x, y)
+
+    shutil.copytree(path(spark_home, 'python', 'test_support'),
+                    path(test_dir, 'python', 'test_support'))
--- End diff --

Thanks for the note. I was getting annoyed at not knowing where to find the tools for such things.
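For reference, a self-contained sketch of what the setup in the diff appears to be doing, with the `path` helper written to join all of its arguments (the lambda in the diff accepts three but joins only two). Names follow the diff, but this is an illustration, not the patch itself.

```python
import os
import shutil
import tempfile

def setup_doctest_dir(spark_home):
    """Run doctests from a scratch directory, copying the test fixtures
    in so that writes never land inside the source tree."""
    test_dir = tempfile.mkdtemp()
    os.chdir(test_dir)

    src = os.path.join(spark_home, "python", "test_support")
    dst = os.path.join(test_dir, "python", "test_support")
    # Guarded so the sketch also runs outside a Spark checkout.
    if os.path.isdir(src):
        shutil.copytree(src, dst)
    return test_dir
```

The scratch directory would normally be removed with `shutil.rmtree(test_dir)` once the doctests finish.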
[GitHub] spark issue #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy and sort...
Github user GregBowyer commented on the issue: https://github.com/apache/spark/pull/14517

Amended the commit with style changes from MLNick. Can someone trigger the "ok to test", please?
[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...
GitHub user GregBowyer opened a pull request: https://github.com/apache/spark/pull/14517

[SPARK-16931][PYTHON] PySpark APIS for bucketBy and sortBy

## What changes were proposed in this pull request?

API access to allow PySpark to use bucketBy and sortBy on DataFrames.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/GregBowyer/spark pyspark-bucketing

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/14517.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #14517

commit 47d9ef797e229b9e3239c5dcb7ea72bef1c54683
Author: Greg Bowyer
Date: 2016-08-06T00:53:30Z

    [SPARK-16931][PYTHON] PySpark APIS for bucketBy and sortBy