[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2017-02-26 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r103135331
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

@zero323, Good to know. Then, please go ahead if you are ready :).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2017-02-26 Thread zero323
Github user zero323 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r103134846
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

@HyukjinKwon By all means. I prepared a bunch of tests 
(7d911c647f21ada7fb429fd7c1c5f15934ff8847) and extended a bit the code provided by 
@GregBowyer (72c04a3f196da5223ebb44725aa88cffa81036e4). I think we can skip the 
low-level tests (direct access to the files), which are already present in the 
Scala test base.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2017-02-26 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r103122180
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

cc @zero323, would you maybe be interested in taking this over?





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2017-02-17 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14517





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2017-02-09 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r100310447
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

@GregBowyer ping. I propose closing this after a week if there is no response.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2017-01-23 Thread zero323
Github user zero323 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r97455456
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

@GregBowyer Any progress on this? :)





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-11-28 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r89945485
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

@GregBowyer ping





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-10-03 Thread GregBowyer
Github user GregBowyer commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r81665506
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

I have been really busy with work of late, but I will try to sort this out 
today





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-10-03 Thread davies
Github user davies commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r81641574
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

@GregBowyer ping





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-09-02 Thread davies
Github user davies commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r77420949
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None, compression=None):
 This will override ``orc.compress``. If None is set, it uses the default value, ``snappy``.
 
->>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --

@GregBowyer Could you exclude these changes from this PR? That could help 
this PR get merged more easily.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-09-02 Thread davies
Github user davies commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r77420246
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -747,16 +800,25 @@ def _test():
 except py4j.protocol.Py4JError:
 spark = SparkSession(sc)
 
+seed = int(time() * 1000)
--- End diff --

It's better to have a deterministic test; testing with parquet should be enough.
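To illustrate the point, a minimal sketch (the constant name is an assumption, not Spark code): deriving the seed from `time()` makes every doctest run unique, while a fixed seed makes failures reproducible.

```python
import random

# A time-derived seed (as in the diff above) makes every doctest run unique,
# so a failure is hard to reproduce. A fixed seed keeps runs deterministic.
FIXED_SEED = 42  # hypothetical constant; any agreed-upon value works

random.seed(FIXED_SEED)
first_run = [random.randint(0, 100) for _ in range(5)]

random.seed(FIXED_SEED)
second_run = [random.randint(0, 100) for _ in range(5)]

# Re-seeding with the same value reproduces the exact same sequence.
assert first_run == second_run
```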





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-17 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r75199740
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -733,11 +774,19 @@ def _test():
 import os
 import tempfile
 import py4j
+import shutil
 from pyspark.context import SparkContext
 from pyspark.sql import SparkSession, Row
 import pyspark.sql.readwriter
 
-os.chdir(os.environ["SPARK_HOME"])
+spark_home = os.path.realpath(os.environ["SPARK_HOME"])
+
+test_dir = tempfile.mkdtemp()
+os.chdir(test_dir)
+
+path = lambda x, y, z: os.path.join(x, y)
+
+shutil.copytree(path(spark_home, 'python', 'test_support'), path(test_dir, 'python', 'test_support'))
--- End diff --

Sure thing - I'm always happy to help people get up to speed with 
contributing to PySpark so feel free to reach out to me if you get stuck with 
something similar.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-16 Thread GregBowyer
Github user GregBowyer commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r75011876
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None, compression=None):
 This will override ``orc.compress``. If None is set, it uses the default value, ``snappy``.
 
->>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --

I actually have some small changes for ORC that relate to a previous pull 
request I cleaned up





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-16 Thread GregBowyer
Github user GregBowyer commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r75011758
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None, compression=None):
 This will override ``orc.compress``. If None is set, it uses the default value, ``snappy``.
 
->>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --

Ah sorry, I was going to look into making the test do the Lucene random-testing 
thing of randomly switching between the data formats provided for `df`.

I was going to change the runner to use `random.choice` to pick between orc 
and parquet (and, you know, one day arrow, hdf5, whatever).
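A sketch of that idea (the names here are assumptions, not Spark code): pick the storage format for the test DataFrame at random, Lucene-style, from a seeded RNG so a failing run can be replayed from its logged seed.

```python
import random

# Assumed list of candidate formats; arrow, hdf5, etc. could join later.
SUPPORTED_FORMATS = ['orc', 'parquet']

def pick_format(seed):
    # A local RNG instance avoids disturbing global random state.
    rng = random.Random(seed)
    return rng.choice(SUPPORTED_FORMATS)

# The same seed always yields the same format choice, so a test failure
# can be reproduced by logging and reusing the seed.
fmt = pick_format(1234)
```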





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-16 Thread GregBowyer
Github user GregBowyer commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r75011462
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -733,11 +774,19 @@ def _test():
 import os
 import tempfile
 import py4j
+import shutil
 from pyspark.context import SparkContext
 from pyspark.sql import SparkSession, Row
 import pyspark.sql.readwriter
 
-os.chdir(os.environ["SPARK_HOME"])
+spark_home = os.path.realpath(os.environ["SPARK_HOME"])
+
+test_dir = tempfile.mkdtemp()
+os.chdir(test_dir)
+
+path = lambda x, y, z: os.path.join(x, y)
+
+shutil.copytree(path(spark_home, 'python', 'test_support'), path(test_dir, 'python', 'test_support'))
--- End diff --

Thanks for the note; I was getting annoyed at not knowing where to find the 
tools for such things.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-16 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r74984295
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -692,8 +734,7 @@ def orc(self, path, mode=None, partitionBy=None, compression=None):
 This will override ``orc.compress``. If None is set, it uses the default value, ``snappy``.
 
->>> orc_df = spark.read.orc('python/test_support/sql/orc_partitioned')
--- End diff --

Why did you get rid of the orc reading test?





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-16 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r74983999
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -733,11 +774,19 @@ def _test():
 import os
 import tempfile
 import py4j
+import shutil
 from pyspark.context import SparkContext
 from pyspark.sql import SparkSession, Row
 import pyspark.sql.readwriter
 
-os.chdir(os.environ["SPARK_HOME"])
+spark_home = os.path.realpath(os.environ["SPARK_HOME"])
+
+test_dir = tempfile.mkdtemp()
+os.chdir(test_dir)
+
+path = lambda x, y, z: os.path.join(x, y)
+
+shutil.copytree(path(spark_home, 'python', 'test_support'), path(test_dir, 'python', 'test_support'))
--- End diff --

pep8 is saying this line is too long (over 100 chars), so it's failing the 
style tests. You can run the style tests locally with `./dev/lint-python` as 
well, for a faster turnaround than Jenkins :)
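One hedged way to rework the flagged line (a sketch, not the committed fix): binding both paths to names first keeps every line well under the 100-character limit. Note also that the three-argument `path` lambda in the diff joins only its first two arguments, so the third component is silently dropped; plain `os.path.join` avoids that.

```python
import os
import shutil
import tempfile

# Stand-ins for SPARK_HOME and the test dir so the sketch is self-contained.
spark_home = tempfile.mkdtemp()
test_dir = tempfile.mkdtemp()

# Short intermediate names keep each line under the 100-char pep8 limit.
src = os.path.join(spark_home, 'python', 'test_support')
dst = os.path.join(test_dir, 'python', 'test_support')

os.makedirs(src)  # give the stand-in source tree a file to copy
open(os.path.join(src, 'data.txt'), 'w').close()

# copytree creates intermediate directories under dst as needed.
shutil.copytree(src, dst)
```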





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-08 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r73884828
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -500,6 +500,41 @@ def partitionBy(self, *cols):
 self._jwrite = self._jwrite.partitionBy(_to_seq(self._spark._sc, cols))
 return self
 
+@since(2.0)
+def bucketBy(self, numBuckets, *cols):
+"""Buckets the output by the given columns on the file system.
+
+:param numBuckets: the number of buckets to save
+:param cols: name of columns
+
+>>> df.write.format('parquet').bucketBy('year', 'month').saveAsTable('bucketed_table')
+"""
+if len(cols) == 1 and isinstance(cols[0], (list, tuple)):
+cols = cols[0]
+
+col = cols[0]
+cols = cols[1:]
+
+self._jwrite = self._jwrite.bucketBy(numBuckets, col, _to_seq(self._spark._sc, cols))
+return self
+
+@since(2.0)
+def sortBy(self, *cols):
+"""Sorts the output in each bucket by the given columns on the file system.
+
+:param cols: name of columns
+
+>>> df.write.format('parquet').bucketBy('year', 'month').sortBy('day').saveAsTable('sorted_bucketed_table')
--- End diff --

Similarly, here there is no `numBuckets` arg.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-08 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r73884740
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -500,6 +500,41 @@ def partitionBy(self, *cols):
 self._jwrite = self._jwrite.partitionBy(_to_seq(self._spark._sc, cols))
 return self
 
+@since(2.0)
+def bucketBy(self, numBuckets, *cols):
+"""Buckets the output by the given columns on the file system.
+
+:param numBuckets: the number of buckets to save
+:param cols: name of columns
+
+>>> df.write.format('parquet').bucketBy('year', 'month').saveAsTable('bucketed_table')
--- End diff --

This test has no `numBuckets` arg in the `bucketBy` method call
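A hypothetical mock of the `bucketBy` signature from the diff (pure Python, not PySpark itself; the type check is an illustration, not what Spark does) shows why the docstring example fails: with no leading `numBuckets` argument, the first column name is swallowed as the bucket count.

```python
# Mock of the diff's signature: first positional argument is numBuckets.
def bucket_by(num_buckets, *cols):
    # Illustrative guard; the real code would fail later, on the JVM side.
    if not isinstance(num_buckets, int):
        raise TypeError('numBuckets should be an int, got %r' % (num_buckets,))
    # Same column normalization as in the diff above.
    if len(cols) == 1 and isinstance(cols[0], (list, tuple)):
        cols = cols[0]
    return num_buckets, tuple(cols)

# Correct call: bucket count first, then the bucketing columns.
ok = bucket_by(4, 'year', 'month')

# The docstring's call: 'year' lands in num_buckets and is rejected.
try:
    bucket_by('year', 'month')
    mistake_caught = False
except TypeError:
    mistake_caught = True
```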





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-08 Thread MLnick
Github user MLnick commented on a diff in the pull request:

https://github.com/apache/spark/pull/14517#discussion_r73884350
  
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -500,6 +500,41 @@ def partitionBy(self, *cols):
 self._jwrite = self._jwrite.partitionBy(_to_seq(self._spark._sc, cols))
 return self
 
+@since(2.0)
--- End diff --

The `@since` version should be `2.1`, since I don't think this will go into branch-2.0.





[GitHub] spark pull request #14517: [SPARK-16931][PYTHON] PySpark APIS for bucketBy a...

2016-08-05 Thread GregBowyer
GitHub user GregBowyer opened a pull request:

https://github.com/apache/spark/pull/14517

[SPARK-16931][PYTHON] PySpark APIS for bucketBy and sortBy

## What changes were proposed in this pull request?
API access to allow PySpark to use bucketBy and sortBy on DataFrames.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/GregBowyer/spark pyspark-bucketing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14517.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14517


commit 47d9ef797e229b9e3239c5dcb7ea72bef1c54683
Author: Greg Bowyer 
Date:   2016-08-06T00:53:30Z

[SPARK-16931][PYTHON] PySpark APIS for bucketBy and sortBy



