Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8948#issuecomment-145356466
Adding @rxin
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8948
[SPARK-7869][SQL] Adding Postgres JSON and JSONb data types support
This PR addresses
[SPARK-7869](https://issues.apache.org/jira/browse/SPARK-7869)
Before the patch, an attempt to load the
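For orientation, here is a minimal PySpark usage sketch of what this PR is meant to enable; the connection URL, table name, and `payload` column are placeholders, and the Postgres JDBC driver would also need to be on the classpath. The point is only that a Postgres `json`/`jsonb` column should load as a plain string column on the Spark side instead of failing:

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local[1]", "jsonb-jdbc-sketch")
sqlContext = SQLContext(sc)

# Hypothetical Postgres connection and table; "payload" stands for a json/jsonb column.
df = sqlContext.read.jdbc(
    url="jdbc:postgresql://localhost:5432/testdb?user=test&password=test",
    table="events",
)

# With json/jsonb mapped to a string type, schema resolution succeeds and the
# JSON text can be post-processed on the Spark side.
df.printSchema()
df.select("payload").show()
```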
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139476639
Agreed; with commit aecc0c2 I reverted to the first option and replaced
`float` with `_create_column_from_literal`, as proposed
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139384669
Please also check out the implementation from the last commit. In my opinion
it is much more consistent. I just cannot implement `_pow` in `column.py`; it
looks much like a
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39195144
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39192640
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39189353
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139131882
@davies, could you take a look, please?
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-138851028
@holdenk, there are two ways of implementing this:
1. The one I've done: adding functionality to utilize binary function
operation
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8658
[SPARK-9014][SQL] Allow Python Spark API to use built-in exponential
operator
This PR addresses
[SPARK-9014](https://issues.apache.org/jira/browse/SPARK-9014)
Added functionality: `Column
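For readers less familiar with the Python side, a self-contained toy illustration of the mechanics this PR relies on (this is not the `column.py` code, and the class and helper names are made up): supporting `col ** 2` and the reflected form `2 ** col` comes down to defining the `__pow__` and `__rpow__` special methods, both delegating to a binary "pow" operation:

```python
class FakeColumn(object):
    """Toy stand-in for pyspark.sql.Column, only to show the dunder wiring."""

    def __init__(self, expr):
        self.expr = expr

    def _bin_op(self, name, other):
        # Wrap plain Python literals so both operands render as expressions.
        other_expr = other.expr if isinstance(other, FakeColumn) else repr(other)
        return FakeColumn("%s(%s, %s)" % (name, self.expr, other_expr))

    def __pow__(self, other):      # column ** other
        return self._bin_op("pow", other)

    def __rpow__(self, other):     # other ** column (reflected operand order)
        return FakeColumn(repr(other))._bin_op("pow", self)

    def __repr__(self):
        return "Column<%s>" % self.expr


c = FakeColumn("age")
print(c ** 2)   # Column<pow(age, 2)>
print(2 ** c)   # Column<pow(2, age)>
```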
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8574#issuecomment-137165930
Jenkins, retest this please
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8574#issuecomment-137142934
Looks like it's not being retested after the last commit: Jenkins failed
to update the status, and the dashboard shows that it's still running. Am I
right
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8574#issuecomment-137128350
@cloud-fan, I addressed your comments with last commit
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8574
[SPARK-10417][SQL] Iterating through Column results in infinite loop
The `pyspark.sql.column.Column` object has a `__getitem__` method, which makes it
iterable in Python. In fact it has `__getitem__` to
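As a standalone illustration of the Python behaviour behind this bug (a toy class, not the actual Column code): any object that defines `__getitem__` is accepted by `iter()` through the legacy sequence protocol, and because the method never raises `IndexError`, a `for` loop over it never terminates. One common remedy is an `__iter__` that raises `TypeError`:

```python
class Itemized(object):
    """Defines only __getitem__; the legacy protocol makes it 'iterable'."""
    def __getitem__(self, key):
        return key          # never raises IndexError, so iteration never stops

class NonIterable(Itemized):
    def __iter__(self):
        raise TypeError("Column is not iterable")

# list(Itemized()) would loop forever: __getitem__(0), __getitem__(1), ...
it = iter(Itemized())
print(next(it), next(it))   # 0 1 ... and so on without end

try:
    iter(NonIterable())
except TypeError as e:
    print(e)                # Column is not iterable
```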
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8556#issuecomment-136862025
Jenkins, retest this please.
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8556#issuecomment-136862005
The mllib test failed for Python 2.6; I didn't change anything that might have
affected it. The same test passes locally on my machine
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8556#issuecomment-136852809
Moved regression test to DataTypeTests class
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8556#discussion_r38462716
--- Diff: python/pyspark/sql/types.py ---
@@ -168,10 +168,12 @@ def needConversion(self):
return True
def toInternal(self, d
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8555#discussion_r38461951
--- Diff: python/pyspark/sql/types.py ---
@@ -1290,8 +1290,9 @@ def can_convert(self, obj):
def convert(self, obj, gateway_client
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8556#issuecomment-136836673
Added regression test
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8556#discussion_r38460209
--- Diff: python/pyspark/sql/types.py ---
@@ -168,10 +168,12 @@ def needConversion(self):
return True
def toInternal(self, d
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8556
[SPARK-10392] [SQL] Pyspark - Wrong DateType support on JDBC connection
This PR addresses issue
[SPARK-10392](https://issues.apache.org/jira/browse/SPARK-10392)
The problem is that for "sta
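For background on the representation this PR touches (a generic sketch, not the actual `types.py` code): Spark SQL stores a `DateType` value internally as the number of days since 1970-01-01, so the Python-side conversion reduces to ordinal arithmetic on `datetime.date`:

```python
import datetime

EPOCH_ORDINAL = datetime.date(1970, 1, 1).toordinal()

def date_to_internal(d):
    """datetime.date -> days since 1970-01-01 (Spark SQL's internal DateType form)."""
    return d.toordinal() - EPOCH_ORDINAL

def internal_to_date(days):
    """days since 1970-01-01 -> datetime.date."""
    return datetime.date.fromordinal(days + EPOCH_ORDINAL)

print(date_to_internal(datetime.date(2015, 9, 1)))   # 16679
print(internal_to_date(16679))                       # 2015-09-01
```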
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8555
[SPARK-10162] [SQL] Fix the timezone omitting for PySpark Dataframe filter
function
This PR addresses
[SPARK-10162](https://issues.apache.org/jira/browse/SPARK-10162)
The issue is with
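As a small standalone sketch of the general pitfall the title refers to (not this PR's diff): converting a timezone-aware `datetime` to an epoch timestamp has to honour `tzinfo`; going through local-time `mktime` silently drops the offset, while `calendar.timegm` over `utctimetuple()` preserves it:

```python
import calendar
import datetime
import time

aware = datetime.datetime(2015, 9, 1, 12, 0, 0, tzinfo=datetime.timezone.utc)

# timetuple() keeps the wall-clock fields and mktime() reads them as *local* time,
# so the timezone offset is lost and the result shifts by the machine's UTC offset.
local_seconds = time.mktime(aware.timetuple())

# utctimetuple() first normalizes to UTC and timegm() reads the fields as UTC,
# so the aware datetime maps to the correct instant on any machine.
utc_seconds = calendar.timegm(aware.utctimetuple())

print(local_seconds, utc_seconds)  # equal only if the machine's local zone is UTC
```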
Github user 0x0FFF closed the pull request at:
https://github.com/apache/spark/pull/8536
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8536#issuecomment-136446006
A unit test is added. Changed the `UTC` class definition in
`python/pyspark/sql/tests.py` to avoid introducing an additional dependency on
`pytz` or duplicating a class with almost
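For reference, a minimal sketch of the kind of dependency-free `UTC` class the comment describes (the actual class in `tests.py` may differ in detail):

```python
import datetime

class UTC(datetime.tzinfo):
    """A fixed UTC tzinfo, avoiding an extra dependency on pytz."""

    ZERO = datetime.timedelta(0)

    def utcoffset(self, dt):
        return self.ZERO

    def tzname(self, dt):
        return "UTC"

    def dst(self, dt):
        return self.ZERO

# Usage: attach it to a datetime to get a timezone-aware value.
ts = datetime.datetime(2015, 9, 1, 12, 0, 0, tzinfo=UTC())
print(ts.isoformat())  # 2015-09-01T12:00:00+00:00
```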
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8536
[SPARK-10162] [SQL] Fix the timezone omitting for PySpark Dataframe filter
function
This PR addresses
[SPARK-10162](https://issues.apache.org/jira/browse/SPARK-10162)
The change applied