Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139476639
Agreed: with commit aecc0c2 I reverted to the first option and replaced
`float` with `_create_column_from_literal`, as proposed.
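A toy sketch (not the actual PySpark source) of why wrapping the other operand as a literal column beats coercing it with `float()`: the original value and its type survive, and Column operands pass through untouched. The `Column`, `_create_column_from_literal`, and `as_column` names below are illustrative stand-ins, assuming `_create_column_from_literal` wraps an arbitrary Python value as a literal expression:

```python
class Column:
    """Minimal stand-in for pyspark.sql.Column."""
    def __init__(self, expr):
        self.expr = expr

def _create_column_from_literal(value):
    # Hypothetical stand-in: wrap any Python literal in a literal
    # expression node, preserving the original value and its type.
    return Column(("lit", value))

def as_column(other):
    # Column operands are used as-is; plain Python values become literal
    # columns instead of being forced through float().
    return other if isinstance(other, Column) else _create_column_from_literal(other)

print(as_column(2).expr)   # ('lit', 2) -- the int survives; float(2) would give 2.0
```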
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139667259
LGTM, waiting for tests.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139667413
[Test build #1745 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/1745/consoleFull) for PR 8658 at commit
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139671003
[Test build #1745 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/1745/console) for PR 8658 at commit
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/8658
---
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139131882
@davies, could you take a look, please?
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39186863
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139316438
Jenkins, OK to test.
---
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39189353
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39192640
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39190746
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user 0x0FFF commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39195144
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139386251
@0x0FFF I think `**` only makes sense in Python, so we should not introduce
`**` into Scala (nor `pow`).
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139386349
cc @rxin
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139405282
+1 on not having this for Scala. There is already a `pow` function that does
`pow(x, y)`.
We should just do this for Python.
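A pure-Python illustration (no Spark involved) of why `**` support lives naturally in the Python API only: `col ** 2` calls `col.__pow__(2)`, and `2 ** col` falls back to `col.__rpow__(2)` because `int` does not know what a column object is. The `Expr` class below is a made-up stand-in, not PySpark code:

```python
class Expr:
    """Toy expression node standing in for a SQL column."""
    def __init__(self, text):
        self.text = text
    def __pow__(self, other):
        # Handles expr ** literal.
        return Expr("POW(%s, %s)" % (self.text, other))
    def __rpow__(self, other):
        # Handles literal ** expr: Python calls this after int.__pow__
        # returns NotImplemented for an unknown right operand.
        return Expr("POW(%s, %s)" % (other, self.text))

x = Expr("age")
print((x ** 2).text)   # POW(age, 2)
print((2 ** x).text)   # POW(2, age)
```

Scala has no such reflected-operator hook for `**`, which is why the thread settles on exposing it in Python only.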
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39212928
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-139384669
Please also check out the implementation from the last commit. In my opinion
it is much more consistent. I just cannot implement `_pow` in `column.py`; it
looks much like a
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r39193022
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user 0x0FFF commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-138851028
@holdenk, there are two ways of implementing this:
1. The one I've done: adding functionality to utilize binary function
operation from
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8658#issuecomment-138709945
Can one of the admins verify this patch?
---
GitHub user 0x0FFF opened a pull request:
https://github.com/apache/spark/pull/8658
[SPARK-9014][SQL] Allow Python spark API to use built-in exponential
operator
This PR addresses
[SPARK-9014](https://issues.apache.org/jira/browse/SPARK-9014)
Added functionality: `Column`
---
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r38985409
--- Diff: python/pyspark/sql/column.py ---
@@ -151,6 +162,8 @@ def __init__(self, jc):
__rdiv__ = _reverse_op("divide")
__rtruediv__ = _reverse_op("divide")
---
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/8658#discussion_r38985416
--- Diff: python/pyspark/sql/column.py ---
@@ -91,6 +91,17 @@ def _(self):
return _
+def _bin_func_op(name, reverse=False,
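The diff cuts off at the `_bin_func_op` signature. A rough sketch of the shape such a helper can take — a toy model, not the actual PySpark source: a factory that builds forward and reverse operator methods delegating to a named binary function, lifting a non-Column operand to a literal first. Here `pow_fn` and the `FUNCTIONS` dict stand in for the JVM-side `functions` object the real code dispatches to:

```python
class Column:
    """Toy stand-in for pyspark.sql.Column."""
    def __init__(self, expr):
        self.expr = expr

def _create_column_from_literal(value):
    # Stand-in: wrap a Python literal as a literal expression node.
    return Column(("lit", value))

def pow_fn(a, b):
    # Stand-in for the JVM-side functions.pow.
    return Column(("pow", a.expr, b.expr))

FUNCTIONS = {"pow": pow_fn}

def _bin_func_op(name, reverse=False, doc="binary function"):
    # Build an operator method that forwards both operands to the named
    # binary function; reverse=True swaps them for the reflected case.
    def _(self, other):
        jc = other if isinstance(other, Column) else _create_column_from_literal(other)
        fn = FUNCTIONS[name]
        return fn(self, jc) if not reverse else fn(jc, self)
    _.__doc__ = doc
    return _

Column.__pow__ = _bin_func_op("pow")
Column.__rpow__ = _bin_func_op("pow", reverse=True)

c = Column("age")
print((c ** 2).expr)   # ('pow', 'age', ('lit', 2))
print((2 ** c).expr)   # ('pow', ('lit', 2), 'age')
```

The `reverse` flag is what makes `2 ** col` work: `__rpow__` receives the literal on the left, so the factory swaps the arguments before calling the underlying function.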