Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46650863
BTW I really want this to go into 1.0.1, which will probably have a release
candidate soon. So if you have a chance to rebase your PR and add the cast,
please do. Thanks a lot!
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46676449
Merged build started.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46676431
Merged build triggered.
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46677366
Thanks for the quick review and patch, @rxin!
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46684490
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15959/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46684488
Merged build finished. All automated tests passed.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46715422
I tried `having.q` in Hive and got an error running `SELECT key FROM src
GROUP BY key HAVING max(value) > "val_255"`. The reason is that the output of
an `Aggregate` only
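The comment above is cut off, but the failure it describes is consistent with the HAVING predicate being resolved against the output of the `Aggregate` node, which exposes only the grouping and aggregate expressions. A toy model of that resolution constraint (hypothetical names, not Spark's actual Catalyst classes):

```scala
// Toy model: a filter placed above an aggregate can only reference the
// aggregate's output columns, i.e. the grouping expressions and the
// aggregate expressions themselves -- not the underlying input columns.
case class ToyAggregate(groupingExprs: Seq[String], aggregateExprs: Seq[String]) {
  def output: Seq[String] = groupingExprs ++ aggregateExprs
  def resolve(name: String): Option[String] = output.find(_ == name)
}

// SELECT key FROM src GROUP BY key HAVING max(value) > "val_255"
val agg = ToyAggregate(groupingExprs = Seq("key"), aggregateExprs = Seq("max(value)"))

// "key" and "max(value)" resolve against the output; the bare column
// "value" does not, which would make such a HAVING predicate unresolvable.
```

Under this model, a HAVING predicate that mentions a raw input column fails to resolve unless the aggregate expression itself is part of the output.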
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46724012
I'm going to merge this into master and branch-1.0. I will create a separate
ticket to track progress on HAVING. Basically there are two things missing:
1. HAVING
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1136
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46724443
@rxin, re: the former, seems like most implementations signal this as an
error.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46725494
BTW two follow-up tickets created:
https://issues.apache.org/jira/browse/SPARK-2225
https://issues.apache.org/jira/browse/SPARK-2226
Let me know if
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46725451
There are databases that support that, and it seems to me a very simple
change (actually just removing the check code you added is probably enough).
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46725581
OK, I wasn't sure if strict Hive compatibility was the goal. I'm happy to
take these tickets. Thanks again!
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46726272
I actually did 2225 already. I will assign 2226 to you. Thanks!
---
GitHub user willb opened a pull request:
https://github.com/apache/spark/pull/1136
SPARK-2180: support HAVING clauses in Hive queries
This PR extends Spark's HiveQL support to handle HAVING clauses in
aggregations. The HAVING test from the Hive compatibility suite doesn't appear
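For readers following along: a HAVING clause is conventionally planned as a filter applied to the result of the aggregation. A minimal sketch of that plan shape (illustrative classes, not Spark's real `LogicalPlan` hierarchy):

```scala
// Sketch of the plan shape a HAVING clause produces: Filter over Aggregate.
sealed trait Plan
case class Table(name: String) extends Plan
case class Aggregate(groupBy: Seq[String], output: Seq[String], child: Plan) extends Plan
case class Filter(condition: String, child: Plan) extends Plan

// SELECT key FROM src GROUP BY key HAVING max(value) > "val_255"
val plan: Plan = Filter("max(value) > \"val_255\"",
  Aggregate(groupBy = Seq("key"), output = Seq("key", "max(value)"), child = Table("src")))
```

The key point is that the HAVING predicate lives above the aggregate, unlike a WHERE predicate, which filters rows before they are grouped.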
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46603025
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46603044
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46611573
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15917/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46611571
Merged build finished. All automated tests passed.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46612533
Any idea why the having test from Hive is not runnable?
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46615691
@rxin, I'm not 100% sure but I think it's a problem with local map/reduce
(the stack trace isn't too informative, but it's the same as the one for tests
that are
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46618244
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46618437
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15925/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46618436
Merged build finished.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46620483
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46620491
Merged build started.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46627231
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15928/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46627228
Merged build finished. All automated tests passed.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46634535
Thanks, @willb. There is at least one problem I found: I think you'd need
to add a cast to the HAVING expression. Otherwise, try running the following:
`select key,`
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46635173
To be more specific, I think you can always add a cast that casts the HAVING
expression to boolean, and then SimplifyCasts in the optimizer would remove
the cast wherever it is unnecessary.
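The approach described here, always wrapping the HAVING predicate in a cast to boolean and letting a SimplifyCasts-style rule drop the no-op casts, can be sketched with toy classes (illustrative names, not Spark's actual Catalyst API):

```scala
// Toy expression tree: unconditionally insert the boolean cast, then simplify.
sealed trait DataType
case object BooleanType extends DataType
case object IntegerType extends DataType

sealed trait Expression { def dataType: DataType }
case class Attribute(name: String, dataType: DataType) extends Expression
case class Cast(child: Expression, dataType: DataType) extends Expression

// Always cast the HAVING predicate to boolean, with no type check up front...
def havingCondition(pred: Expression): Expression = Cast(pred, BooleanType)

// ...and let an optimizer rule remove casts that are already no-ops, which
// also collapses accidental double casts like Cast(Cast(e, bool), bool).
def simplifyCasts(e: Expression): Expression = e match {
  case Cast(child, dt) if child.dataType == dt => simplifyCasts(child)
  case Cast(child, dt)                         => Cast(simplifyCasts(child), dt)
  case other                                   => other
}

val alreadyBoolean = havingCondition(Attribute("p", BooleanType))
val needsCast      = havingCondition(Attribute("n", IntegerType))
```

With this shape the analyzer never has to inspect the predicate's type when building the filter, matching the "always add it, no need to check" suggestion made later in the thread.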
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46635597
Thanks for the catch, @rxin! I'll make the change and add tests for it.
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46640150
So I've added a cast for cases in which a non-boolean expression is supplied
as the HAVING expression. It appears that `Cast(_, BooleanType)` isn't
idempotent, though -- if
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46642243
That's definitely a bug - I will take a look at it later.
---
Github user willb commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46642661
Thanks! I'm happy to put together a preliminary patch as well, but
probably won't be able to take a look until tomorrow morning.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46644761
I found the issue and fixed it. Will push out a pull request soon.
If you can just add the boolean cast (always add it - no need to check if
the type is already
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1136#issuecomment-46646244
Here's the patch: https://github.com/apache/spark/pull/1144
---