Github user rxin closed the pull request at:
https://github.com/apache/spark/pull/1152
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46941044
@mateiz I submitted a patch to core's Aggregator in #1191.
After implementing it in Aggregator, I realized it might be hard for Spark
SQL to reuse Aggregator unless
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46768096
Spark SQL doesn't currently use the aggregator, but we would want to do
that.
---
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/1152#discussion_r14052590
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/Aggregate.scala ---
@@ -168,9 +174,22 @@ case class Aggregate(
---
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/1152
[SPARK-1412][SQL] Disable partial aggregation automatically when reduction
factor is low - WIP
This is just a prototype. Kinda ugly, doesn't properly connect with the
config system yet, and have no
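The idea in the title can be sketched roughly as follows. This is a hypothetical illustration, not the actual patch: sample the first rows during map-side aggregation, and if the distinct-key ratio stays high (i.e. the reduction factor is low), stop aggregating and pass the remaining rows through for the reduce side to handle. All names (`PartialAggSketch`, `sampleSize`, `maxKeyRatio`) are made up for this sketch.

```scala
// Hypothetical sketch of disabling partial (map-side) aggregation when it
// is not reducing data volume. Names are illustrative, not from the patch.
object PartialAggSketch {
  def aggregate[K, V](
      rows: Iterator[(K, V)],
      merge: (V, V) => V,
      sampleSize: Int = 1000,
      maxKeyRatio: Double = 0.5): Iterator[(K, V)] = {
    val map = scala.collection.mutable.HashMap.empty[K, V]
    var seen = 0
    var abort = false
    while (rows.hasNext && !abort) {
      val (k, v) = rows.next()
      map(k) = map.get(k).map(merge(_, v)).getOrElse(v)
      seen += 1
      // After a sample, check the reduction factor: if nearly every input
      // row produced a distinct key, partial aggregation is not helping.
      if (seen == sampleSize && map.size.toDouble / seen > maxKeyRatio) {
        abort = true
      }
    }
    // Emit what was aggregated so far, then pass the remaining rows through
    // unaggregated; the reduce side merges everything anyway.
    map.iterator ++ rows
  }
}
```

This is safe because the reduce-side aggregation already has to merge partial results, so emitting un-combined pairs only costs shuffle volume, which is exactly what the sampled check bounds.
---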
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46654388
@concretevitamin I find it hard to actually use config options in a
physical operator. Any suggestions?
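One generic way around this difficulty, sketched here with hypothetical names (not Spark's actual API or config key), is to resolve the option once on the driver at planning time and bake the value into the physical operator as a plain constructor field, so it serializes with the operator and `execute()` never needs a context lookup:

```scala
// Hypothetical sketch: read the config while planning (driver side) and
// capture the resolved value in the operator. Names are illustrative.
case class Settings(options: Map[String, String]) {
  def getDouble(key: String, default: Double): Double =
    options.get(key).map(_.toDouble).getOrElse(default)
}

case class AggregateOp(partialAggCutoff: Double) {
  // The captured value travels with the serialized operator; no config
  // system is consulted here.
  def execute(): Double = partialAggCutoff
}

object Planner {
  def plan(settings: Settings): AggregateOp =
    AggregateOp(settings.getDouble("spark.sql.partialAggregationCutoff", 0.5))
}
```

The trade-off is that the value is frozen at planning time, so a user changing the option after the plan is built would not affect already-planned operators.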
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46654458
Merged build triggered.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46654473
Merged build started.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46654585
@pwendell / @mateiz should we actually build this into Spark directly (i.e.
in Aggregator)?
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46660386
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15952/
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46660384
Merged build finished. All automated tests passed.
---
Github user concretevitamin commented on the pull request:
https://github.com/apache/spark/pull/1152#issuecomment-46709082
@rxin If we are simply trying to read the default values for the params,
but not user-set ones (i.e. in the absence of a `SQLContext` in
`execute()`), I think we