Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11807#issuecomment-198638752
**[Test build #53602 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53602/consoleFull)**
for PR 11807 at commit
Github user sameeragarwal commented on the pull request:
https://github.com/apache/spark/pull/11834#issuecomment-198637517
No (we made all the changes in https://github.com/apache/spark/pull/11799)
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11807#issuecomment-198638912
Merged build finished. Test PASSed.
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/11840
[SPARK-14019] [SQL] Remove noop SortOrder in Sort
## What changes were proposed in this pull request?
This PR is to add a new Optimizer rule for pruning Sort if its SortOrder is noop.
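As a hedged illustration of the general idea in this PR title (this is not the actual Catalyst rule; the types `Expr`, `Plan`, and the object `PruneNoopSortOrder` below are invented for the sketch), a bottom-up rewrite can drop sort keys that cannot affect row order, such as constant literals, and remove the `Sort` node entirely once no keys remain:

```scala
// Toy expression and plan trees standing in for Catalyst's Expression/LogicalPlan.
sealed trait Expr
case class Literal(value: Int) extends Expr
case class Attr(name: String) extends Expr

sealed trait Plan
case class Relation(name: String) extends Plan
case class Sort(order: Seq[Expr], child: Plan) extends Plan

object PruneNoopSortOrder {
  // A sort key that is a constant never changes the ordering of rows.
  private def isNoop(e: Expr): Boolean = e match {
    case _: Literal => true
    case _          => false
  }

  def apply(plan: Plan): Plan = plan match {
    case Sort(order, child) =>
      val kept = order.filterNot(isNoop)
      if (kept.isEmpty) apply(child)   // every key was a no-op: prune the Sort
      else Sort(kept, apply(child))    // keep only the meaningful keys
    case other => other
  }
}
```

In Catalyst itself this would presumably be expressed as a `Rule[LogicalPlan]` that inspects the `SortOrder` children (e.g. foldable expressions); the toy above only captures the pruning shape.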
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11840#issuecomment-198646675
**[Test build #53605 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53605/consoleFull)**
for PR 11840 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11828#issuecomment-198647761
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11828#issuecomment-198647760
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11828#issuecomment-198647714
**[Test build #53600 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53600/consoleFull)**
for PR 11828 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11839#issuecomment-198635870
**[Test build #53603 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53603/consoleFull)**
for PR 11839 at commit
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/11841
[SPARK-13897][SQL] RelationalGroupedDataset and KeyValueGroupedDataset
## What changes were proposed in this pull request?
Previously, Dataset.groupBy returns a GroupedData, and Dataset.groupByKey
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11722#issuecomment-198640266
Merged build finished. Test FAILed.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-198647907
cc @liancheng and @sameeragarwal
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/11840#discussion_r56745083
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -826,6 +827,17 @@ object CombineFilters extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-198648078
**[Test build #53606 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53606/consoleFull)**
for PR 11841 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/11817#issuecomment-198648163
Rethinking this issue, I don't think the issue described in the JIRA is related to pushdown of limit, because the latest CollectLimit only takes a few rows (here is only 1
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/11834#issuecomment-198634733
Did any code change?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/11840#discussion_r56745107
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -826,6 +827,17 @@ object CombineFilters extends
Github user viirya closed the pull request at:
https://github.com/apache/spark/pull/11817
Github user yinxusen commented on the pull request:
https://github.com/apache/spark/pull/11835#issuecomment-198634432
@jkbradley
MiMa tests failed for changing to the `StageArrayParam`. But I think we
need a new Param like `ArrayParam[T]` with the Java compatible `w`
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/11791#issuecomment-198648221
Still WIP?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-198648504
**[Test build #53606 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53606/consoleFull)**
for PR 11841 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-198648514
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11838#issuecomment-198648575
Thank you, @rxin.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-198648512
Merged build finished. Test FAILed.
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/11815#issuecomment-198634566
LGTM except 2 minor comments
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-198648728
**[Test build #53607 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53607/consoleFull)**
for PR 11841 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/11840#discussion_r56745275
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -826,6 +827,17 @@ object CombineFilters extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/11840#discussion_r56745280
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -826,6 +827,17 @@ object CombineFilters extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11840#issuecomment-198648880
**[Test build #53608 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53608/consoleFull)**
for PR 11840 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/11841#discussion_r56745324
--- Diff: project/MimaExcludes.scala ---
@@ -315,6 +315,7 @@ object MimaExcludes {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11831#issuecomment-198648881
**[Test build #53609 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53609/consoleFull)**
for PR 11831 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11805#issuecomment-198142117
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11750#discussion_r56425133
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -211,8 +214,7 @@ case class CatalogTablePartition(
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/11208#issuecomment-197635547
cc @yhuai
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11737#issuecomment-197702816
**[Test build #53394 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53394/consoleFull)**
for PR 11737 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11735#issuecomment-197351217
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user antonoal opened a pull request:
https://github.com/apache/spark/pull/11777
Added transitive closure transformation to Catalyst
## What changes were proposed in this pull request?
A relatively simple transformation is missing from Catalyst's arsenal -
generation of
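The PR description is truncated, but "transitive closure" over predicates typically means deriving implied constraints, e.g. inferring `a = c` from `a = b` and `b = c` so the optimizer can exploit the derived condition. A minimal sketch of that fixed-point computation, assuming plain string-named columns (all names below are illustrative, not taken from the PR):

```scala
// Fixed-point transitive closure over equality pairs: keep joining pairs
// that share an endpoint until no new pair is produced.
def transitiveClosure(eqs: Set[(String, String)]): Set[(String, String)] = {
  var closure = eqs
  var changed = true
  while (changed) {
    val derived = for {
      (a, b) <- closure
      (c, d) <- closure
      if b == c && a != d   // chain a = b with b = d, skip trivial a = a
    } yield (a, d)
    val next = closure ++ derived
    changed = next.size != closure.size
    closure = next
  }
  closure
}
```

Given `a = b`, `b = c`, and `c = 1`, this derives `a = c`, `b = 1`, and `a = 1`; an optimizer rule built on this idea could then push the derived constant predicates down to the base relations.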
Github user shaneknapp commented on the pull request:
https://github.com/apache/spark/pull/11652#issuecomment-197597252
i'm currently installing the latest lintr on all of our jenkins workers.
this should finish in ~15 mins
Github user tomwhite commented on the pull request:
https://github.com/apache/spark/pull/11806#issuecomment-198358199
I agree that BLOCK is always to be preferred over RECORD, so leave it at
BLOCK. RECORD is the default in Hadoop 1 and 2 (for backwards compatibility
reasons), but
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11297#issuecomment-197749675
**[Test build #53397 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53397/consoleFull)**
for PR 11297 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11750#issuecomment-197596671
**[Test build #2646 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2646/consoleFull)**
for PR 11750 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/11486#discussion_r56623549
--- Diff: R/pkg/R/mllib.R ---
@@ -71,14 +71,23 @@ setMethod("glm", signature(formula = "formula", family
= "ANY", data = "DataFram
#' @rdname
Github user sethah commented on the pull request:
https://github.com/apache/spark/pull/9008#issuecomment-198428763
cc @MLnick thoughts on the above comments?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11817#issuecomment-198370581
**[Test build #53539 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53539/consoleFull)**
for PR 11817 at commit
Github user yy2016 commented on the pull request:
https://github.com/apache/spark/pull/11787#issuecomment-198053603
I wonder why OneRowRelation isn't covered by the following import?
```
import org.apache.spark.sql.catalyst.plans.logical._
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11621#issuecomment-198019144
**[Test build #53449 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53449/consoleFull)**
for PR 11621 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11794#discussion_r56690412
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/ShuffleExternalSorter.java ---
@@ -320,7 +320,15 @@ private void growPointerArrayIfNecessary()
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/11770#issuecomment-197572471
BTW, if you're going to consider changing `SparkEnv` then I'd remove the
deprecated methods from back when it used to be a thread-local.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11784#issuecomment-197916455
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/11841#issuecomment-19864
LGTM except for one MiMA check question.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11836#discussion_r56745379
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveCatalog.scala ---
@@ -182,13 +189,15 @@ private[spark] class HiveCatalog(client:
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11836#discussion_r56745391
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -81,15 +83,31 @@ class HiveContext private[hive](
sc:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11843#issuecomment-198667909
Build finished. Test FAILed.
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11759#discussion_r56455662
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -218,48 +218,64 @@ abstract class SparkPlan extends
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11636#discussion_r56364150
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/BoundAttribute.scala
---
@@ -60,17 +60,28 @@ case class
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11807#issuecomment-198147477
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11636#issuecomment-198218038
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11788#issuecomment-197994373
**[Test build #53437 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53437/consoleFull)**
for PR 11788 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11840#issuecomment-198667805
**[Test build #53608 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53608/consoleFull)**
for PR 11840 at commit
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/11783#issuecomment-198013182
test this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11843#issuecomment-198667908
**[Test build #53613 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53613/consoleFull)**
for PR 11843 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11843#issuecomment-198667910
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/11806#issuecomment-198206078
Actually, I did not understand why the overhead of compression at the record level (by record I mean a row in Spark, or a key-value pair in a Hadoop output format) would be very high. I
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11701#issuecomment-197480732
Merged build finished. Test FAILed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11620#issuecomment-197539250
**[Test build #53347 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53347/consoleFull)**
for PR 11620 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11447#issuecomment-197898514
**[Test build #53433 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53433/consoleFull)**
for PR 11447 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11843#issuecomment-198667857
**[Test build #53613 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53613/consoleFull)**
for PR 11843 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11840#issuecomment-198667845
Merged build finished. Test PASSed.
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/11843
[SPARK-14021][SQL][WIP] custom context support for SparkSQLEnv
## What changes were proposed in this pull request?
This is to create a custom context for command `bin/spark-sql` and
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11447#issuecomment-197925454
Merged build finished. Test PASSed.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/11806#issuecomment-198201760
I see.. Should I maybe close this?
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/11782#issuecomment-197982710
Once we have gone through the failed tests, can we make a summary (on root causes of failed tests)?
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/11108#issuecomment-198004940
I agree. But we don't really need lines just to print empty lines. For
example:
~~~scala
println(goodnessOfFitTestResult)
println()
~~~
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10231#issuecomment-198504535
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11801#issuecomment-198162340
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11833#issuecomment-198594547
**[Test build #53573 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53573/consoleFull)**
for PR 11833 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11767#issuecomment-197490629
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11782#issuecomment-197902493
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11769#issuecomment-197512871
**[Test build #53340 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53340/consoleFull)**
for PR 11769 at commit
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/11765#issuecomment-197514258
cc @marmbrus @sameeragarwal @cloud-fan @nongli
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11722#issuecomment-197272461
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/11636#discussion_r56482715
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -158,9 +158,13 @@ class CodegenContext {
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11636#discussion_r56408960
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/BoundAttribute.scala
---
@@ -60,17 +60,28 @@ case class
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11798#issuecomment-198209396
Merged build finished. Test PASSed.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11768#discussion_r56408062
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/SQLBuilder.scala ---
@@ -333,6 +360,9 @@ class SQLBuilder(logicalPlan: LogicalPlan, sqlContext:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11764#issuecomment-197681396
**[Test build #53380 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53380/consoleFull)**
for PR 11764 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11735#issuecomment-197277579
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user olarayej commented on the pull request:
https://github.com/apache/spark/pull/11318#issuecomment-197566393
@felixcheung @shivaram @sun-rui I have addressed all your comments. Do we
have a consensus on the default value for drop? I'd say drop=T makes sense cuz
R does it
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11750#issuecomment-197620171
**[Test build #53358 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53358/consoleFull)**
for PR 11750 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/11663#discussion_r56406681
--- Diff: python/pyspark/ml/param/_shared_params_code_gen.py ---
@@ -105,64 +104,71 @@ def get$Name(self):
if __name__ == "__main__":
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/11721#issuecomment-198513672
LGTM, merging to master. Thanks!
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11722#issuecomment-198268868
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user Astralidea closed the pull request at:
https://github.com/apache/spark/pull/10770
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11776#discussion_r56481486
--- Diff:
examples/src/main/java/org/apache/spark/examples/mllib/JavaStreamingTestExample.java
---
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/11806#issuecomment-198202709
Yea - until we can figure out what it actually means, I'd close this for
now.
cc @tomwhite - maybe you can shed some light on what "record" means here?
Github user shaneknapp commented on the pull request:
https://github.com/apache/spark/pull/11652#issuecomment-197601736
ok done
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/11806#issuecomment-198201470
Am I misunderstanding it? It seems insane to run compression at the record level because the overhead is very high.
Github user skambha commented on a diff in the pull request:
https://github.com/apache/spark/pull/11775#discussion_r56551299
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -205,7 +205,17 @@ case class DataSource(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11843#issuecomment-198669318
**[Test build #53614 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53614/consoleFull)**
for PR 11843 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10355#issuecomment-198113198
**[Test build #53454 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53454/consoleFull)**
for PR 10355 at commit