Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14119
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14175
**[Test build #62225 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62225/consoleFull)**
for PR 14175 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/14119
LGTM, I've merged this to master and branch-2.0. Thanks for working on this!
I only observed one weird rendering caused by the blank lines before `{%
include_example %}`, maybe my local
GitHub user sun-rui opened a pull request:
https://github.com/apache/spark/pull/14175
[SPARK-16522][MESOS] Spark application throws exception on exit.
## What changes were proposed in this pull request?
Spark applications running on Mesos throw exception upon exit. For details,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14174
**[Test build #6 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14174 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62223 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62223/consoleFull)**
for PR 14165 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14165
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14165
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62223/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14174
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/6/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14174
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14174
**[Test build #6 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14174 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14036
**[Test build #62224 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62224/consoleFull)**
for PR 14036 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62223 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62223/consoleFull)**
for PR 14165 at commit
GitHub user ooq opened a pull request:
https://github.com/apache/spark/pull/14174
[SPARK-16524][SQL] Add RowBatch and RowBasedHashMapGenerator
## What changes were proposed in this pull request?
This PR is the first step for the following feature:
For hash
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@cloud-fan Done
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62221 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62221/consoleFull)**
for PR 14165 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62220 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62220/consoleFull)**
for PR 14165 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14036
LGTM except 2 naming comments, thanks for working on it!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70580188
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -207,20 +207,12 @@ case class Multiply(left:
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70580079
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -285,6 +278,28 @@ case class Divide(left:
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14111
@cloud-fan At first I implemented it the way you said. But the
following situation, which has a broadcast join, hits the error 'ScalarSubquery
has not finished'; example (from SPARK-14791):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62213/
Test PASSed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14148
It's easy to infer the schema once when we create the table and store it
in the external catalog. However, it's a breaking change, which means users can't
change the underlying data file schema
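The trade-off being discussed (inferring the schema once at table creation and persisting it in the catalog, versus re-inferring from the data files on each read) can be illustrated with a toy, Spark-free sketch; the `infer_schema` helper and `catalog` dict below are hypothetical stand-ins, not Spark APIs:

```python
# Toy illustration (not Spark code): persist an inferred schema at table
# creation time instead of re-inferring it from the data files on every read.

def infer_schema(rows):
    """Infer a column-name -> type-name mapping from sample rows (hypothetical helper)."""
    schema = {}
    for row in rows:
        for col, value in row.items():
            schema.setdefault(col, type(value).__name__)
    return schema

catalog = {}  # stands in for the external catalog

def create_table(name, rows):
    # Schema inference runs exactly once, at creation, and the result is stored.
    catalog[name] = {"schema": infer_schema(rows), "rows": rows}

def read_table(name):
    # Reads reuse the stored schema; no per-read inference.
    entry = catalog[name]
    return entry["schema"], entry["rows"]

create_table("t", [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}])
schema, _ = read_table("t")
print(schema)  # {'id': 'int', 'name': 'str'}
```

The breaking-change concern in the thread follows directly from this shape: once the schema is stored, changing the underlying files leaves the stored schema stale until it is explicitly refreshed.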
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14036
**[Test build #62213 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62213/consoleFull)**
for PR 14036 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70578153
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62212/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Merged build finished. Test PASSed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13701
@yhuai OK. Thanks for letting me know that.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14036
**[Test build #62212 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62212/consoleFull)**
for PR 14036 at commit
Github user subrotosanyal commented on the issue:
https://github.com/apache/spark/pull/13658
hi @vanzin
I am also surprised that notify was somehow not triggered.
> Is your code perhaps setting "spark.master" to "local" or something that
is not "yarn-cluster"
Github user lins05 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14165#discussion_r70575753
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -79,6 +79,9 @@ class SparkSession private(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14172
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62214/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14172
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14172
**[Test build #62214 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62214/consoleFull)**
for PR 14172 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14152#discussion_r70575075
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/Checkpoint.scala ---
@@ -18,8 +18,8 @@
package org.apache.spark.streaming
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
Tomorrow, I will try to dig deeper into it and check whether schema evolution
could be an issue if the schema is fixed when creating tables.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
uh... I see what you mean. Agree.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14148
I was not talking about caching here. Caching is transient. I want the
behavior to be the same regardless of how many times I'm restarting Spark ...
And this has nothing to do with refresh.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70573373
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table:
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
@rxin Currently, we do not re-run schema inference when the metadata
cache contains the plan. Based on my understanding, that is the major reason
why we introduced the metadata cache at the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62218/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62218 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62218/consoleFull)**
for PR 14173 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62218 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62218/consoleFull)**
for PR 14173 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14065
**[Test build #62219 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62219/consoleFull)**
for PR 14065 at commit
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70571914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table:
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14148
@cloud-fan, @gatorsmile, and @yhuai - how difficult would it be to change
Spark so that it runs schema inference during table creation, and saves the
table schema when we create the table?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14171
LGTM
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14148
Thanks. Just FYI for future changes: when a table is added to the
catalog (regardless of whether it is temporary, non-temp, external, or internal), we
should save its schema. We should not rely on
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14148
**[Test build #62217 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62217/consoleFull)**
for PR 14148 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62216 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62216/consoleFull)**
for PR 14165 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14148
LGTM, pending jenkins
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
retest this please
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14165
LGTM pending Jenkins.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62215 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62215/consoleFull)**
for PR 14173 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62215/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62215 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62215/consoleFull)**
for PR 14173 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
cc @felixcheung @sun-rui @mengxr @junyangq
---
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/14173
[SPARKR][SPARK-16507] Add a CRAN checker, fix Rd aliases
## What changes were proposed in this pull request?
Add a check-cran.sh script that runs `R CMD check` as CRAN. Also fixes a
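For context, a minimal dry-run sketch of what a CRAN-style checker does; the `R/pkg` path and derived tarball name are assumptions for illustration, and the commands are printed rather than executed so the sketch stays self-contained without an R installation:

```shell
# Dry-run sketch (assumed layout): print the commands a check-cran.sh-style
# script would run to build the package and check it as CRAN does.
PKG_DIR="R/pkg"
echo "R CMD build ${PKG_DIR}"
echo "R CMD check --as-cran $(basename "${PKG_DIR}").tar.gz"
```

The `--as-cran` flag is what turns on the stricter set of checks CRAN applies at submission time, including the Rd alias checks this PR addresses.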
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14165#discussion_r70570816
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -79,6 +79,9 @@ class SparkSession private(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570674
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -431,7 +431,7 @@ case class DescribeTableCommand(table:
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570710
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -105,7 +105,7 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570551
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -105,7 +105,7 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -431,7 +431,7 @@ case class DescribeTableCommand(table:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13990
Build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13990
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62208/
Test PASSed.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13701
@viirya Thank you for updating this. Our schedules are pretty packed for
the release. We can take a look at it once 2.0 is released.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14165#discussion_r70569895
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -79,6 +79,9 @@ class SparkSession private(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13990
**[Test build #62208 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62208/consoleFull)**
for PR 13990 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14165#discussion_r70569883
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -79,6 +79,9 @@ class SparkSession private(
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13778
ping @cloud-fan Please see if this is ok for you now. Thanks.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
let me take another look to see if there is a better change.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14168
@sameeragarwal that's a good defensive measure.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14168
---
Github user sameeragarwal commented on the issue:
https://github.com/apache/spark/pull/14168
@ericl as an alternative, did you consider simply defining `boolean
${ev.isNull} = false;` for even the non-nullable branches in `nullSafeCodegen`
(e.g., here:
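The defensive measure suggested here (declaring the null flag even on non-nullable branches, so downstream generated code can reference it unconditionally) can be sketched with a toy, Spark-free code generator; the function and variable names below are illustrative, not Spark's actual `nullSafeCodegen`:

```python
# Toy sketch of the defensive-codegen idea: always declare the isNull flag,
# even for non-nullable expressions, so generated code that consumes the
# expression never references an undeclared variable.

def gen_expr_code(var, value_expr, nullable):
    lines = [f"boolean {var}_isNull = false;"]  # declared on every branch
    if nullable:
        # Only nullable expressions actually compute the flag.
        lines.append(f"{var}_isNull = ({value_expr} == null);")
    lines.append(f"Object {var} = {value_expr};")
    return "\n".join(lines)

print(gen_expr_code("col0", "row.get(0)", nullable=False))
# boolean col0_isNull = false;
# Object col0 = row.get(0);
```

The point of the suggestion is that consumers of the generated snippet can then emit `if (col0_isNull) ...` guards uniformly, without branching on nullability themselves.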
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14168
Thanks - merging in master/2.0.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13990
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62210/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13990
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13990
**[Test build #62210 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62210/consoleFull)**
for PR 13990 at commit