Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
@viirya Thanks for your comment! Actually, that's what I want to get some
feedback on from @marmbrus .
It seems forcing the whole schema to be nullable is already happening when you
read/write da
Github user lins05 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13248#discussion_r70198232
--- Diff: python/pyspark/ml/stat/distribution.py ---
@@ -0,0 +1,267 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+#
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70198204
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(val
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/14048
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14048
Hmm. Okay, I didn't prevent them all.
I see. I'll close.
Thank you for the decision, @cloud-fan .
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14048
```
case Union(children) if children.forall(x =>
x.isInstanceOf[InsertIntoTable] ||
x.isInstanceOf[InsertIntoHadoopFsRelationCommand]) =>
```
This doesn't indicate a multi-insert, right?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14048
Ah, I see. You mean Union of `INSERT INTO`s, right?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14048
This PR fixes that with minimal effort.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14034
You missed one comment:
https://github.com/apache/spark/pull/14034/files#r70183958 :)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13704
**[Test build #62071 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62071/consoleFull)**
for PR 13704 at commit
[`66800fa`](https://github.com/apache/spark/commit/6
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14048
Ur, what do you mean?
> With your patch, we can still create union queries with side effect which
will be executed eagerly.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14048
The current one looks like this.
```
case Union(children) if children.forall(x =>
x.isInstanceOf[InsertIntoTable] ||
x.isInstanceOf[InsertIntoHadoopFsRelationCommand]) =>
```
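The guard in the Scala snippet above can be sketched in plain Python as a toy illustration (this is not Catalyst; the class hierarchy here is a stand-in mirroring the real plan node names): a `Union` counts as a multi-insert only when every child is an insert command.

```python
# Toy stand-ins for Catalyst logical plan nodes (illustrative only).
class LogicalPlan:
    pass

class InsertIntoTable(LogicalPlan):
    pass

class InsertIntoHadoopFsRelationCommand(LogicalPlan):
    pass

class Project(LogicalPlan):  # any non-insert child
    pass

class Union(LogicalPlan):
    def __init__(self, children):
        self.children = children

def is_multi_insert(plan):
    # Mirrors: case Union(children) if children.forall(x =>
    #   x.isInstanceOf[InsertIntoTable] || x.isInstanceOf[InsertIntoHadoopFsRelationCommand])
    return isinstance(plan, Union) and all(
        isinstance(c, (InsertIntoTable, InsertIntoHadoopFsRelationCommand))
        for c in plan.children
    )

print(is_multi_insert(Union([InsertIntoTable(), InsertIntoHadoopFsRelationCommand()])))  # True
print(is_multi_insert(Union([InsertIntoTable(), Project()])))  # False
```

A `Union` with any non-insert child falls through the guard, which is the distinction being discussed.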
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14123
should we wait for https://github.com/apache/spark/pull/14071?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14128
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62066/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14128
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70197828
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14128
**[Test build #62066 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62066/consoleFull)**
for PR 14128 at commit
[`86e9d12`](https://github.com/apache/spark/commit/
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70197819
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(val
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70197694
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13991
**[Test build #62070 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62070/consoleFull)**
for PR 13991 at commit
[`0c60d87`](https://github.com/apache/spark/commit/0
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13991
LGTM, pending jenkins
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13991
retest this please
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70197480
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(val
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14034
**[Test build #62069 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62069/consoleFull)**
for PR 14034 at commit
[`dec5ad9`](https://github.com/apache/spark/commit/d
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70197423
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(val
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14048
It's good to eliminate the inconsistency, but I don't think there is an
easy fix. With your patch, we can still create union queries with side effects
that will be executed eagerly. We use `Union`
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13990
@rxin no need, I will update this today.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14124
@HyukjinKwon Your patch solves this inconsistency by forcing the whole schema
to be nullable. However, it looks like the Parquet case is for compatibility; is
this the same for JSON?
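What "forcing the schema as nullable" amounts to can be sketched with a toy schema representation — a list of `(name, type, nullable)` tuples used purely for illustration, not Spark's `StructType`: on read, every field's nullability is widened to true, so a user-supplied `nullable=False` is silently dropped.

```python
def force_nullable(schema):
    # Widen every field to nullable, the way the reader under discussion
    # normalizes schemas; user-specified non-nullability is lost.
    return [(name, dtype, True) for (name, dtype, _nullable) in schema]

user_schema = [("a", "int", False), ("b", "string", True)]
read_schema = force_nullable(user_schema)
print(read_schema)  # [('a', 'int', True), ('b', 'string', True)]
```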
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14123
**[Test build #62068 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62068/consoleFull)**
for PR 14123 at commit
[`082040f`](https://github.com/apache/spark/commit/0
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13778#discussion_r70196570
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -346,14 +346,47 @@ case class LambdaVariable(
Github user lw-lin commented on the issue:
https://github.com/apache/spark/pull/14118
I think @HyukjinKwon has made a good point: it's kind of strange that null
strings can be written out but cannot be read back as nulls.
So for `StringType`:
nulls writ
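Setting Spark aside, the roundtrip asymmetry can be sketched with Python's `csv` module. The sentinel string below is an assumption for illustration, standing in for the role the CSV source's `nullValue` option plays: a null written out as a sentinel cannot come back as a null unless the reader also decodes it.

```python
import csv
import io

NULL_VALUE = "NULL"  # illustrative sentinel, standing in for the nullValue option

rows = [["a", None], ["b", "x"]]

# Writing: encode None as the sentinel string.
buf = io.StringIO()
writer = csv.writer(buf)
for row in rows:
    writer.writerow([NULL_VALUE if v is None else v for v in row])

# Reading without decoding: the null comes back as a plain string.
raw = list(csv.reader(io.StringIO(buf.getvalue())))
print(raw[0][1])  # NULL  (a string, no longer None)

# Reading with decoding: map the sentinel back to None, restoring the roundtrip.
decoded = [[None if v == NULL_VALUE else v for v in row] for row in raw]
print(decoded[0][1])  # None
```

The second read is the behavior being argued for: the writer and reader agree on the sentinel, so nulls survive the roundtrip.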
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13778
@cloud-fan I just checked the Python UDT. On the Python side, we serialize
the Python UDT to binary; the Python UDT passed to Java includes the binary.
Then on the Python side, in the worker, we will des
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
I am a bit confused about whether we are allowed to read JSON (via the `json(jsonRDD:
RDD[String])` API) with a schema whose fields have `nullable` set to `false`.
If it is meant to be disallowed, this issue wil
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14034#discussion_r70194607
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -660,18 +660,51 @@ class SQLQuerySuite extends QueryTest with
SharedSQL
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14034#discussion_r70194599
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/StatisticsSuite.scala ---
@@ -31,4 +33,46 @@ class StatisticsSuite extends QueryTest with
SharedSQ
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14034#discussion_r70194595
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/StatisticsSuite.scala ---
@@ -31,4 +33,46 @@ class StatisticsSuite extends QueryTest with
SharedSQ
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14124
@HyukjinKwon No matter whether this PR is merged or not, I still think we
should fix the above issue. Silent conversion does not look good to me.
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/14038
ping @rxin
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/13852
Yes. Currently, we have three functions with `supportPartial=false`:
`hive_udaf`, `collect`, and window functions. The former two functions cannot
support this because we do not have byte-backed muta
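The `supportPartial` distinction under discussion can be sketched in plain Python (illustrative, not Spark's aggregation code): a partial-aggregation-friendly function has a small intermediate state that merges across partitions, whereas a collect-style function's "partial state" is the partition's data itself, so splitting the work saves nothing.

```python
partitions = [[1, 2, 3], [4, 5], [6]]

# sum supports partial aggregation: each partition produces a small
# partial state (its sum), and the partials merge associatively.
partials = [sum(p) for p in partitions]
total = sum(partials)
print(total)  # 21
assert total == sum(x for p in partitions for x in p)

# A collect-style aggregate has no smaller partial state: the "partial"
# result of a partition is all of its rows, so partial aggregation buys
# nothing -- the motivation for flagging such functions supportPartial=false.
collected = [x for p in partitions for x in p]
print(collected)  # [1, 2, 3, 4, 5, 6]
```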
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14114
**[Test build #62067 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62067/consoleFull)**
for PR 14114 at commit
[`af6692f`](https://github.com/apache/spark/commit/a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14129
Can one of the admins verify this patch?
---
GitHub user tilumi opened a pull request:
https://github.com/apache/spark/pull/14129
[SPARK-16280][SQL][WIP] add HistogramNumeric Expression
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70194370
--- Diff: docs/sparkr.md ---
@@ -306,6 +306,64 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset grouping
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13704#discussion_r70194231
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2018,6 +2018,8 @@ class Analyzer(
fai
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Now, all tests pass.
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/13704#discussion_r70193764
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -479,7 +479,7 @@ case class Cast(child: Expression, dataType
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13852
Do we have two requirements here?
One is whether an aggregate function supports partial aggregation, and the
other is whether the order should be enforced, right?
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/13852
You mean we need two functions: `supportPartial` and `forceSortAggregate`?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13990
cc @dongjoon-hyun or @petermaxlee want to take over the pull request and
bring it to completion?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13852
Are we overloading the semantics? I think it's actually useful to have a
`supportsPartial`, which is what this was for.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13494
@hvanhovell can you take a look at this too?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14118
IMHO, handling `StringType` at least lets users handle `null`s in a
write/read roundtrip. CSV writes `null` according to `nullValue`
[here](https://github.com/apache/spark/blob/38cf8
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14128
**[Test build #62066 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62066/consoleFull)**
for PR 14128 at commit
[`86e9d12`](https://github.com/apache/spark/commit/8
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13704#discussion_r70193070
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -479,7 +479,7 @@ case class Cast(child: Expression, data
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/14128
[SPARK-16476] Restructure MimaExcludes for easier union excludes
## What changes were proposed in this pull request?
It is currently fairly difficult to have proper mima excludes when we cut a
ver
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14127
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14127
Thanks - merging in master.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14114
Thank you for the comments! Both comments are tightly related to each other.
I see your point. Yep, implicit use of the "" string is not a good
idea. I'll fix it again according to the advic
Github user praveendareddy21 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13248#discussion_r70192124
--- Diff: python/pyspark/ml/stat/distribution.py ---
@@ -0,0 +1,267 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or mo
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14054
I do feel it'd be useful to be configurable, but go ahead. As you said we
can make it configurable later.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14118
Thanks for the information. I'm still confused. From an end-user
perspective, do we need to handle StringType there?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14115
This looks alright.
cc @hvanhovell can you review and merge?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14114#discussion_r70191915
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -138,7 +138,7 @@ class CatalogImpl(sparkSession: SparkSession) extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14114#discussion_r70191775
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -423,12 +442,13 @@ class SessionCatalog(
* cont
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62065/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62065 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62065/consoleFull)**
for PR 14116 at commit
[`e6e96eb`](https://github.com/apache/spark/commit/
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
Oh, I see. Before this patch:
```
+---+
| a|
+---+
| 1|
| 0|
+---+
```
After this patch:
```
++
| a|
++
| 1
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62064/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14116
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62064 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62064/consoleFull)**
for PR 14116 at commit
[`a645410`](https://github.com/apache/spark/commit/
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14124
Ah, yes, it seems like a bug to me. I thought it threw an exception in that
case. Does this PR introduce the problem? (Just curious and to be sure.)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62065 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62065/consoleFull)**
for PR 14116 at commit
[`e6e96eb`](https://github.com/apache/spark/commit/e
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
Locally, the last commit passes the R tests, too. Now I think I've finished
my first implementation.
While waiting for #14114 and #14115 , I'll move on to other new issues.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #62064 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62064/consoleFull)**
for PR 14116 at commit
[`a645410`](https://github.com/apache/spark/commit/a
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14114
Hi, @rxin .
Could you review this when you have some time?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14115
After merging this, I'll add the same prevention for `information_schema`,
too.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14127
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14127
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62063/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14127
**[Test build #62063 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62063/consoleFull)**
for PR 14127 at commit
[`2c702c5`](https://github.com/apache/spark/commit/
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14115
Hi, @rxin .
Could you review this again?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14115
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62062/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14115
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14115
**[Test build #62062 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62062/consoleFull)**
for PR 14115 at commit
[`1702c7e`](https://github.com/apache/spark/commit/
Github user phalodi commented on the issue:
https://github.com/apache/spark/pull/14104
@srowen @rxin Please review it and give your comments.
![screenshot from 2016-07-11
01-08-21](https://cloud.githubusercontent.com/assets/8075390/16715663/1305e0f6-4704-11e6-90e0-15f76a7bb1f8
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14116
`SparkR` failures are due to my TODO item.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/14127
[Here](http://janino-compiler.github.io/janino/changelog.html) is the change
log of Janino. Updates from 2.7.8 are the following:
- JANINO-186: Code size error message should include method nam
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14127
**[Test build #62063 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62063/consoleFull)**
for PR 14127 at commit
[`2c702c5`](https://github.com/apache/spark/commit/2
Github user phalodi commented on the issue:
https://github.com/apache/spark/pull/14104
@srowen I made the changes you suggested: I removed the table, added a single
line below the actions table, and changed the statement to be clearer about
non-blocking.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14115
**[Test build #62062 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62062/consoleFull)**
for PR 14115 at commit
[`1702c7e`](https://github.com/apache/spark/commit/1
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14115
Thank you, @srowen . It's done!
---
Github user phalodi commented on the issue:
https://github.com/apache/spark/pull/14104
@srowen OK, no problem. So, per your suggestion, should I remove the table,
just add the line below the actions table, and link to the Scala and Java docs?
And the line "Spark provide asynchronous actio
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14097
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62061/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14097
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14097
**[Test build #62061 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62061/consoleFull)**
for PR 14097 at commit
[`55dc2a2`](https://github.com/apache/spark/commit/
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14115#discussion_r70186581
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -49,6 +49,8 @@ class SessionCatalog(
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14115#discussion_r70186380
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -49,6 +49,8 @@ class SessionCatalog(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14097
**[Test build #62061 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62061/consoleFull)**
for PR 14097 at commit
[`55dc2a2`](https://github.com/apache/spark/commit/5
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14115#discussion_r70186339
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -49,6 +49,8 @@ class SessionCatalog(
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14086
Thank you for the review, @srowen . Currently, the `truncate` option will be
ignored for other non-JDBC sources.
IMO, most of them are **file-based** sources, so they do not need this.
I