Github user chuanlei commented on the issue:
https://github.com/apache/spark/pull/15195
@marmbrus
Could you take a look at this PR?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80419919
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -420,17 +424,40 @@ case class DescribeTableCommand(table:
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15168
LGTM except two minor comments and pending tests. cc @hvanhovell
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80419521
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -341,6 +341,74 @@ class SQLQuerySuite extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80419524
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -420,17 +424,40 @@ case class DescribeTableCommand(table:
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80418906
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -341,6 +341,74 @@ class SQLQuerySuite extends QueryTest
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15231#discussion_r80417904
--- Diff: R/pkg/R/utils.R ---
@@ -698,6 +698,21 @@ isSparkRShell <- function() {
grepl(".*shell\\.R$", Sys.getenv("R_PROFILE_USER"), perl = TRUE)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15231#discussion_r80417016
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2624,6 +2624,15 @@ setMethod("except",
setMethod("write.df",
signature(df = "SparkDataFrame"),
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15231#discussion_r80418015
--- Diff: R/pkg/R/utils.R ---
@@ -698,6 +698,21 @@ isSparkRShell <- function() {
grepl(".*shell\\.R$", Sys.getenv("R_PROFILE_USER"), perl = TRUE)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15231#discussion_r80417594
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2624,6 +2624,15 @@ setMethod("except",
setMethod("write.df",
signature(df = "SparkDataFrame"),
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15231#discussion_r80417225
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2624,6 +2624,15 @@ setMethod("except",
setMethod("write.df",
signature(df = "SparkDataFrame"),
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15231#discussion_r80417020
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2624,6 +2624,15 @@ setMethod("except",
setMethod("write.df",
signature(df = "SparkDataFrame"),
Github user xwu0226 closed the pull request at:
https://github.com/apache/spark/pull/15107
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15230
**[Test build #65900 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65900/consoleFull)**
for PR 15230 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15232
Hm. Actually, I was wondering whether it really works, and I am still looking
into it.
My local `testthat` version is `1.0.2`, but it does not seem to work, as shown
below:
```r
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/15232
According to this fix,
https://github.com/hadley/testthat/commit/64036395cf0f2556cdd181dddb219e498f4be370
this was fixed in testthat v1.0.2? I recall we are running 1.1.0 on Jenkins?
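The version gate being discussed here — is the installed testthat at least the release containing the fix (1.0.2)? — can be sketched as a simple dotted-version comparison. This is a plain-Python illustration, not part of the SparkR test harness; `parse_version` and `at_least` are hypothetical helper names.

```python
def parse_version(v):
    # Normalize strings like "v.1.0.2" or "1.1.0" into comparable
    # integer tuples: "v.1.0.2" -> (1, 0, 2).
    return tuple(int(p) for p in v.strip().lstrip("v.").split("."))

def at_least(installed, required):
    # Tuple comparison is lexicographic, which matches how dotted
    # version components are ordered: (1, 1, 0) >= (1, 0, 2).
    return parse_version(installed) >= parse_version(required)
```

Under this sketch, the Jenkins version (1.1.0) would pass a `1.0.2` minimum, while anything older would fail it.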
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15168
**[Test build #65899 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65899/consoleFull)**
for PR 15168 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15211
**[Test build #65898 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65898/consoleFull)**
for PR 15211 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13599
**[Test build #65897 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65897/consoleFull)**
for PR 13599 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15168
I see. I'll prevent that for persistent views, too.
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/9
Ping @yinxusen for an update. We would like to have this merged soon so we can
work on the LiR and LoR parts. Thanks.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15168
I think we do not need an extra metastore call if we already know we do not
support partitions over views. You can check all the other DDL implementations;
we did the same thing.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80413543
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table:
Github user Yunni commented on the issue:
https://github.com/apache/spark/pull/15148
Thanks, @karlhigley! All of your comments were very helpful. I made some
changes to make it work. :)
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80413281
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table:
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80412806
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table:
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15168#discussion_r80412638
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table:
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15168
For example, like the following:
```
scala> sql("desc view1 partition (c='Us')").show
org.apache.spark.sql.AnalysisException: Partition spec is invalid. The spec
(c) must match
```
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15168
If we are talking about Hive views (non-partitioned persistent views), there
is no partition info because a view looks like a normal table. So, my PR
handles it the same way as normal tables.
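The behavior discussed in this thread — rejecting `DESC ... PARTITION` when the target is a view, without making an extra metastore call — can be sketched in plain Python. The real logic lives in Scala in `DescribeTableCommand`; the `TableType` enum and `describe_partition` function below are illustrative stand-ins, not Spark's API (only the `AnalysisException` name is borrowed from Spark).

```python
from enum import Enum

class TableType(Enum):
    MANAGED = "MANAGED"
    EXTERNAL = "EXTERNAL"
    VIEW = "VIEW"

class AnalysisException(Exception):
    """Stand-in for org.apache.spark.sql.AnalysisException."""

def describe_partition(table_name, table_type, partition_spec):
    # Views carry no partition metadata, so fail fast on the metadata
    # we already hold instead of issuing a metastore round-trip.
    if table_type is TableType.VIEW:
        raise AnalysisException(
            "DESC PARTITION is not allowed on a view: " + table_name)
    spec = ", ".join(f"{k}='{v}'" for k, v in partition_spec.items())
    return f"partition ({spec}) of table {table_name}"
```

With this shape, `describe_partition("view1", TableType.VIEW, {"c": "Us"})` raises the exception up front, while managed and external tables proceed to the (elided) partition lookup.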
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15168
We do not need to support partitioned views, I think. Your PR is unable to
handle them without a code change in `HiveClientImpl`.
For non-partitioned persistent views, you should also issue
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15168
Do you mean the partitioned views you mentioned at #15233?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15168
What about persistent views? I think we do not support them either, right?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14897
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14897
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65896/
Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14897
**[Test build #65896 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65896/consoleFull)**
for PR 14897 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15224