GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14668
[SPARK-16656][SQL][BRANCH-1.6] Try to make CreateTableAsSelectSuite more
stable
## What changes were proposed in this pull request?
This PR backports #14289 to branch 1.6
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14506
Thanks. Merging to master.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14580
Can you explain `isnotnull(coalesce(b#227, c#238)) does not filter out
NULL!!!`?
---
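The quoted predicate can be illustrated outside Spark. Below is a minimal sketch in plain Python: the `coalesce` helper is a stand-in for the SQL `COALESCE` function, and the sample rows are hypothetical, not taken from the PR. `isnotnull(coalesce(b, c))` should drop only rows where both inputs are null:

```python
# Stand-in for SQL COALESCE: return the first non-null argument, else null.
def coalesce(*args):
    for a in args:
        if a is not None:
            return a
    return None

# Hypothetical rows for columns (b, c).
rows = [(1, None), (None, 2), (None, None)]

# isnotnull(coalesce(b, c)): keep rows where at least one column is non-null.
kept = [r for r in rows if coalesce(*r) is not None]
print(kept)  # only (None, None) is filtered out
```

If the physical plan kept rows where both `b` and `c` are null, that would be the surprising behavior the comment is asking about.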
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74852722
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -689,4 +689,38 @@ class HiveDDLSuite
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14506
LGTM pending jenkins.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14506
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14651
Merging to master and branch 2.0.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14651
LGTM. Thanks!
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14634
Sorry, why is this change necessary?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74665317
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -207,15 +310,52 @@ private[spark] class HiveExternalCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74665106
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -81,6 +86,19 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74665091
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -81,6 +86,19 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74665033
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -363,3 +503,82 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74664610
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -207,15 +310,52 @@ private[spark] class HiveExternalCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74664134
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +162,101 @@ private[spark] class HiveExternalCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74664001
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -81,6 +86,19 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74663776
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -396,40 +393,6 @@ class DDLSuite extends QueryTest with
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14586
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14586
merging to branch 1.6.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14586
I am merging this.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14546
@dongjoon-hyun Seems this issue has been fixed as a by-product of
https://github.com/apache/spark/pull/14595. How about we close this? Also, feel
free to look at @clockfly's follow-up pr.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14586
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14586
Done
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14603
With this change, I think we can use an encoder to serialize it.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14603
LGTM. Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14456
Thanks! Seems https://github.com/apache/spark/pull/12464 introduced
avgMetrics to CrossValidator model.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14456
Sorry. I think this pr breaks 1.6 build.
```
**
File "/home/jenkins/workspace/NewSparkPullRequestBuilder/p
```
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13775
for the benchmark, how about we just test the scan operation?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14586
test this please
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14586
test this please
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14586
[SPARK-16453] [BUILD] [BRANCH-1.6] release-build.sh is missing
hive-thriftserver for scala 2.10
## What changes were proposed in this pull request?
hive-thriftserver works with Scala 2.11
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14155
Thank you for working on this! It's great to see we are moving those hacks
into `HiveExternalCatalog`. It will be very helpful if we can have two diagrams
to show how we use CatalogTable befor
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74131369
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
---
@@ -18,21 +18,32 @@
package
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74131072
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -70,64 +69,16 @@ private[hive] class HiveMetastoreCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74130831
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -363,3 +503,82 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74130452
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +162,101 @@ private[spark] class HiveExternalCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74130034
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -81,6 +86,19 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74129714
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -81,6 +86,19 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74129515
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -689,4 +689,38 @@ class HiveDDLSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74128927
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -93,7 +92,7 @@ class DDLSuite extends QueryTest with
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r74127228
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -229,10 +230,8 @@ case class AlterTableSetPropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74123952
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,111 @@ case class AlterTableDropPartitionCommand
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14500
@liancheng Can you do a post-hoc review?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14492
That's a good point. Let me close this.
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14492
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14376
Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14531
We do not support index tables at all (you can not create such a table).
Let's not add the support right now.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14207
@gatorsmile
Where is change for the following description?
```
This PR is to store the inferred schema in the external catalog when
creating the table. When users intend to refresh
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r73941472
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -95,17 +95,39 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r73940895
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -521,31 +521,29 @@ object DDLUtils
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r73940848
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -521,31 +521,29 @@ object DDLUtils
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r73940468
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -95,17 +95,39 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14207#discussion_r73940350
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -95,17 +95,39 @@ case class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14541
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14376#discussion_r73917166
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -565,7 +566,7 @@ private[execution] abstract class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14518
I think it is fine that `compression` takes precedence. btw, is this flag
used by other data sources?
---
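The precedence question above can be sketched as a simple lookup order. This is an illustration only: the key names (`compression`, `orc.compress`) come from the diff context, but the exact resolution order used by `OrcOptions` is an assumption here, not the confirmed implementation:

```python
# Hypothetical resolver: the generic data source option `compression` wins
# over the ORC-specific `orc.compress` key; otherwise fall back to a default.
def resolve_compression(options, default="snappy"):
    for key in ("compression", "orc.compress"):
        if key in options:
            return options[key].lower()
    return default

print(resolve_compression({"compression": "zlib", "orc.compress": "NONE"}))  # zlib
print(resolve_compression({}))  # snappy
```

Documenting such an order matters precisely because, as the comment notes, other data sources (e.g. Parquet) face the same two-flag situation.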
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14518#discussion_r73914641
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcOptions.scala ---
@@ -31,7 +30,8 @@ private[orc] class OrcOptions(
* Acceptable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14518#discussion_r73914816
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcQuerySuite.scala ---
@@ -161,6 +161,29 @@ class OrcQuerySuite extends QueryTest with
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14518#discussion_r73811203
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcOptions.scala ---
@@ -17,27 +17,40 @@
package org.apache.spark.sql.hive.orc
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/12872
Can you be more specific on the inconsistency? Seems `ALTER VIEW view_name`
is the only inconsistent command?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14497
Thanks for reviewing! I am merging this to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r73730807
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +431,96 @@ case class AlterTableDropPartitionCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r73730136
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -409,6 +409,18 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r73729927
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -827,6 +827,45 @@ class DDLSuite extends QueryTest with
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r73728488
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -409,6 +409,18 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73727982
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -301,9 +298,6 @@ case class AlterTableSerDePropertiesCommand
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14506
oh, these checks are used to make sure that users do not mess up spark
sql's internal settings. Let's have a discussion about these checks first.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14497#discussion_r73723916
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala ---
@@ -253,6 +253,47 @@ class HiveSparkSubmitSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73633166
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -275,238 +269,21 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73623682
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -301,9 +298,6 @@ case class AlterTableSerDePropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73623489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -229,10 +230,8 @@ case class AlterTableSetPropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73623449
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -229,10 +230,8 @@ case class AlterTableSetPropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73622711
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/HiveSerDe.scala ---
@@ -42,8 +41,7 @@ object HiveSerDe {
HiveSerDe
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14500
We do not generate golden files anymore. Let's port those tests. Thanks.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14497
[SPARK-16901] Hive settings in hive-site.xml may be overridden by Hive's
default values
## What changes were proposed in this pull request?
When we create the HiveConf for metastore clien
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14492
Sure. This change is for putting Spark jars in a different dir than the
default dir in `spark/assembly` or `spark/jars`. So, in this case, the main
class is not in `SPARK_JARS_DIR`.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14476#discussion_r73458295
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -82,7 +82,7 @@ abstract class ExternalCatalog
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14492
[SPARK-16887] Add SPARK_DIST_CLASSPATH to LAUNCH_CLASSPATH
## What changes were proposed in this pull request?
To deploy Spark, it can be pretty convenient to put all jars (spark jars,
hadoop
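The idea behind SPARK-16887 can be sketched in shell. All paths below are hypothetical, and this is a simplification of what `bin/spark-class` actually does, not the real script:

```shell
# If Spark jars are kept outside the default SPARK_JARS_DIR, the launcher's
# classpath must also include SPARK_DIST_CLASSPATH, or the launcher main
# class will not be found at startup.
SPARK_JARS_DIR="/opt/spark/jars"
SPARK_DIST_CLASSPATH="/opt/shared-jars/*"

LAUNCH_CLASSPATH="${SPARK_JARS_DIR}/*"
if [ -n "$SPARK_DIST_CLASSPATH" ]; then
  LAUNCH_CLASSPATH="${LAUNCH_CLASSPATH}:${SPARK_DIST_CLASSPATH}"
fi
echo "$LAUNCH_CLASSPATH"
```

This mirrors the deployment described in the PR body: consolidating Spark and Hadoop jars in one shared directory instead of `spark/jars`.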
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
OK. I am merging this PR to master and branch 2.0.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14482
SaveMode is a public API. We cannot move it to catalyst.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
@cloud-fan Thanks for the fix. The new logic looks good. I will merge it
once jenkins pass.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73365299
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -344,6 +384,15 @@ class TypeCoercionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73363487
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -344,6 +384,15 @@ class TypeCoercionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73361360
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -344,6 +384,15 @@ class TypeCoercionSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14434
Thanks. Merging to master.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73064350
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -157,6 +145,26 @@ object TypeCoercion
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
It will be good to summarize the behaviors of other systems in the
description. Let's also explain the behavioral change of this pr in the
description. So, others can understand its implic
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
Let's be careful here. I am not sure we can just use
`DecimalPrecision.widerDecimalType`, which produces `Decimal(38, 38)` when we
have one decimal with the type of `Decimal(38, 0)` and an
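The concern about `widerDecimalType` can be sketched numerically. This is a simplified model: the 38-digit cap matches common SQL engines, but the cap-handling policy shown is an assumption for illustration, not Spark's actual `DecimalPrecision` rule:

```python
MAX_PRECISION = 38  # common SQL cap on total decimal digits

def wider_decimal(p1, s1, p2, s2):
    # Keep the widest integer part and widest fractional part of both types.
    scale = max(s1, s2)
    int_digits = max(p1 - s1, p2 - s2)
    precision = int_digits + scale
    # Naively capping precision while keeping the scale leaves no room
    # for integer digits -- the problem raised in the comment above.
    return min(precision, MAX_PRECISION), scale

# Widening Decimal(38, 0) with Decimal(38, 38) would need 76 digits;
# after capping, Decimal(38, 38) cannot represent any integer part at all.
print(wider_decimal(38, 0, 38, 38))  # (38, 38)
```

This is why blindly reusing the widening rule for type coercion can silently lose the integer digits of a `Decimal(38, 0)` value.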
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14401
Seems it is mainly removing the field of `warehousePath` from
`TestHiveSessionState` and `TestHiveSharedState`. Probably it will help us
remove `TestHiveSessionState` and `TestHiveSharedState
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14434#discussion_r73023513
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -662,10 +662,6 @@ object NullPropagation extends Rule
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14368
@liancheng after second thought, I think it makes sense to also merge it to
branch 2.0 to avoid potential conflicts on doc fixes.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14363
Thanks. But what specific cases are not supported? If there is any such
case, we should make a change to support it, right?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14413
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14363
LGTM. Thanks. Merging to master.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14363
Do we know which hive type strings cannot be parsed by spark?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14363#discussion_r72910844
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -78,28 +78,6 @@ object CatalogStorageFormat
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14395
LGTM. Merging to master and branch 2.0.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14395
seems jenkins is down?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14395
test this please
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72834549
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil