GitHub user xwu0226 opened a pull request:
https://github.com/apache/spark/pull/13687
[SPARK-15970][SQL]: avoid warning message for hive metastore in In-memory
ca…
## What changes were proposed in this pull request?
##### Issue:
Switch the catalog implementation to in-memory mode by adding
`spark.sql.catalogImplementation in-memory` to `spark-defaults.conf`.
Then start spark-shell, which will now run with the in-memory catalog.
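For reference, the switch described above is a single entry in `spark-defaults.conf`, which uses whitespace-separated key/value pairs:

```properties
# spark-defaults.conf — use the in-memory catalog instead of the Hive metastore
spark.sql.catalogImplementation  in-memory
```

The same setting can also be applied for a single session with `spark-shell --conf spark.sql.catalogImplementation=in-memory`.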
Issue a DDL command such as `CREATE TABLE T1 (c1 INT, c2 INT) USING PARQUET
PARTITIONED BY (c2)`. The table metadata is created in the in-memory catalog.
However, a WARNING message is dumped to the console: `Persisting
partitioned data source relation default.t1 into Hive metastore in Spark SQL
specific format, which is NOT compatible with Hive.` This warning does not
apply in in-memory catalog mode, since no Hive metastore is involved.
This PR skips the check of whether the table can be created as a
Hive-compatible table when the in-memory catalog is in use.
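A minimal sketch of the guard this PR describes, in plain Scala (the names here are illustrative stand-ins, not the actual Spark internals): the Hive-compatibility check, and the warning it produces, should only run when the catalog implementation is `hive`.

```scala
object CatalogWarningSketch {
  // Hypothetical stand-in for the compatibility check described above.
  // Returns the warning text to log, or None when no warning applies.
  def maybeCompatibilityWarning(catalogImpl: String, table: String): Option[String] = {
    if (catalogImpl != "hive") {
      // In-memory catalog: skip the Hive-compatibility check entirely,
      // so no misleading warning is produced.
      None
    } else {
      // Hive catalog: the existing check runs and may warn as before.
      Some(s"Persisting partitioned data source relation $table into Hive metastore " +
        "in Spark SQL specific format, which is NOT compatible with Hive.")
    }
  }

  def main(args: Array[String]): Unit = {
    println(maybeCompatibilityWarning("in-memory", "default.t1")) // None
    println(maybeCompatibilityWarning("hive", "default.t1").isDefined) // true
  }
}
```

The point of the change is only where the guard sits: the warning path is untouched for the Hive catalog, and simply never reached for the in-memory one.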
## How was this patch tested?
A unit test case is added for in-memory catalog mode, and regression tests
were run locally.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xwu0226/spark SPARK-15970
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/13687.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #13687
----
commit f6b17174fb3896af002d2d2f81bea117f49e7b57
Author: Xin Wu <[email protected]>
Date: 2016-06-15T18:49:45Z
SPARK-15970: avoid warning message for hive metastore in In-memory catalog
mode
----