GitHub user yaooqinn opened a pull request:

    https://github.com/apache/spark/pull/18648

    [SPARK-21428] Set IsolatedClientLoader off while using builtin Hive jars for reusing CliSessionState

    ## What changes were proposed in this pull request?
    
    Set `isolated` to `false` when using the builtin Hive jars, so that the existing `CliSessionState` can be reused instead of being re-created by an isolated classloader.
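
    The decision this change targets can be sketched as follows. This is an illustrative model only, assuming hypothetical names (`BuiltinJars`, `cliSessionStateActive`, `isolationOn`) rather than Spark's actual internal API:

    ```scala
    // Illustrative sketch of the classloader-isolation decision.
    // Names here are hypothetical, not Spark's real internals.
    object IsolationDecision {
      sealed trait MetastoreJars
      case object BuiltinJars extends MetastoreJars          // spark.sql.hive.metastore.jars=builtin
      case class MavenJars(version: String) extends MetastoreJars

      // When the builtin Hive jars are in use and a CliSessionState already
      // exists (spark-sql CLI), isolation can be switched off so the session
      // state -- including its scratch directories -- is reused rather than
      // re-created by an isolated classloader.
      def isolationOn(jars: MetastoreJars, cliSessionStateActive: Boolean): Boolean =
        jars match {
          case BuiltinJars if cliSessionStateActive => false
          case _                                    => true
        }
    }
    ```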
    
    ## How was this patch tested?
    
    Manually verified: the `hive.exec.scratchdir` directory was only created once, because the `CliSessionState` is reused:
    ```
    ➜  spark git:(SPARK-21428) ✗ bin/spark-sql --conf 
spark.sql.hive.metastore.jars=builtin
    
    log4j:WARN No appenders could be found for logger 
(org.apache.hadoop.util.Shell).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
more info.
    Using Spark's default log4j profile: 
org/apache/spark/log4j-defaults.properties
    17/07/16 23:59:27 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
    17/07/16 23:59:27 INFO HiveMetaStore: 0: Opening raw store with 
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    17/07/16 23:59:27 INFO ObjectStore: ObjectStore, initialize called
    17/07/16 23:59:28 INFO Persistence: Property 
hive.metastore.integral.jdo.pushdown unknown - will be ignored
    17/07/16 23:59:28 INFO Persistence: Property datanucleus.cache.level2 
unknown - will be ignored
    17/07/16 23:59:29 INFO ObjectStore: Setting MetaStore object pin classes 
with 
hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    17/07/16 23:59:30 INFO Datastore: The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
    17/07/16 23:59:30 INFO Datastore: The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
    17/07/16 23:59:31 INFO Datastore: The class 
"org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as 
"embedded-only" so does not have its own datastore table.
    17/07/16 23:59:31 INFO Datastore: The class 
"org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so 
does not have its own datastore table.
    17/07/16 23:59:31 INFO MetaStoreDirectSql: Using direct SQL, underlying DB 
is DERBY
    17/07/16 23:59:31 INFO ObjectStore: Initialized ObjectStore
    17/07/16 23:59:31 WARN ObjectStore: Version information not found in 
metastore. hive.metastore.schema.verification is not enabled so recording the 
schema version 1.2.0
    17/07/16 23:59:31 WARN ObjectStore: Failed to get database default, 
returning NoSuchObjectException
    17/07/16 23:59:32 INFO HiveMetaStore: Added admin role in metastore
    17/07/16 23:59:32 INFO HiveMetaStore: Added public role in metastore
    17/07/16 23:59:32 INFO HiveMetaStore: No user is added in admin role, since 
config is empty
    17/07/16 23:59:32 INFO HiveMetaStore: 0: get_all_databases
    17/07/16 23:59:32 INFO audit: ugi=Kent      ip=unknown-ip-addr      
cmd=get_all_databases
    17/07/16 23:59:32 INFO HiveMetaStore: 0: get_functions: db=default pat=*
    17/07/16 23:59:32 INFO audit: ugi=Kent      ip=unknown-ip-addr      
cmd=get_functions: db=default pat=*
    17/07/16 23:59:32 INFO Datastore: The class 
"org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as 
"embedded-only" so does not have its own datastore table.
    17/07/16 23:59:32 INFO SessionState: Created local directory: 
/var/folders/k2/04p4k4ws73l6711h_mz2_tq00000gn/T/beea7261-221a-4711-89e8-8b12a9d37370_resources
    17/07/16 23:59:32 INFO SessionState: Created HDFS directory: 
/tmp/hive/Kent/beea7261-221a-4711-89e8-8b12a9d37370
    17/07/16 23:59:32 INFO SessionState: Created local directory: 
/var/folders/k2/04p4k4ws73l6711h_mz2_tq00000gn/T/Kent/beea7261-221a-4711-89e8-8b12a9d37370
    17/07/16 23:59:32 INFO SessionState: Created HDFS directory: 
/tmp/hive/Kent/beea7261-221a-4711-89e8-8b12a9d37370/_tmp_space.db
    17/07/16 23:59:32 INFO SparkContext: Running Spark version 2.3.0-SNAPSHOT
    17/07/16 23:59:32 INFO SparkContext: Submitted application: 
SparkSQL::10.0.0.8
    17/07/16 23:59:32 INFO SecurityManager: Changing view acls to: Kent
    17/07/16 23:59:32 INFO SecurityManager: Changing modify acls to: Kent
    17/07/16 23:59:32 INFO SecurityManager: Changing view acls groups to:
    17/07/16 23:59:32 INFO SecurityManager: Changing modify acls groups to:
    17/07/16 23:59:32 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(Kent); groups 
with view permissions: Set(); users  with modify permissions: Set(Kent); groups 
with modify permissions: Set()
    17/07/16 23:59:33 INFO Utils: Successfully started service 'sparkDriver' on 
port 51889.
    17/07/16 23:59:33 INFO SparkEnv: Registering MapOutputTracker
    17/07/16 23:59:33 INFO SparkEnv: Registering BlockManagerMaster
    17/07/16 23:59:33 INFO BlockManagerMasterEndpoint: Using 
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
    17/07/16 23:59:33 INFO BlockManagerMasterEndpoint: 
BlockManagerMasterEndpoint up
    17/07/16 23:59:33 INFO DiskBlockManager: Created local directory at 
/private/var/folders/k2/04p4k4ws73l6711h_mz2_tq00000gn/T/blockmgr-9cfae28a-01e9-4c73-a1f1-f76fa52fc7a5
    17/07/16 23:59:33 INFO MemoryStore: MemoryStore started with capacity 366.3 
MB
    17/07/16 23:59:33 INFO SparkEnv: Registering OutputCommitCoordinator
    17/07/16 23:59:33 INFO Utils: Successfully started service 'SparkUI' on 
port 4040.
    17/07/16 23:59:33 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at 
http://10.0.0.8:4040
    17/07/16 23:59:33 INFO Executor: Starting executor ID driver on host 
localhost
    17/07/16 23:59:33 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 51890.
    17/07/16 23:59:33 INFO NettyBlockTransferService: Server created on 
10.0.0.8:51890
    17/07/16 23:59:33 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
    17/07/16 23:59:33 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, 10.0.0.8, 51890, None)
    17/07/16 23:59:33 INFO BlockManagerMasterEndpoint: Registering block 
manager 10.0.0.8:51890 with 366.3 MB RAM, BlockManagerId(driver, 10.0.0.8, 
51890, None)
    17/07/16 23:59:33 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, 10.0.0.8, 51890, None)
    17/07/16 23:59:33 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, 10.0.0.8, 51890, None)
    17/07/16 23:59:34 INFO SharedState: Setting hive.metastore.warehouse.dir 
('null') to the value of spark.sql.warehouse.dir 
('file:/Users/Kent/Documents/spark/spark-warehouse').
    17/07/16 23:59:34 INFO SharedState: Warehouse path is 
'file:/Users/Kent/Documents/spark/spark-warehouse'.
    17/07/16 23:59:34 INFO HiveUtils: Initializing HiveMetastoreConnection 
version 1.2.1 using Spark classes.
    17/07/16 23:59:34 INFO HiveClientImpl: Warehouse location for Hive client 
(version 1.2.2) is /user/hive/warehouse
    17/07/16 23:59:34 INFO HiveMetaStore: 0: get_database: default
    17/07/16 23:59:34 INFO audit: ugi=Kent      ip=unknown-ip-addr      
cmd=get_database: default
    17/07/16 23:59:34 INFO HiveClientImpl: Warehouse location for Hive client 
(version 1.2.2) is /user/hive/warehouse
    17/07/16 23:59:34 INFO HiveMetaStore: 0: get_database: global_temp
    17/07/16 23:59:34 INFO audit: ugi=Kent      ip=unknown-ip-addr      
cmd=get_database: global_temp
    17/07/16 23:59:34 WARN ObjectStore: Failed to get database global_temp, 
returning NoSuchObjectException
    17/07/16 23:59:34 INFO HiveClientImpl: Warehouse location for Hive client 
(version 1.2.2) is /user/hive/warehouse
    17/07/16 23:59:34 INFO StateStoreCoordinatorRef: Registered 
StateStoreCoordinator endpoint
    spark-sql>
    
    ```

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/yaooqinn/spark SPARK-21428

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/18648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #18648
    
----
commit 394b4716daefd76cf99b6eb0f90c37b23cfa12d1
Author: Kent Yao <[email protected]>
Date:   2017-07-16T15:52:37Z

    set isolateOn to false while using builtin hive jars

----

