[ 
https://issues.apache.org/jira/browse/SPARK-20382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15974022#comment-15974022
 ] 

QQShu1 edited comment on SPARK-20382 at 4/19/17 3:58 AM:
---------------------------------------------------------

I found that in Hive.java, needToCopy calls
HadoopShims.HdfsEncryptionShim srcHdfsEncryptionShim = SessionState.get().getHdfsEncryptionShim(srcFs);
SessionState holds a map, hdfsEncryptionShims, and each HdfsEncryptionShim holds an
HdfsAdmin. When we exit beeline, HiveSessionImplwithUGI calls
FileSystem.closeAllForUGI(sessionUgi), which closes the FileSystem inside that
HdfsAdmin.

Because beeline reuses the same org.apache.hadoop.hive.ql.session.SessionState, the
next beeline session gets back the same cached HdfsAdmin object. Then
srcHdfsEncryptionShim.isPathEncrypted(srcf) calls DFSClient.checkOpen, which
throws java.io.IOException: Filesystem closed.
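The failure mode above can be sketched in a few lines of standalone Java (hypothetical stand-in classes, not the real Hive/Hadoop APIs): a session-level cache hands back a shim whose underlying filesystem handle was already closed by the previous session's cleanup.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class StaleShimSketch {
    // Stand-in for the FileSystem handle held inside HdfsAdmin.
    static class FakeFileSystem {
        private boolean open = true;
        void close() { open = false; }
        // Mirrors DFSClient.checkOpen: fails once the handle is closed.
        void checkOpen() throws IOException {
            if (!open) throw new IOException("Filesystem closed");
        }
    }

    // Stand-in for HdfsEncryptionShim: keeps a reference to the filesystem.
    static class EncryptionShim {
        final FakeFileSystem fs;
        EncryptionShim(FakeFileSystem fs) { this.fs = fs; }
        boolean isPathEncrypted(String path) throws IOException {
            fs.checkOpen();   // throws "Filesystem closed" after cleanup
            return false;
        }
    }

    // Stand-in for SessionState's hdfsEncryptionShims map: the shim is
    // cached per URI and never invalidated when its filesystem is closed.
    static final Map<String, EncryptionShim> shims = new HashMap<>();

    static EncryptionShim getShim(String uri, FakeFileSystem fs) {
        return shims.computeIfAbsent(uri, k -> new EncryptionShim(fs));
    }

    public static void main(String[] args) {
        FakeFileSystem fs = new FakeFileSystem();

        // Session 1: shim is created and cached; exiting beeline closes the
        // filesystem (like FileSystem.closeAllForUGI(sessionUgi)).
        getShim("hdfs://spark", fs);
        fs.close();

        // Session 2: the stale cached shim is returned, and using it fails.
        try {
            getShim("hdfs://spark", fs).isPathEncrypted("/user/kv1.txt");
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints "Filesystem closed"
        }
    }
}
```

The sketch suggests why only the second beeline session fails: the cache key still resolves, so no new shim (with a fresh filesystem) is ever created.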



> we meet fileSystem.closed when we run load data on beeline
> ----------------------------------------------------------
>
>                 Key: SPARK-20382
>                 URL: https://issues.apache.org/jira/browse/SPARK-20382
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: QQShu1
>
> Steps to reproduce on beeline:
> 1. hadoop fs -put kv1.txt hdfs://spark/user/
> 2. connect with beeline
> 3. LOAD DATA INPATH 'hdfs://spark/user/kv1.txt' INTO TABLE src_txt16;
> 4. exit beeline
> 5. connect with beeline again
> 6. hadoop fs -put kv1.txt hdfs://spark/user/
> 7. run the LOAD DATA command again; it fails with the fileSystem.closed error



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
