org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Filesystem closed
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:454)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:636)
Is there any Hadoop configuration I'm missing?
Thank you
On 29/09/2011 18:02, Joey Echeverria wrote:
Do you close your FileSystem instances at all? IIRC, the FileSystem
instance you use is a singleton and if you close it once, it's closed
for everybody. My guess is you close it in your cleanup method and you
have JVM reuse turned on.
I've hit this before.
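The cache behavior Joey describes can be sketched in plain Java. This is NOT Hadoop code; `FakeFs` and the cache below are a simplified stand-in for `FileSystem.get(conf)`, which returns one shared, cached instance per filesystem URI and user, so `close()` on any reference closes it for every other holder in the same JVM:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (not Hadoop source) of the FileSystem cache semantics:
// get() hands out one shared instance per URI, so closing your reference
// closes the same object everyone else is holding.
public class FsCacheSketch {
    static class FakeFs {
        private boolean open = true;
        void write() {
            if (!open) throw new IllegalStateException("Filesystem closed");
        }
        void close() { open = false; }
    }

    // One process-wide cache, keyed (simplified) by the filesystem URI.
    private static final Map<String, FakeFs> CACHE = new HashMap<>();

    static FakeFs get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new FakeFs());
    }

    public static void main(String[] args) {
        FakeFs frameworkHandle = get("hdfs://nn:8020"); // e.g. the task framework's reference
        FakeFs userHandle = get("hdfs://nn:8020");      // e.g. your mapper's reference
        System.out.println(frameworkHandle == userHandle); // same cached instance

        userHandle.close(); // a cleanup() that closes "its" fs closes the shared one
        try {
            frameworkHandle.write(); // every other holder now fails
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With JVM reuse on, the next task in the same JVM gets the already-closed cached instance back, which matches the stack traces above.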
        at org.apache.hadoop.mapred.Child.main(Child.java:211)
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:297)
        at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:426)
        at java.io.FilterInputStream.close(FilterInputStream.java:155)
If both tasks use the same configuration, the cache keys are the same, so they get the same fs
object. When the first one closes it, the other will definitely get this exception.
Regards,
Uma
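One common workaround (assuming a Hadoop version that supports per-scheme cache disabling; check your release) is to turn off the shared FileSystem cache for hdfs:// URIs in core-site.xml, so each FileSystem.get(conf) returns a private instance that is safe to close:

```xml
<!-- core-site.xml: disable the shared FileSystem cache for the hdfs scheme.
     Each FileSystem.get(conf) then returns a fresh instance, so closing it
     in cleanup() does not affect other tasks in a reused JVM.
     Property availability depends on your Hadoop version. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```

The simpler fix is usually to not close the FileSystem at all and let the framework tear it down at JVM exit.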
- Original Message -
From: Joey Echeverria j...@cloudera.com
Date: Thursday, September 29, 2011 10:34 pm
Subject: Re: FileSystem closed
To: common-user@hadoop.apache.org
Do you know what could be the cause for such an exception?
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:222)
        at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:66)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:2948)