Is it possible to keep a file open for, say, an hour, write to it every
once in a while, and then close it? I consistently get the same error when
I try to close the file handle once I am done with my writes, even though
all of the writes and flushes appear to complete without any problems. I am
using the native C API (libhdfs) to do this. Does it have anything to do
with the client losing some kind of lease? How do I go about fixing this?
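Roughly, the pattern I am describing looks like this. It is only a
simplified sketch, not my actual code; the namenode host, port, file path
and timing below are placeholders:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include "hdfs.h"

int main(void) {
    /* Placeholder namenode host/port. */
    hdfsFS fs = hdfsConnect("namenode.example.com", 9000);
    if (!fs) {
        fprintf(stderr, "hdfsConnect failed\n");
        return 1;
    }

    /* Open the file once and hold the handle for about an hour. */
    hdfsFile out = hdfsOpenFile(fs, "/tmp/long-lived.txt", O_WRONLY, 0, 0, 0);
    if (!out) {
        fprintf(stderr, "hdfsOpenFile failed\n");
        return 1;
    }

    int i;
    for (i = 0; i < 60; i++) {
        const char *msg = "periodic record\n";
        /* These writes and flushes all appear to succeed. */
        hdfsWrite(fs, out, (void *) msg, strlen(msg));
        hdfsFlush(fs, out);
        sleep(60);  /* write roughly once a minute */
    }

    /* This is the call that fails with "Filesystem closed". */
    if (hdfsCloseFile(fs, out) == -1) {
        fprintf(stderr, "hdfsCloseFile failed\n");
    }
    hdfsDisconnect(fs);
    return 0;
}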
This is the exception I get when I try to close the file. I am writing into
HDFS from a remote machine. What am I doing wrong?
Exception in thread "main" java.io.IOException: Filesystem closed
    at org.apache.hadoop.dfs.DFSClient.checkOpen(DFSClient.java:168)
    at org.apache.hadoop.dfs.DFSClient.access$200(DFSClient.java:48)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1245)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:91)
Call to org/apache/hadoop/fs/FSDataOutputStream::close failed!
[Thu Jun 21 18:24:42 2007] "File closed -1"
Thanks
Avinash