>
> Hi Stack,
>
> Next time, when you see something like this:
>
> >> java.io.IOException: java.io.IOException: Cannot open filename
> >> /hbase/filmContributors/1670715971/content/3783592739034234831
>
> ...try getting it with a new client as in:
>
> $ ./bin/hadoop fs -get
> /hbase/filmContributors/1670715971/content/3783592739034234831 .
>
I did this, and I also checked for the file from the web UI: the whole
directory was missing.
No wonder, because it was deleted after the split by the node that was
hosting the parent region before the split.
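(Side note: the same directory check can be done from the shell with the
standard fs CLI, using the path from your message:

$ ./bin/hadoop fs -ls /hbase/filmContributors/1670715971

On a healthy cluster this lists the region directory; here it just errors
out, because the directory is gone.)

The delete shows up in the namenode audit log: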
2010-02-03 15:33:35,902 INFO org.apache.hadoop.hdfs.server.
namenode.FSNamesystem.audit: ugi=mpodsiadlowski,devel, [some privileges]
ip=/10.0.100.50 cmd=delete src=/hbase/filmContributors/1670715971
dst=null perm=null
So the order of events was something like this:
15:32:35 - split of the region hosted by 10.0.100.50
15:32:37 - one of the new regions assigned to 10.0.100.51
15:33:35 - 10.0.100.50 removes the whole dir, including the file that is
causing the problems
15:33:49 - 10.0.100.51 tries to perform a compaction and fails
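In case anyone wants to reconstruct a timeline like this on their own
cluster: a minimal sketch, assuming the audit entries go to the standard
namenode log under /var/log/hadoop (adjust the path and file name to your
installation; the region id is the one from this thread):

$ grep 'cmd=delete' /var/log/hadoop/hadoop-*-namenode-*.log | grep 1670715971

Correlating timestamps like these with the regionserver logs on the two
nodes involved should give the same ordering as above.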
> The below is really bad, usually indicative of a stressed HDFS (or one
> not configured for the load it's taking on):
>
> > IOException: Could not complete write to file
>
>
I will try to tune HDFS a bit and see what happens.
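If it helps anyone else, the knobs I am going to start with are the usual
suspects for an HDFS under HBase load; a sketch of the hdfs-site.xml
changes, assuming Hadoop 0.20-era property names (the values are just
common starting points, not something I have verified on this cluster):

<property>
  <name>dfs.datanode.max.xcievers</name>
  <!-- default of 256 is known to be too low for HBase -->
  <value>4096</value>
</property>
<property>
  <name>dfs.datanode.handler.count</name>
  <!-- more threads for block operations under load -->
  <value>10</value>
</property>

Both changes need a datanode restart to take effect.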
> I tried to follow your pastebin link but it is empty for me. Does it work
> for you?
> St.Ack
>
>
Unfortunately something got broken with my pastebin; I will put up a new one
as soon as I get to work.
Thanks,
Michal