Well, if it's a log that's no longer used, then you could just delete it. That'll get rid of the fsck complaint. (True, logs are not per-table, so to be safe you'd need to flush all tables -- this would push any edits the log could be carrying out into the filesystem as hfiles.)
St.Ack

On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <[email protected]> wrote:
> Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
> to flush a table from the hbase shell. But since the "fsck /" tells me a
> log file is corrupt, but not which table the corruption pertains to,
> does this mean I have to flush all my tables (I have a lot of tables).
>
> -geoff
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of
> Stack
> Sent: Monday, August 08, 2011 4:09 PM
> To: [email protected]
> Subject: Re: corrupt .logs block
>
> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <[email protected]>
> wrote:
>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>
> You see this when you do hdfs fsck? Is the log still needed? You
> could do a flush across the cluster and that should do away with your
> dependency on this log.
>
> St.Ack
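For anyone hitting the same thing, the advice above might look roughly like the following session. This is only a sketch, not a tested recipe: the table name 'mytable' and the log path components are placeholders, and your hbase shell syntax may differ by version. It assumes you no longer need the edits in the corrupt log.

```shell
# Confirm the corruption and locate the bad file/block (run as the hdfs user).
hadoop fsck / -files -blocks

# List tables, then flush each one so any edits the WAL might be
# carrying are persisted out to hfiles. 'mytable' is a placeholder;
# repeat the flush for every table reported by `list`.
echo "list" | hbase shell
echo "flush 'mytable'" | hbase shell

# Once everything is flushed, remove the corrupt log file.
# <regionserver> and <logfile> are placeholders for the path that
# fsck reported under the .logs directory.
hadoop fs -rm /hbase/.logs/<regionserver>/<logfile>
```

After the delete, a second `hadoop fsck /` should come back healthy.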
