But why is it bad? A split/compaction? I wrote my own RetryResultIterator, which reopens the scanner on timeout, but what is the best way to reopen a scanner? Can you point me to where I can find all these exceptions? Or is there already some sort of recoverable iterator?
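For reference, the pattern I have in mind is roughly the following sketch. It is not the real HBase client code: ScannerFactory here is a hypothetical stand-in for HTable.getScanner(scan), and FlakyFactory just simulates a scanner that throws once mid-scan (like the ChecksumException). The idea is simply to remember the last row successfully returned and, on failure, reopen a fresh scanner positioned just past that row.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical stand-in for HTable.getScanner(scan): given the last row
// already consumed (or null for the beginning), open a scanner that
// resumes strictly after that row.
interface ScannerFactory {
    Iterator<String> open(String afterRow) throws IOException;
}

// Wraps a scanner and transparently reopens it when iteration fails.
class RetryingScanner implements Iterator<String> {
    private final ScannerFactory factory;
    private final int maxRetries;
    private Iterator<String> current;
    private String lastRow;   // last row successfully handed to the caller
    private String next;      // prefetched element, null when exhausted

    RetryingScanner(ScannerFactory factory, int maxRetries) throws IOException {
        this.factory = factory;
        this.maxRetries = maxRetries;
        this.current = factory.open(null);
        advance();
    }

    private void advance() {
        for (int attempt = 0; ; attempt++) {
            try {
                next = current.hasNext() ? current.next() : null;
                return;
            } catch (RuntimeException e) {  // real code would catch the relevant IOExceptions
                if (attempt >= maxRetries) throw e;
                try {
                    // Reopen, resuming just past the last good row.
                    current = factory.open(lastRow);
                } catch (IOException io) {
                    throw new RuntimeException(io);
                }
            }
        }
    }

    public boolean hasNext() { return next != null; }

    public String next() {
        if (next == null) throw new NoSuchElementException();
        lastRow = next;
        String result = next;
        advance();
        return result;
    }
}

// Simulated scanner source: serves rows "a","b","c" and throws exactly
// once, on the first attempt to read "b" (mimicking a bad block mid-scan).
class FlakyFactory implements ScannerFactory {
    private boolean failed = false;
    private static final String[] ROWS = {"a", "b", "c"};

    public Iterator<String> open(String afterRow) {
        int start = 0;
        if (afterRow != null) {
            while (start < ROWS.length && ROWS[start].compareTo(afterRow) <= 0) start++;
        }
        final int from = start;
        return new Iterator<String>() {
            int i = from;
            public boolean hasNext() { return i < ROWS.length; }
            public String next() {
                if (!failed && i == 1) {
                    failed = true;
                    throw new RuntimeException("simulated checksum failure");
                }
                return ROWS[i++];
            }
        };
    }
}
```

With a real table the factory would build a new Scan with the start row set past lastRow and call getScanner again; the open question is still which exceptions are safe to retry on.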
2010/9/22 Ryan Rawson <[email protected]>:
> ah ok i think i get it... basically at this point your scanner is bad
> and iterating on it again won't work. the scanner should probably
> self close itself so you get tons of additional exceptions but instead
> we dont.
>
> there is probably a better fix for this, i'll ponder
>
> On Wed, Sep 22, 2010 at 1:57 AM, Ryan Rawson <[email protected]> wrote:
>> very strange... looks like a bad block ended up in your scanner and
>> subsequent nexts were failing due to that short read.
>>
>> did you have to kill the regionserver or did things recover and
>> continue normally?
>>
>> -ryan
>>
>> On Wed, Sep 22, 2010 at 1:37 AM, Andrey Stepachev <[email protected]> wrote:
>>> Hi All.
>>>
>>> I get org.apache.hadoop.fs.ChecksumException for a table on heavy
>>> write in standalone mode.
>>> table tmp.bsn.main created 2010-09-22 10:42:28,860 and then 5 threads
>>> writes data to it.
>>> At some moment exception thrown.
>>>
>>> Andrey.
