Yes, but you're not going to like the answer. :-) Here's the high-level recipe:
1) record the table ids for your tables
2) kill all the accumulo servers
3) move /accumulo to a backup location
4) re-initialize, recreate your tables and users
5) use "importDirectory" to load the files in your backup into your new tables

You will want to script this last part.

-Eric

On Wed, May 22, 2013 at 2:54 PM, Mike Hugo <[email protected]> wrote:
> We filled the disk on a test server (single node) and looks like we
> corrupted some files in the DFS. In particular, the metadata table is
> having some issues.
>
> Accumulo is reporting:
>
> exception trying to assign tablet !0;!0<< /root_tablet
> java.io.IOException: Could not obtain block: blk_7026126848942509929_17401
> file=/accumulo/tables/!0/root_tablet/A0000ct9.rf
>
> And hadoop fsck is showing:
>
> /accumulo/tables/!0/default_tablet/A0000ctb.rf 1303 bytes, 1 block(s):
> /accumulo/tables/!0/default_tablet/A0000ctb.rf: CORRUPT block blk_8698622187813164150
>  MISSING 1 blocks of total size 1303 B
>  0. blk_8698622187813164150_17402 len=1303 MISSING!
>
> /accumulo/tables/!0/root_tablet <dir>
> /accumulo/tables/!0/root_tablet/A0000ct9.rf 705 bytes, 1 block(s):
> /accumulo/tables/!0/root_tablet/A0000ct9.rf: CORRUPT block blk_7026126848942509929
>  MISSING 1 blocks of total size 705 B
>  0. blk_7026126848942509929_17401 len=705 MISSING!
>
> /accumulo/tables/!0/table_info <dir>
> /accumulo/tables/!0/table_info/A0000cta.rf 37857 bytes, 1 block(s):
> /accumulo/tables/!0/table_info/A0000cta.rf: CORRUPT block blk_8020296141595499911
>  MISSING 1 blocks of total size 37857 B
>  0. blk_8020296141595499911_17401 len=37857 MISSING!
>
> Is there a way to recover from this?
>
> Thanks,
>
> Mike
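
For what it's worth, step 5 of the recipe above could be scripted roughly like this. This is only a sketch: the table name ("mytable"), old table id ("1n"), backup path, credentials, and failure directory are all hypothetical examples, and it assumes /accumulo was moved aside to /accumulo-backup in HDFS (step 3) and the new instance and tables already exist (step 4). You'd loop this over each old table id you recorded in step 1.

```shell
#!/bin/sh
# Hedged sketch of step 5 -- every name and path below is an example,
# not taken from this thread.

FAIL_DIR=/tmp/bulk-failures

# importdirectory needs a failure directory in HDFS for any files it
# cannot bulk-load.
hadoop fs -mkdir "$FAIL_DIR"

# Bulk-load the rfiles from one old tablet directory into the recreated
# table.  "1n" is a made-up old table id recorded in step 1; the final
# "true" tells the import to set entry timestamps at load time.
accumulo shell -u root -p secret <<'EOF'
table mytable
importdirectory /accumulo-backup/tables/1n/default_tablet /tmp/bulk-failures true
EOF
```

Note that importdirectory loads every rfile in the given directory into the current table, so in practice you'd iterate over all the tablet directories under the old table id, not just default_tablet.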
