We filled the disk on a test server (single node), and it looks like we corrupted some files in the DFS. In particular, the metadata table is having some issues.
Accumulo is reporting:

    exception trying to assign tablet !0;!0<< /root_tablet
    java.io.IOException: Could not obtain block: blk_7026126848942509929_17401 file=/accumulo/tables/!0/root_tablet/A0000ct9.rf

And hadoop fsck is showing:

    /accumulo/tables/!0/default_tablet/A0000ctb.rf 1303 bytes, 1 block(s):
    /accumulo/tables/!0/default_tablet/A0000ctb.rf: CORRUPT block blk_8698622187813164150
     MISSING 1 blocks of total size 1303 B
    0. blk_8698622187813164150_17402 len=1303 MISSING!
    /accumulo/tables/!0/root_tablet <dir>
    /accumulo/tables/!0/root_tablet/A0000ct9.rf 705 bytes, 1 block(s):
    /accumulo/tables/!0/root_tablet/A0000ct9.rf: CORRUPT block blk_7026126848942509929
     MISSING 1 blocks of total size 705 B
    0. blk_7026126848942509929_17401 len=705 MISSING!
    /accumulo/tables/!0/table_info <dir>
    /accumulo/tables/!0/table_info/A0000cta.rf 37857 bytes, 1 block(s):
    /accumulo/tables/!0/table_info/A0000cta.rf: CORRUPT block blk_8020296141595499911
     MISSING 1 blocks of total size 37857 B
    0. blk_8020296141595499911_17401 len=37857 MISSING!

Is there a way to recover from this?

Thanks,
Mike
