EdColeman commented on issue #2085:
URL: https://github.com/apache/accumulo/issues/2085#issuecomment-834580352


   Do you have a lot of data in the write-ahead logs? Is your ingest streaming, 
or do you mostly do bulk ingest?
   
   If you do not have a lot of data in the WAL, and IF you can determine that 
the table's RFiles are intact and accessible, you could try a few things - but 
these will likely lead to some data loss - especially for anything in the WALs.
   
   Does the directory from your error message exist, and does it have any files 
in it?
   
   
`hdfs://xxxxxxxx120.xxxxxxxx-dev.local:8020/apps/accumulo/data/wal/xxxxxxxxxx.xxxxxxxxx-dev.local+9997/e31761f2-e600-49f9-9a5c-8972aa37005b`
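   A quick way to check is with the HDFS shell - a sketch, substituting the 
path from your error message:

```shell
# Path taken from the error message above - adjust to your environment.
WAL_DIR='hdfs://xxxxxxxx120.xxxxxxxx-dev.local:8020/apps/accumulo/data/wal/xxxxxxxxxx.xxxxxxxxx-dev.local+9997/e31761f2-e600-49f9-9a5c-8972aa37005b'

# Does the directory exist?
hdfs dfs -test -d "$WAL_DIR" && echo "directory exists" || echo "directory missing"

# Are there any files in it, and how much data is at stake?
hdfs dfs -ls "$WAL_DIR"
hdfs dfs -du -h "$WAL_DIR"
```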
   
   I am assuming that because this is the root table, you cannot scan anything 
with the Accumulo shell to examine the metadata table.
   
   There is some information in the [user 
manual](https://accumulo.apache.org/1.10/accumulo_user_manual.html#_troubleshooting)
 on what needs to be done if the root (or metadata) table(s) have a reference 
to a corrupt WAL - basically, shut down Accumulo, move the table's RFiles, 
reinitialize Accumulo, and then bulk import the files that you moved. 
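   Very roughly, those steps look something like the following - this is only a 
sketch with placeholder paths and table ids, so verify each step against the 
manual before running anything, and expect to lose whatever was only in the 
WALs:

```shell
# 1. Shut down Accumulo cleanly.
accumulo-cluster stop

# 2. Move the table's RFiles out of the Accumulo data area to a safe location.
#    <tableId> is a placeholder - find the real id before the re-init.
hdfs dfs -mkdir -p /tmp/accumulo-recovery
hdfs dfs -mv '/apps/accumulo/data/tables/<tableId>/*' /tmp/accumulo-recovery/

# 3. Re-initialize Accumulo. This destroys the existing instance metadata,
#    which is why the RFiles were moved out first.
accumulo init

# 4. In the Accumulo shell, recreate the table and bulk import the saved files:
#      createtable mytable
#      importdirectory /tmp/accumulo-recovery /tmp/accumulo-failures true
```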


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
