Spot on, Eric. You're right: I can't write anything to HDFS. Thanks for pointing me in the right direction!!
Mike

On Fri, Mar 8, 2013 at 9:13 AM, Eric Newton <[email protected]> wrote:

> Have you tried writing to HDFS? If your namenode is up, HDFS will appear
> to be quite healthy until it times out your datanode and decides that it
> is down.
>
> You can see these errors if HDFS is just overwhelmed, but they will be
> intermittent.
>
> -Eric
>
> On Fri, Mar 8, 2013 at 10:04 AM, Mike Hugo <[email protected]> wrote:
>
>> It doesn't appear to be down - I can hit
>> http://server:50070/dfshealth.jsp and it doesn't show any errors. Also,
>> scans work correctly and the system is returning data normally. Other
>> than those two errors in the logs, everything appears to be working (?).
>>
>> Am I correct in assuming that if HDFS were down, we would be seeing more
>> problems than just the replication error and the MinC error?
>>
>> Mike
>>
>> On Fri, Mar 8, 2013 at 8:19 AM, Eric Newton <[email protected]> wrote:
>>
>>> Bring HDFS back up.
>>>
>>> -Eric
>>>
>>> On Fri, Mar 8, 2013 at 9:17 AM, Mike Hugo <[email protected]> wrote:
>>>
>>>> We have a test server that is a single-machine Accumulo instance.
>>>> This morning it began reporting the following error:
>>>>
>>>> java.io.IOException: File
>>>> /accumulo/tables/!0/table_info/F0001ac7.rf_tmp could only be
>>>> replicated to 0 nodes, instead of 1 at
>>>>
>>>> I also noticed one other table is failing on MinC:
>>>>
>>>> MinC failed (java.io.IOException: File
>>>> /accumulo/tables/28/t-0001a92/F0001ac9.rf_tmp could only be
>>>> replicated to 0 nodes, instead of 1
>>>>
>>>> How do I resolve these errors?
>>>>
>>>> Thanks!
>>>>
>>>> Mike
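For anyone hitting the same "could only be replicated to 0 nodes" error: Eric's write test can be done from the shell. This is a rough sketch, assuming Hadoop 1.x-era commands (as in this 2013 thread) and an illustrative test path; adjust paths for your cluster. Note that it needs a running cluster, so it is not something you can run offline.

```shell
# Hedged sketch: verify that HDFS actually accepts writes. The test file
# path below is illustrative, not from the thread. If no datanode is live,
# the put fails with the same "could only be replicated to 0 nodes" error
# that showed up in the Accumulo logs.
echo "hdfs write test" > /tmp/hdfs-write-test.txt
hadoop fs -put /tmp/hdfs-write-test.txt /tmp/hdfs-write-test.txt

# Ask the namenode how many datanodes it considers live. The dfshealth.jsp
# page can look healthy while zero datanodes are actually available.
hadoop dfsadmin -report | grep -i datanode

# Clean up the test file if the write succeeded.
hadoop fs -rm /tmp/hdfs-write-test.txt
```

If the `put` fails while dfshealth.jsp looks fine, that matches Eric's point: the namenode being up is not the same thing as HDFS being writable, and reads of already-cached data can keep succeeding for a while.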
