I uploaded a batch of .txt files into the distributed file system (HDFS). The cluster summary showed there was enough heap memory. However, whenever I ran the Mahout 0.8 seqdirectory command, it failed with an out-of-heap-memory exception (the invocation I used is sketched below).
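For reference, the seqdirectory call was roughly the following; the HDFS paths here are placeholders, not my actual directories:

    bin/mahout seqdirectory \
      -i /user/me/txt-input \
      -o /user/me/txt-seqfiles \
      -c UTF-8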
I shut down the Hadoop cluster and allocated more memory to mapred.child.java.opts (sketched below).
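The change in mapred-site.xml was along these lines; the -Xmx value here is only an example, not the exact figure I used:

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx2048m</value>
    </property>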
When I restarted the Hadoop cluster, the log showed that the NameNode was corrupted. I tried to check the health of the file system (see the fsck command below), but the connection was refused.
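The health check was just the standard fsck; this is the command that came back with connection refused:

    hadoop fsck /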
In the end I had to reformat HDFS. I'm caught in this cycle. Any help is appreciated.
