I uploaded my data into HDFS, and the cluster summary shows sufficient heap memory available. However, whenever I run a Mahout 0.8 command, the job fails with an out-of-heap-memory exception. I shut down the Hadoop cluster, allocated more memory via mapred.child.java.opts, and restarted the cluster, but now the namenode is corrupted. Any help is appreciated.
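For context, the change I made was along these lines in mapred-site.xml (the -Xmx value shown is illustrative, not my exact setting):

```xml
<!-- mapred-site.xml: raise the heap for child JVMs spawned per map/reduce task -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```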