Hi all,

In the following tutorial for running the random forest, https://cwiki.apache.org/confluence/display/MAHOUT/Partial+Implementation, a maximum split size of "1874231" is used. When I don't pass this on the command line and the HDFS block size of the data is 32MB, I get a "StackOverflowError". To work around this I increased the heap size of the child JVM to 2GB, but then it either throws the same overflow error or the process hangs.
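For context, my understanding (an assumption on my part, not stated outright in the tutorial) is that 1874231 is roughly the dataset size in bytes divided by the number of partitions you want, so that Hadoop launches that many mappers. A small sketch of that calculation:

```python
def max_split_size(dataset_bytes: int, num_partitions: int) -> int:
    """Rough value for mapred.max.split.size: ceiling of
    dataset size / desired partition count, so the splits
    cover the whole dataset in num_partitions mappers."""
    # Ceiling division without importing math.
    return -(-dataset_bytes // num_partitions)

# Example: a ~100-byte dataset split ten ways needs 10-byte splits.
print(max_split_size(100, 10))   # 10
print(max_split_size(101, 10))   # 11
```

If that reading is right, leaving the option out means each mapper gets a full 32MB block, which may be what triggers the deep recursion.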
Does anyone have any idea about this?

Regards,
Karan
