Dear Wiki user, You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.
The following page has been changed by AndrewPurtell:
http://wiki.apache.org/hadoop/Hbase/Troubleshooting

------------------------------------------------------------------------------
  * Hadoop and HBase daemons require 1 GB of heap, and therefore of RAM, per daemon. In load-intensive environments, HBase regionservers may require more heap than this. There must be enough available RAM to comfortably hold the working sets of all Java processes running on the instance, including any mapper or reducer tasks that run co-located with the system daemons. Small and Medium instances do not have enough available RAM for typical Hadoop+HBase deployments.
  * Hadoop and HBase daemons are latency sensitive. There should be enough free RAM that no swapping occurs; swapping during garbage collection may suspend JVM threads for a critically long time. There should also be sufficient virtual cores to service the JVM threads whenever they become runnable. Large instances have two virtual cores, so they can run the HDFS and HBase daemons concurrently, but nothing more. X-Large instances have four virtual cores, so in addition to the HDFS and HBase daemons they can run two mappers or reducers concurrently. Configure TaskTracker concurrency limits accordingly (see the configuration sketch below), or separate MapReduce computation from storage functions.

  === Resolution ===
+   * Use X-Large (c1.xlarge) instances.
-   * Use Large instances for HDFS and HBase storage tasks.
-   * Use X-Large instances if you are also running mappers and reducers co-located with system daemons.
    * Consider splitting storage and computational function over disjoint instance sets.

  [[Anchor(9)]]
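The advice above to configure TaskTracker concurrency limits maps onto a couple of MRv1 properties in mapred-site.xml. The following sketch is illustrative only and is not part of the wiki page: the slot counts (one map slot plus one reduce slot, i.e. two task JVMs in total) and the child heap size are assumed values chosen to match the two spare virtual cores of a c1.xlarge that is also running the HDFS and HBase daemons.

{{{
<?xml version="1.0"?>
<!-- mapred-site.xml (sketch): cap concurrent task JVMs so the two spare
     virtual cores of a c1.xlarge are not oversubscribed while the HDFS
     and HBase daemons occupy the other two. Values are assumptions. -->
<configuration>
  <!-- At most one map task JVM at a time on this TaskTracker (assumed value). -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>1</value>
  </property>
  <!-- At most one reduce task JVM at a time on this TaskTracker (assumed value). -->
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <!-- A fixed heap per task JVM keeps total RAM use predictable alongside
       the ~1 GB daemon heaps (assumed value). -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
</configuration>
}}}

If you instead split storage and computation over disjoint instance sets, as the resolution suggests, do not start TaskTrackers on the storage instances at all and apply limits like these only on the compute instances.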
