Look at how much hard disk utilization you have (IOPS / svctm). You may simply be under-scaled for the QPS you want across the combined read + write load. If you are doing random gets, you can expect roughly low-to-mid 100s of IOPS per HDD. Use bonnie++ / IOzone / ioping to verify.
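For example, something along these lines gives a quick picture of per-disk IOPS and latency (the mount point /data and the user hbase are just placeholders for your own data directory and service user):

  # extended per-device stats every 5s: r/s + w/s are your IOPS,
  # await/svctm show latency, %util shows saturation
  iostat -x 5

  # quick random-read latency probe against the data mount
  ioping -c 10 /data

  # sequential/seek benchmark, best run against an otherwise idle disk
  bonnie++ -d /data/bonnie -u hbase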
Also, you could check how efficient your cache is (cache hits save disk IOPS).
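One rough way to eyeball that, assuming your RegionServer exposes the /jmx servlet on its info port (60030 by default on older versions; the hostname below is a placeholder and the exact metric names vary by version, so treat this as a sketch rather than guaranteed names):

  # dump region server metrics and look at the block cache hit/miss counters
  curl -s http://regionserver-host:60030/jmx | grep -i blockCache

The same counters (e.g. blockCacheHitRatio) typically also show up in Ganglia alongside the regionserver metrics you already have.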

On Thu, May 16, 2013 at 11:50 PM, Viral Bajaria <[email protected]> wrote:

> Thanks for all the help in advance!
>
> Answers inline..
>
> > Hi Viral,
> >
> > some questions:
> >
> > Are you adding new data or deleting data over time?
>
> Yes I am continuously adding new data. The puts have not slowed down but
> that could also be an after effect of deferred log flush.
>
> > Do you have bloom filters enabled?
>
> Yes bloom filters have been enabled: ROWCOL
>
> > Which version of Hadoop?
>
> Using 1.0.4
>
> > Anything funny the Datanode logs?
>
> I haven't seen anything funny, not a lot of timeouts either but I will look
> into it more. For some reason my datanode metrics refused to show up in
> ganglia while regionserver metrics work fine.
>
> Thanks,
> Viral