On 06/10/2011 05:31 PM, si...@ugcv.com wrote:
I would add more RAM for sure, but there's a hardware limitation. What if the
motherboard can't support more than, say, 128 GB? It seems I can't keep adding
RAM to resolve this.
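As a rough sanity check on whether any amount of RAM helps, the NameNode's heap need can be estimated from the metadata object counts. A minimal sketch, assuming the commonly quoted heuristic of roughly 150 bytes of heap per namespace object (file, directory, or block) — a rule of thumb, not a measured figure for this cluster:

```python
# Back-of-the-envelope NameNode heap estimate.
# ASSUMPTION: ~150 bytes of heap per file/directory/block object
# is a rough community heuristic, not a measured value.
BYTES_PER_OBJECT = 150

def estimate_nn_heap_bytes(n_files, n_dirs, n_blocks,
                           bytes_per_object=BYTES_PER_OBJECT):
    """Very rough estimate of NameNode heap consumed by metadata."""
    return (n_files + n_dirs + n_blocks) * bytes_per_object

# Illustrative numbers: 100M files, 1M dirs, 120M blocks.
est = estimate_nn_heap_bytes(100_000_000, 1_000_000, 120_000_000)
print(f"~{est / 2**30:.1f} GiB of NameNode heap")
```

The point of the arithmetic: heap scales with object count, so shrinking the number of files and blocks attacks the problem where adding RAM cannot.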
By compressed pointers, do you mean turning on the JVM's compressed references?
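For reference, on a 64-bit HotSpot JVM this is the compressed ordinary object pointers (compressed oops) flag, which can be passed to the NameNode via hadoop-env.sh. A sketch, assuming a HotSpot JVM; note the flag only takes effect for heaps below roughly 32 GB:

```shell
# hadoop-env.sh (sketch; assumes a 64-bit HotSpot JVM)
# -XX:+UseCompressedOops shrinks object references from 8 to 4 bytes,
# but only applies when the heap is smaller than roughly 32 GB.
export HADOOP_NAMENODE_OPTS="-Xmx24g -XX:+UseCompressedOops ${HADOOP_NAMENODE_OPTS}"
```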
On Jun 13, 2011, at 5:52 AM, Steve Loughran wrote:
Unless your cluster is bigger than Facebook's, you have too many small files.
+1
(I'm actually sort of surprised the NN is still standing with only 24 MB. The
GC logs would be interesting to look at.)
I'd also likely increase the block size.
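A larger block size cuts the number of block objects the NameNode must track for the same amount of data. A sketch of the relevant hdfs-site.xml entry (the 256 MB value is illustrative; newer releases spell the property dfs.blocksize):

```xml
<!-- hdfs-site.xml (sketch): raise the default block size to 256 MB -->
<property>
  <name>dfs.block.size</name>
  <value>268435456</value>
</property>
```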
On 10/06/2011 10:00 PM, Edward Capriolo wrote:
On Fri, Jun 10, 2011 at 8:22 AM, Brian Bockelman <bbock...@cse.unl.edu> wrote:
On Jun 10, 2011, at 6:32 AM, si...@ugcv.com wrote:
Dear all,
I'm looking for ways to improve the NameNode heap usage of an
800-node, 10 PB testing Hadoop cluster that