anyone?
On Sun, Oct 20, 2013 at 9:56 AM, Rita rmorgan...@gmail.com wrote:
I have asked this question elsewhere and haven't gotten any answers.
Perhaps I asked it the wrong way or in the wrong forum.
I have a 40+ node cluster and I would like to make sure the data node
scanning is done aggressively. I
The logs for the maps and reduces show nothing useful. There are a ton of
warnings about deprecated and final config values, but the task runs and
seems to finish without error. The only errors I've found in logs are the
ones I posted above, which were in the NodeManager log files.
Here's an
Hi,
I am running HBase in pseudo-distributed mode (Hadoop version 1.1.2,
HBase version 0.94.7).
I am getting a few exceptions in both the Hadoop (NameNode, DataNode) and
HBase (region server) logs.
When I searched for these exceptions on Google, I concluded that the problem is
mainly due to large
Hi Alejandro,
I submit all my applications from a single client, but all of my
application masters are taking almost the same amount of time to finish
the above calls. Do you reuse ApplicationMaster instances or do something
else to save this time? Otherwise I felt the fresh
Hi Prashant!
You can set yarn.resourcemanager.max-completed-applications in the RM's
yarn-site.xml to limit the maximum number of completed apps it keeps track of
(it defaults to 10000). You're right that the heap may also be increased.
HTH
Ravi
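As a config fragment, the property Ravi mentions goes into the ResourceManager's yarn-site.xml like any other YARN setting; the value below is only an illustration, not a recommendation:

```xml
<property>
  <name>yarn.resourcemanager.max-completed-applications</name>
  <!-- Illustrative value: cap how many finished apps the RM retains -->
  <value>1000</value>
</property>
```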
On Monday, October 21, 2013 5:54 PM, Prashant Kommireddi
Thanks Ravi!
On Tue, Oct 22, 2013 at 12:55 PM, Ravi Prakash ravi...@ymail.com wrote:
Hi,
I have the same problem. I compared Hadoop 2.2.0 with Hadoop 1.0.3 and it
turned out that terasort on MR2 is 2 times slower than on MR1. I can
hardly believe it.
The cluster has 20 nodes with 19 data nodes. My Hadoop 2.2.0 cluster
configurations are as follows.
The Terasort output for MR2 is as follows.
2013-10-22 21:40:16,261 INFO org.apache.hadoop.mapreduce.Job (main):
Counters: 46
File System Counters
FILE: Number of bytes read=456102049355
FILE: Number of bytes written=897246250517
FILE: Number
Hi,
I've been running Terasort on Hadoop-2.0.4.
Every time there is a small number of map failures (like 4 or 5) because
of containers running beyond virtual memory limits.
I've set mapreduce.map.memory.mb to a safe value (like 2560 MB) so most
TaskAttempts go fine, while the values of those
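For context, the virtual-memory ceiling behind these "running beyond virtual memory limits" kills is the container's physical size multiplied by yarn.nodemanager.vmem-pmem-ratio. A quick sketch of the arithmetic, assuming the 2560 MB container above and the documented default ratio of 2.1:

```shell
# Virtual-memory limit a 2560 MB map container gets under the default
# yarn.nodemanager.vmem-pmem-ratio of 2.1 (assumed here):
awk 'BEGIN { printf "%.0f MB\n", 2560 * 2.1 }'
# prints "5376 MB"
```

Any task whose virtual footprint exceeds that product is killed by the NodeManager, regardless of its physical memory use.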
-- Forwarded message --
From: Sathish Kumar sa848...@gmail.com
Date: Tue, Oct 22, 2013 at 4:59 PM
Subject: External Table creation in hive fails on impala integration with
hive
To: cdh-u...@cloudera.org
Hi All,
I am trying to integrate Impala with HBase and received a syntax error
In CDH3u5, when a DataNode is decommissioned, the DataNode process is
shut down by the NameNode.
But in CDH4.3.1, when a DataNode is decommissioned, the DataNode process
is not shut down by the NameNode.
When a DataNode is decommissioned, why is it not automatically
shutdown
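For reference, decommissioning is driven from the NameNode side via the excludes file. A sketch of the usual flow, assuming dfs.hosts.exclude in hdfs-site.xml already points at the excludes file shown (the hostname and path are placeholders):

```shell
# Add the node to the excludes file referenced by dfs.hosts.exclude,
# then tell the NameNode to re-read its host lists:
echo "datanode1.example.com" >> /etc/hadoop/conf/dfs.exclude
hadoop dfsadmin -refreshNodes

# The node moves through "Decommission in progress" to "Decommissioned";
# stopping the DataNode process afterwards is a separate, manual step:
hadoop dfsadmin -report
```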
Hi all,
I want to format the NameNode from a script. How can I run the format
command non-interactively?
Thanks
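One way to script this is with the format command's own flags, where the release supports them (they were added under HDFS-3094); otherwise the confirmation prompt can be answered from stdin. A sketch, assuming the Hadoop 1.x `hadoop namenode` entry point (newer releases use `hdfs namenode`):

```shell
# Option 1: answer the "Re-format filesystem?" prompt from stdin
# (the prompt expects a literal, case-sensitive "Y"):
echo Y | hadoop namenode -format

# Option 2: on releases that support the flags, skip the prompt entirely:
hadoop namenode -format -force           # reformat even if dirs exist
hadoop namenode -format -nonInteractive  # fail instead of prompting
```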