Dear Hadoop experts,
I have a Hadoop cluster with Hive and HBase installed alongside other Hadoop
components. I am currently exploring ways to automate a data migration process
from Hive to HBase, which involves new columns of data being added every so
often. I was successful in creating an HBase
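For context, Hive can write into HBase through its HBase storage handler; a minimal sketch of an HBase-backed Hive table, run through the Hive CLI (table names, the `cf` column family, and the source table are illustrative assumptions, and it requires a cluster with Hive-HBase integration configured):

```shell
# Hypothetical sketch: create an HBase-backed table from the Hive CLI and
# populate it from an existing Hive table. All names here are assumptions.
hive -e "
CREATE TABLE hbase_migrated (
  rowkey STRING,
  col1   STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:col1')
TBLPROPERTIES ('hbase.table.name' = 'migrated');

-- Copy rows across (source table name assumed):
INSERT OVERWRITE TABLE hbase_migrated
SELECT id, col1 FROM source_hive_table;
"
```

New columns would mean extending both the Hive schema and the `hbase.columns.mapping` string, which is one reason this is awkward to automate.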
Hi,
I'm using the fair scheduler for YARN. I have not specified any pools, so the
fair-scheduler.xml is basically empty.
However, only one third of the cluster is utilized.
On the scheduler page I see a single queue, root, which shows 33.3% used.
This 33.3% is independent of
Hi Guys
Quick question - using the fair scheduler we can restrict access to map tasks,
reduce tasks, and overall system resources for each queue. With the same
mechanism we don't see any parameter to allocate disk usage per queue.
Can you please let us know if there is a way to do
Hi Vandana
From the configurations, it looks like none of the NodeManagers are registered
with the RM because of a “yarn.resourcemanager.resource-tracker.address”
configuration issue. Maybe you can confirm whether any NMs are registered with the RM.
In the below, there is a space after “resource-” but
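For reference, a sketch of how the property should appear in yarn-site.xml, with no whitespace inside the property name ("rm-host" is a placeholder for your ResourceManager hostname; 8031 is the default resource-tracker port):

```xml
<!-- yarn-site.xml: the <name> must be a single unbroken token. -->
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>rm-host:8031</value>
</property>
```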
Hi
1 - Is there a document on what the default settings in the XML file should be
for, say, a 96 GB / 48-core system with say 4 queues?
You can refer to the doc below for configuring the fair scheduler:
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html
2 - When we
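As a starting point, a minimal fair-scheduler.xml sketch with four equally weighted queues (queue names, weights, and minResources figures are illustrative assumptions, not tuned recommendations for any particular 96 GB / 48-core host):

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml sketch: all names and figures below are assumptions. -->
<allocations>
  <queue name="q1">
    <weight>1.0</weight>
    <minResources>8192 mb,4 vcores</minResources>
  </queue>
  <queue name="q2">
    <weight>1.0</weight>
    <minResources>8192 mb,4 vcores</minResources>
  </queue>
  <queue name="q3">
    <weight>1.0</weight>
    <minResources>8192 mb,4 vcores</minResources>
  </queue>
  <queue name="q4">
    <weight>1.0</weight>
    <minResources>8192 mb,4 vcores</minResources>
  </queue>
  <queueMaxAppsDefault>50</queueMaxAppsDefault>
</allocations>
```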
HDFS and job scheduling queues are entirely different systems.
HDFS disk quotas are set at the directory level; you can in turn limit the
permissions of that directory to a group, which indirectly means
this group has that much disk quota.
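A sketch of that directory-quota-plus-permissions approach (requires a running HDFS cluster and superuser rights; the path, group name, and sizes are illustrative assumptions):

```shell
# Cap the space the directory tree may consume (10 TB here is an assumption):
hdfs dfsadmin -setSpaceQuota 10t /data/teamA

# Restrict the directory to one group, so the quota effectively
# applies to that group:
hdfs dfs -chgrp -R teamA /data/teamA
hdfs dfs -chmod 770 /data/teamA

# Inspect quota and current usage:
hdfs dfs -count -q -h /data/teamA
```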
On Wed, Apr 15, 2015 at 3:55 PM, Vijayadarshan
I had attached the NodeManager log of the master node and the modified
yarn-site.xml file
On Wed, Apr 15, 2015 at 6:21 AM, Rohith Sharma K S
rohithsharm...@huawei.com wrote:
Hi Vandana
From the configurations, it looks like none of the NodeManagers are
registered with the RM because of configuration
Please refer
https://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#resourcemanager
Best regards,
Nair
From: Vijayadarshan REDDY [mailto:vijayadars...@dbs.com]
Sent: Wednesday, April 15, 2015 6:25 AM
To: user@hadoop.apache.org
Subject: Restriction of
Please check the error logs and send them.
On Wed, Apr 15, 2015 at 3:33 PM, Vandana kumari kvandana1...@gmail.com
wrote:
nodemanager
*Warm Regards,*
*Shashwat Shriparv*
*http://bit.ly/14cHpad*
*http://goo.gl/rxz0z8*
*http://goo.gl/RKyqO8*
I had set up a 3-node Hadoop cluster on CentOS 6.5, but the NodeManager is not
running on the master and is running on the slave nodes. Also, when I submit a
job, the job gets stuck. The same job runs well on a single-node setup. I am
unable to figure out the problem. Attaching all the configuration files.
Any help
Hi, we are trying to change the fair scheduler settings.
1 - Is there a document on what the default settings in the XML file should be
for, say, a 96 GB / 48-core system with say 4 queues?
2 - When we change the file, does the YARN service need to be bounced for
the changed values to get
What is your yarn.nodemanager.address value?
Hi,
On the master machine, the NodeManager is not running because of “Caused by:
java.net.BindException: Problem binding to [kirti:8040]”, taken from the logs.
Port 8040 is in use! Configure an available port number.
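A sketch of moving that listener in yarn-site.xml (8040 is the default NodeManager localizer port; the replacement port 8041 is just an example and must itself be free):

```xml
<!-- yarn-site.xml on the affected node: move the localizer off the
     conflicting port. -->
<property>
  <name>yarn.nodemanager.localizer.address</name>
  <value>${yarn.nodemanager.hostname}:8041</value>
</property>
```

Alternatively, find and stop whatever process already holds port 8040 on the master.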
Thanks & Regards,
Rohith Sharma K S
From: Vandana kumari [mailto:kvandana1...@gmail.com]