Nishchay Malhotra, what scheduler are you using? Also, what are the settings
for each queue?
From: Billy Watson
To: nishchay malhotra
Cc: "common-u...@hadoop.apache.org"
Sent: Tuesday, January 30, 2018
Hi Wu. If yarn.nodemanager.resource.memory-mb is greater than the amount of
memory on a specific node, the scheduler will assign more containers to that
node than should actually run there. They will still run, but they will
cause a lot of disk swapping, which will slow down each task.
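As a hedged sketch of the setting discussed above: on a hypothetical worker node with 64 GB of physical RAM, you might cap the NodeManager's advertised memory below that to leave headroom for the OS and daemons (the 56 GB figure is illustrative, not a recommendation):

```xml
<!-- yarn-site.xml: keep the advertised container memory below physical RAM -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- 57344 MB = 56 GB on a hypothetical 64 GB node -->
  <value>57344</value>
</property>
```

Setting this above the node's real memory is what lets the scheduler over-commit the node and trigger the swapping described above.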
Hi Krishna
Please follow the instructions at
https://www.apache.org/foundation/mailinglists.html for unsubscribing from
mailing lists.
Thanks,
-Eric Payne
From: Krishna <ramakrishna.srinivas.mur...@gmail.com>
To: user@hadoop.apache.org
Sent: Wednesday, May 3, 2017 8:05 AM
Hi Brian Wood,
Please follow the instructions at
https://www.apache.org/foundation/mailinglists.html for unsubscribing from
mailing lists.
Thanks,
-Eric Payne
From: Brian Wood <bw...@rmrnc.com>
To:
Cc: "gene...@apache.hadoop.org" <gene...@apache.hadoop.org>;
Hi Lake Chang,
Please follow the instructions at
https://www.apache.org/foundation/mailinglists.html for unsubscribing from
mailing lists.
Thanks,
-Eric Payne
From: Lake Chang <lakech...@gmail.com>
To: gene...@apache.hadoop.org; user@hadoop.apache.org
Sent: Tuesday, May 2, 2017 10
Thanks Sunil.
Todd, also, please note that in order to enable the preemption feature, the
feature's property must be set to true:
yarn.resourcemanager.scheduler.monitor.enable: true
Then, if you want to turn preemption off for any particular queue, you would
set
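A minimal sketch of the enable step described above, in yarn-site.xml. Only the monitor.enable property comes from the message; the policies property is an assumption on my part (it names the CapacityScheduler's proportional preemption policy, which the monitor needs in order to do anything):

```xml
<!-- yarn-site.xml: turn on the scheduler monitor that drives preemption -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>true</value>
</property>
<!-- Assumed, not from the message: the policy the monitor runs -->
<property>
  <name>yarn.resourcemanager.scheduler.monitor.policies</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</value>
</property>
```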
I think it would help if I knew what the criteria are for wanting to move the
container. In other words, was the container started on an undesirable node in
the first place? Or did the node become undesirable after the container
started?
Speculation could be considered a "move" operation for
Nicolae, it depends on how big your AM container is compared to the task
containers. By default, the AM container size is 1.5GB and the map/reduce
containers are 1GB. You can adjust these by setting
yarn.app.mapreduce.am.resource.mb, mapreduce.map.memory.mb, and
mapreduce.reduce.memory.mb. If you
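The three properties named above could be set in mapred-site.xml roughly as follows; the sizes here are illustrative only, not recommendations from the message:

```xml
<!-- mapred-site.xml: illustrative container sizes (values are assumptions) -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1536</value>
</property>
```

These can also be overridden per job (e.g. with -D on the command line) rather than cluster-wide.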
Or, maybe have a look at Apache Falcon:
Apache Falcon - Data management and processing platform
(falcon.incubator.apache.org)
I've seen it too. When I get this, I restart the NM, RM, and HS, and it stops
happening.
I don't have a cause yet.
-Eric
From: Jeffrey Naisbitt [mailto:jnais...@yahoo-inc.com]
Sent: Monday, September 12, 2011 12:23 PM
To: mapreduce-...@hadoop.apache.org
Subject: Failing to contact Am/History
Hi Vighnesh,
Also, Cloudera has a decent screencast that walks you through building in
eclipse:
http://www.cloudera.com/blog/2009/04/configuring-eclipse-for-hadoop-development-a-screencast/
http://wiki.apache.org/hadoop/EclipseEnvironment
-Eric
-Original Message-
From: Uma
Hi A Df,
I haven't set up Hadoop under cygwin, but I use cygwin a lot.
One thing I would suggest is to use the bash shell in cygwin and use the
following format for the $PATH additions:
PATH=$PATH:/cygdrive/c/cygwin/bin:/cygdrive/c/cygwin/usr/bin
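As a small illustration of the bash-format PATH addition above (the /cygdrive paths are the ones from the message; the check afterwards is just a plain POSIX-shell sanity test I've added):

```shell
# Append the cygwin bin directories to PATH using /cygdrive-style paths
export PATH="$PATH:/cygdrive/c/cygwin/bin:/cygdrive/c/cygwin/usr/bin"

# Sanity check: confirm the directory now appears as a PATH component
case ":$PATH:" in
  *":/cygdrive/c/cygwin/bin:"*) echo "cygwin bin on PATH" ;;
  *) echo "missing" ;;
esac
```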
My understanding is that the stable version of
Hi Cheny,
I'm pretty sure you should provide the namenode's IP and port.
Something more like this:
hadoop dfs -put localfilename hdfs://<namenode ip>:8020/filename
-Eric
-Original Message-
From: Cheny [mailto:coconuttree9...@gmail.com]
Sent: Wednesday, July 20, 2011 8:34
I see this target in ./mapreduce/src/contrib/streaming/build.xml and
./mapreduce/src/contrib/gridmix/build.xml. It looks like they are for running
all of the unit tests for those components.
-Eric
-Original Message-
From: 王栓奇 [mailto:wangshua...@163.com]
Sent: Friday, July 01, 2011