If you need this for monitoring purposes, note that the older
TaskTracker will have a different ID (the port number will differ). You can
differentiate based on that.
TaskTrackers are mere clients to the JobTracker, so the JobTracker maintains
no complete state for them, like is
Hi Raj,
Thanks for your reply, comments below.
On 09/11/11 18:45, Raj V wrote:
Can you try the following?
- Change the permission to 775 for /hadoop/mapred/system
Done.
- Change the group to hadoop
Done.
- Make all users who need to submit hadoop jobs a part of the hadoop group.
The
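If it helps, the steps above might look like this. A sketch only: /hadoop/mapred/system is an HDFS path in this thread, so the `hadoop fs` variants apply; the local `chmod` at the end is just a demonstration of the mode change, and the user name is hypothetical.

```shell
# On HDFS (sketch, assuming the path from this thread):
#   hadoop fs -chmod 775 /hadoop/mapred/system
#   hadoop fs -chgrp hadoop /hadoop/mapred/system
#   usermod -aG hadoop some_user   # hypothetical user to add to the hadoop group
# Local demonstration that mode 775 is what chmod 775 yields:
d=$(mktemp -d)
chmod 775 "$d"
stat -c %a "$d"   # prints 775
```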
Or, more generally:
isn't using virtualized I/O counter-effective when dealing with Hadoop M/R?
I would think that for running Hadoop M/R you'd want predictable and consistent
I/O on each node,
not to mention your bottlenecks are usually disk I/O (and maybe CPU), so using
virtualisation makes
Yeah, by kill I mean stopping the TaskTracker.
- Original Message -
From: Alexander C.H. Lorenz wget.n...@googlemail.com
To: common-user@hadoop.apache.org; mohmmadanis moulavi
anis_moul...@yahoo.co.in
Cc:
Sent: Monday, 14 November 2011 1:16 PM
Subject: Re: how to start tasktracker only on
It seems you are looking for
the parameter mapred.task.tracker.report.address. Set it to a fixed address,
e.g. 127.0.0.1:50050, and try it.
Wei
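For reference, a minimal mapred-site.xml fragment; the value below is just the example address from above, so adjust the host and port to your setup.

```xml
<property>
  <name>mapred.task.tracker.report.address</name>
  <value>127.0.0.1:50050</value>
</property>
```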
On Mon, Nov 14, 2011 at 3:39 PM, mohmmadanis moulavi
anis_moul...@yahoo.co.in wrote:
Hello,
Friends I am using Hadoop 0.20.2 version,
My
On 14/11/11 09:38, stephen mulcahy wrote:
Hi Raj,
Thanks for your reply, comments below.
On 09/11/11 18:45, Raj V wrote:
Can you try the following?
- Change the permission to 775 for /hadoop/mapred/system
As per the previous problem, the permissions still get reset on cluster
restart.
Am
@harsh J
I got it frm Wel Wu,
It can be done using mapred.task.tracker.report.address parameter.
Thanks for your reply
From: Harsh J <ha...@cloudera.com>
To: common-user@hadoop.apache.org
Sent: Monday, 14 November 2011 2:00 PM
Subject: Re: how to start
I am guessing that /tmp is reset upon cluster restart. Maybe try to use
a persistent directory.
Shi
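One way to try that is to point the relevant property at a persistent path. A sketch: the location below is hypothetical, and mapred.system.dir is the property whose default lives under hadoop.tmp.dir (typically in /tmp).

```xml
<property>
  <!-- hypothetical persistent location instead of the ${hadoop.tmp.dir} default -->
  <name>mapred.system.dir</name>
  <value>/var/lib/hadoop/mapred/system</value>
</property>
```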
On Mon, Nov 14, 2011 at 6:33 AM, stephen mulcahy
stephen.mulc...@deri.orgwrote:
On 14/11/11 09:38, stephen mulcahy wrote:
Hi Raj,
Thanks for your reply, comments below.
On 09/11/11 18:45,
I am trying to build hadoop-trunk using eclipse, is this
http://wiki.apache.org/hadoop/EclipseEnvironment the latest document?
Best Regards
Amir Sanjar
Linux System Management Architect and Lead
IBM Senior Software Engineer
Phone# 512-286-8393
Fax# 512-838-8858
On 14/11/11 15:31, Shi Jin wrote:
I am guessing that /tmp is reset upon cluster restart. Maybe try to use
a persistent directory.
Thanks for the suggestion, but /tmp will only be reset on a server reboot,
not a cluster restart (I'm talking about running stop-all.sh and
start-all.sh, not a full
OK, continuing our earlier conversation...
I have a job that schedules 100 map tasks (a small number, just for testing),
passing data via a set of 100 sequence files. This is based on the PiEstimator
example that is shipped with the distribution.
The data consist of a blob of serialised state,
Yes, you can follow that.
mvn eclipse:eclipse will generate eclipse related files. After that directly
import in your eclipse.
Note: the repository links need updating; hdfs and mapreduce have been moved
inside the common folder.
Regards,
Uma
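The flow Uma describes might look like this. A sketch only, assuming a trunk checkout already exists in a directory called hadoop-trunk (the directory name is mine):

```shell
cd hadoop-trunk          # assumed checkout location
mvn eclipse:eclipse      # generates .project and .classpath for each module
# then in Eclipse: File > Import > Existing Projects into Workspace
```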
From: Amir Sanjar
Hi,
I am writing a Hadoop Streaming job in Python. I know that I can
increment counters by writing a special format to sys.stderr. Is it
possible to *read* the values of counters from my Python program? I am
using the global counter as the denominator of a probability, and must
have this value
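For the increment side, the reporter line written to stderr looks like the sketch below (the function names are mine). Reading a counter's current value back from inside a running streaming task is not part of this stderr protocol, which is presumably why the question arises.

```python
import sys

def counter_line(group, counter, amount=1):
    # The line format Hadoop Streaming scans for on stderr.
    return "reporter:counter:{0},{1},{2}".format(group, counter, amount)

def increment_counter(group, counter, amount=1):
    # Emitting this on stderr increments the named counter by `amount`.
    sys.stderr.write(counter_line(group, counter, amount) + "\n")

increment_counter("MyApp", "denominator_events", 1)
```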
The ant steps for building the eclipse plugin are replaced by mvn
eclipse:eclipse, for versions 0.23+, correct?
From: Uma Maheswara Rao G [mahesw...@huawei.com]
Sent: Monday, November 14, 2011 10:11 AM
To: common-user@hadoop.apache.org
Subject: RE:
Hi Stephen
This is probably happening during JobTracker start. Can you provide any
relevant logs from the TaskTracker log file?
Raj
From: stephen mulcahy stephen.mulc...@deri.org
To: common-user@hadoop.apache.org
Sent: Monday, November 14, 2011 5:33 AM
You are right.
From: Tim Broberg [tim.brob...@exar.com]
Sent: Tuesday, November 15, 2011 1:02 AM
To: common-user@hadoop.apache.org
Subject: RE: setting up eclipse env for hadoop
The ant steps for building the eclipse plugin are replaced by mvn
It looks like your hadoop distro does not have
https://issues.apache.org/jira/browse/HADOOP-4012.
- milind
On 11/10/11 2:40 PM, Raj V rajv...@yahoo.com wrote:
All
I assumed that the input splits for a streaming job would follow the same
logic as a MapReduce Java job, but I seem to be wrong.
I
Milind,
I realised that, thanks to Joey from Cloudera. I have given up on bzip.
Raj
From: milind.bhandar...@emc.com milind.bhandar...@emc.com
To: common-user@hadoop.apache.org; rajv...@yahoo.com; cdh-u...@cloudera.org
Sent: Monday, November 14, 2011 2:02 PM
Hello friends,
I want to know: suppose the TaskTracker goes down, or I stopped the TaskTracker,
will the JobTracker wait until it comes back up?
Please let me know.
Regards,
Mohmmadanis Moulavi
Hi guys !
Q: How can I assign the data of each job to Mumak nodes, and what else do I
need to do?
In general, how can I use the pluggable block placement for HDFS in Mumak?
Meaning, in my context I am using the 19-jobs-trace JSON file and a modified
topology JSON file consisting of, say, 4 nodes. Since the
Mohmmadanis,
Yes. Sort of.
The JobTracker will keep submitted jobs queued until TaskTrackers are
available to run tasks. Its queues remain functional and simply await the
return of free slots to assign tasks to, so the job resumes making progress
once slots reappear.
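Not Hadoop code, but a toy sketch of that queue-and-wait behaviour; every name here is invented for illustration.

```python
from collections import deque

class MiniJobTracker:
    """Toy model: queued tasks wait until a tracker heartbeats in
    and offers free slots. Not real Hadoop code."""
    def __init__(self):
        self.pending = deque()
        self.assigned = []

    def submit(self, task):
        # Tasks are queued even if no trackers are currently alive.
        self.pending.append(task)

    def heartbeat(self, tracker, free_slots):
        # On each heartbeat, hand out as many queued tasks as there are slots.
        while free_slots and self.pending:
            self.assigned.append((tracker, self.pending.popleft()))
            free_slots -= 1

jt = MiniJobTracker()
for i in range(3):
    jt.submit("task-%d" % i)
# No heartbeats yet: all three tasks stay queued.
jt.heartbeat("tt1", 2)   # a tracker comes (back) up with 2 free slots
# Two tasks are assigned; the third waits for the next heartbeat.
```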
On