I am using Hadoop 2.0.4.
1 - Which component manages queues? Is it the JobTracker?
2 - If so, is it possible to define several queues (set
mapred.job.queue.name=$QUEUE_NAME;)?
--
Best regards,
The ResourceManager manages the queues in Hadoop 2.0.4. As for the
specifics, it depends on the scheduler you use. If you use the capacity
scheduler or the fair scheduler, you can have multiple queues. Take a look at
the capacity scheduler documentation here:
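As an illustration, with the capacity scheduler you declare queues in capacity-scheduler.xml. The queue names (prod, dev) and capacity values below are made-up examples, not taken from this thread:

```xml
<!-- Sketch of a capacity-scheduler.xml fragment. Queue names (prod, dev)
     and the 70/30 split are hypothetical; capacities must sum to 100. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,dev</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>30</value>
</property>
```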
Hi Rahul,
It is at least for the reasons that Vinod listed that porting my
application onto YARN, instead of making it work within the MapReduce
framework, makes my life easier. The main purpose of my using YARN is to
exploit its resource management capabilities.
Thanks,
Kishore
Whatever you are trying to do should work,
Here is the modified WordCount map:
public void map(LongWritable key, Text value, Context context)
    throws IOException, InterruptedException {
  String line = value.toString();
  JSONObject line_as_json = new JSONObject(line);
I don't see a direct question asked, but here's a condition in the
source code you want to take a look at (*):
https://github.com/apache/hadoop-common/blob/branch-1/src/mapred/org/apache/hadoop/mapred/JobInProgress.java#L2316
(*) - Yet to appear in MRv2 - See/help out with MAPREDUCE-2723.
Thanks! It reports 2.3.0. I will update.
John
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, May 29, 2013 12:37 PM
To: user@hadoop.apache.org
Subject: Re: Help: error in hadoop build
What's the output of:
protoc --version
You should be using 2.4.1
Cheers
Hi All,
I am in a team developing with Hadoop and Hive.
We are using the fair scheduler,
but all Hive jobs are going to the same pool, whose name is the same as the
username under which the Hive server is installed.
In short:
my Hive server runs as the user named 'hadoop',
and my Hive client program runs as the user named 'abc'.
But
set mapred.job.queue.name=queue-name;
the above property will set the queue only for that particular Hive session.
This property needs to be set by all users.
On Thu, May 30, 2013 at 5:18 PM, Job Thomas j...@suntecgroup.com wrote:
Hi, I suggest you always use
set mapred.job.queue.name=$QUEUE_NAME;
before your HQL; if not, the default pool will be used.
You can also change a running job's queue and priority by hand at
http://ip:port/scheduler,
the same address as the JT home page.
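To make the advice concrete, here is a sketch of a Hive session; the queue name "etl" and the table "my_table" are made-up examples:

```sql
-- Set the queue once, before any HQL in this session:
set mapred.job.queue.name=etl;
-- Subsequent queries in this session are submitted to that queue:
SELECT COUNT(*) FROM my_table;
```

Note that the setting is per-session, so every user (and every new CLI session) has to issue it again.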
What is the separation of concerns between YARN and Zookeeper? That is,
where does YARN leave off and where does Zookeeper begin? Or is there some
overlap?
On Thu, May 30, 2013 at 2:42 AM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
set mapred.job.queue.name=$QUEUE_NAME;
Where should I use it? In the Hive server, the Hive client, or in Hadoop?
Thank you for your help.
Best Regards,
Job M Thomas
From: zangxiangyu [mailto:zangxian...@qiyi.com]
Sent: Thu 5/30/2013 5:47 PM
To:
Hi brother,
May I know where I should set the property
mapred.job.queue.name=queue-name;
In the Hive client, the Hive server (here the Thrift server is up), or in
Hadoop?
Best Regards,
Job M Thomas
Hi Philippe,
thanks a lot, that's the solution. I've disabled
*mapreduce.tasktracker.outofband.heartbeat* and now everything is fine!
Thanks again,
Roland
On Wed, May 29, 2013 at 4:00 PM, Philippe Signoret
philippe.signo...@gmail.com wrote:
This might be relevant:
Job,
You need to set it in every Hive session/CLI client.
This property is a job-level one; it is used to indicate which pool/queue a
job should be submitted to.
Regards
Bejoy KS
Sent from remote device, Please excuse typos
-Original Message-
From: Job Thomas
Hello,
I am trying to add Hadoop to PATH so Flume can access the necessary JAR
files.
I have Hadoop installed in
*root@li339-83:/usr/local/hadoop/hadoop# ls*
bin        build.xml  CHANGES.txt  conf  docs
hadoop-core-1.0.4.jar  hadoop-test-1.0.4.jar  ivy.xml  LICENSE.txt
README.txt  src
Hello Users,
I have a position open in the South Bay for a Hadoop Engineer at a Major
Networking company Headquartered in San Jose. I wanted to reach out to any of
you who may be interested in the position. It is a competitive rate and an
exciting position. Please contact me ASAP for details!
Thanks for helping me build Hadoop! I'm through compiling and installing the
Maven plugins into Eclipse. I could use some pointers for the next steps I
want to take, which are:
* Deploy the simplest development-only cluster (single node?) and
learn how to debug within it. I read about the
Hi Lenin,
You should also add /bin to the Hadoop path (/usr/local/hadoop/hadoop).
In your case, you should add /usr/local/hadoop/hadoop/bin to run the
hadoop command from any folder.
Your PATH should be something like:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/
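A minimal sketch of the PATH change, using the install directory shown earlier in the thread (adjust for your own layout):

```shell
# Append the Hadoop bin directory so `hadoop` resolves from any folder.
# HADOOP_HOME here matches the path from this thread; it is an example.
HADOOP_HOME=/usr/local/hadoop/hadoop
export PATH="$PATH:$HADOOP_HOME/bin"
echo "$PATH"
```

Put the export in your shell profile (e.g. ~/.bashrc) so it survives new sessions.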
Thanks Shekar murali.
I have modified PATH and flume works now :)
Thanks,
Lenin
On Thu, May 30, 2013 at 10:45 PM, murali adireddy murali.adire...@gmail.com
wrote:
Hi, thanks guys.
I figured out the issue. However, I have another question.
I am using a third-party library, and I thought that once I had created the
jar file I didn't need to specify the dependencies, but apparently that's not
the case (error below).
Very, very naive question... probably stupid. How do I
For starters, you can specify them through the -libjars parameter when you
kick off your M/R job. This way the jars will be copied to all TTs.
Regards,
Shahab
On Thu, May 30, 2013 at 2:43 PM, jamal sasha jamalsha...@gmail.com wrote:
The same has helped me.
Thanks a lot!!
On 30.05.2013 17:00, Roland von Herget wrote:
On Wed, May 29, 2013 at 4:00 PM, Philippe Signoret
Dear list,
I have created a sequence file like this:
seqWriter = SequenceFile.createWriter(fs, getConf(), new Path(hdfsPath),
    IntWritable.class, BytesWritable.class,
    SequenceFile.CompressionType.NONE);
seqWriter.append(new IntWritable(index++), new BytesWritable(buf));
(with buf a byte
Hi,
I did that but I still get the same exception.
I did:
export HADOOP_CLASSPATH=/path/to/external.jar
and then added -libjars /path/to/external.jar to my command, but still the
same error.
On Thu, May 30, 2013 at 11:46 AM, Shahab Yunus shahab.yu...@gmail.com wrote:
OK, got this thing working.
Turns out that -libjars should be specified before the HDFS input and
output paths, rather than after them.
:-/
Thanks everyone.
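For reference, a sketch of the working invocation shape; the jar, class, and path names are made-up examples, and this assumes the driver class goes through ToolRunner so that GenericOptionsParser picks up -libjars:

```shell
# Generic options (-libjars, -files, -D ...) must come before the
# job-specific arguments such as the input and output paths.
hadoop jar myjob.jar com.example.MyJob \
  -libjars /path/to/external.jar \
  /user/me/input /user/me/output
```

If the main class does not use ToolRunner/GenericOptionsParser, -libjars is passed through as an ordinary argument and silently ignored.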
On Thu, May 30, 2013 at 1:35 PM, jamal sasha jamalsha...@gmail.com wrote:
Hi Jens,
Please read this old thread at http://search-hadoop.com/m/WHvZDCfVsD
which covers the issue, the solution and more.
On Fri, May 31, 2013 at 1:39 AM, Jens Scheidtmann
jens.scheidtm...@gmail.com wrote: