cdh4b2 version. Is this the correct behavior
of the API?
Cheers,
Subroto Sanyal
Thanks Harsh….. :-)
Cheers,
Subroto Sanyal
On May 18, 2012, at 4:30 PM, Harsh J wrote:
> With 0.23/2.x onwards, for better behavior, it's your Path that
> will determine the FS in SequenceFile readers/writers. Hence, always
> make sure your Path carries the proper URI if it is su
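To make that concrete, here is a minimal sketch (the namenode host/port, file path and key/value types below are made-up placeholders): when the Path carries a full hdfs:// URI, the SequenceFile reader resolves the FileSystem from the Path itself rather than from fs.defaultFS.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileUriExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Fully qualified Path: the hdfs:// URI (placeholder host/port) decides which
    // FileSystem the reader talks to, independent of fs.defaultFS.
    Path input = new Path("hdfs://namenode.example.com:8020/data/part-00000");
    SequenceFile.Reader reader =
        new SequenceFile.Reader(conf, SequenceFile.Reader.file(input));
    try {
      LongWritable key = new LongWritable();   // must match the key class of the file
      Text value = new Text();                 // must match the value class of the file
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }
}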
provides an API only to check whether a key is
deprecated or not, but doesn't provide a way to get the corresponding new key.
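For reference, a small sketch of the check that is available (the key name is only illustrative); as noted, there is no companion call that returns the replacement key.

import org.apache.hadoop.conf.Configuration;

public class DeprecationCheck {
  public static void main(String[] args) {
    // Configuration.isDeprecated tells you *that* a key is deprecated,
    // but not *which* new key replaces it.
    String oldKey = "mapred.job.tracker";  // illustrative old-style key
    System.out.println(oldKey + " deprecated? " + Configuration.isDeprecated(oldKey));
  }
}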
Cheers,
Subroto Sanyal
Thanks Arun…
Same has been filed as HADOOP-8426
Cheers,
Subroto Sanyal
On May 22, 2012, at 6:26 PM, Arun C Murthy wrote:
> Good point. Please file a jira to add the new key to the deprecation warning.
> Thanks.
>
> On May 22, 2012, at 12:52 AM, Subroto wrote:
>
>> H
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.
Currently the framework is not letting jobs written with MRv1 run
properly.
Any thoughts….. ??
Cheers,
Subroto Sanyal
Thanks Harsh….
Filed MAPREDUCE-4280 for the same….
Cheers,
Subroto Sanyal
On May 23, 2012, at 1:18 PM, Harsh J wrote:
> This is related: https://issues.apache.org/jira/browse/MAPREDUCE-2493
>
> But the real issue is LocalJobRunner does:
>
> OutputCommitter
.
Cheers,
Subroto Sanyal
On Jun 4, 2012, at 2:12 PM, Arpit Wanchoo wrote:
> Hi
>
> I wanted to check what exactly we gain when JVM reusability is enabled in a
> mapred job.
>
> My doubt was regarding the setup() method of mapper. Is it called for a
> mapper even if it is using t
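On the JVM-reuse part, a minimal MRv1 sketch of switching it on (classic mapred API; the rest of the job setup is omitted). As far as I understand, even with reuse each task still gets a fresh Mapper instance, so setup()/configure() runs once per task and only static/JVM-level state survives across tasks.

import org.apache.hadoop.mapred.JobConf;

public class JvmReuseExample {
  public static void main(String[] args) {
    JobConf conf = new JobConf(JvmReuseExample.class);
    // -1 means "reuse the child JVM for an unlimited number of tasks of this job";
    // equivalent to setting mapred.job.reuse.jvm.num.tasks=-1.
    conf.setNumTasksToExecutePerJvm(-1);
    // ... input/output formats, paths, mapper and reducer classes as usual (omitted) ...
  }
}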
APRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*
Please let me know if I am missing anything.
Cheers,
Subroto Sanyal
/hadoop/etc/hadoop
export YARN_CONF_DIR=$HADOOP_CONF_DIR
Is it expected to have these variables in the profile file of the Linux user?
I am not using a Windows client. My client is running on Mac and the cluster is
running on Linux.
Cheers,
Subroto Sanyal
On Jun 5, 2012, at 10:50 AM, Devaraj k
,/usr/local/hadoop/*,/usr/local/hadoop/lib/*,/usr/local/hadoop/*,/usr/local/hadoop/*
Cheers,
Subroto Sanyal
On Jun 5, 2012, at 12:07 PM, Devaraj k wrote:
> Hi Subroto,
>
> It will not use yarn-env.sh for launching the application master. NM uses
> the environment set by the cli
/hdfs/*,/usr/local/hadoop/share/hadoop/hdfs/lib*
I am trying to run the application from outside the cluster. Are there any specific
settings that need to be done in the cluster so that I can go ahead with the default
yarn.application.classpath?
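In case it helps, a hedged sketch of pinning the classpath on the client-side Configuration before submission; the entries below mirror the usual defaults and are only illustrative.

import org.apache.hadoop.conf.Configuration;

public class YarnClasspathExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Set yarn.application.classpath explicitly so the client does not depend on
    // whatever the cluster-side yarn-site.xml happens to contain (values illustrative).
    conf.set("yarn.application.classpath",
        "$HADOOP_CONF_DIR,"
        + "$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,"
        + "$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,"
        + "$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,"
        + "$YARN_HOME/*,$YARN_HOME/lib/*");
    // ... continue with normal job setup and submission ...
  }
}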
Regards,
Subroto Sanyal
On Jun 5, 2012, at 12:25 PM, Subroto wrote
common
hdfs
httpfs
mapreduce
tools
src
Cheers,
Subroto Sanyal
Thanks Jagat….
The tutorial is really nice ….
Cheers,
Subroto Sanyal
On Jun 6, 2012, at 9:47 AM, Jagat wrote:
> Hello Subroto ,
>
> There are multiple ways to install and set the environment variables for 2.x
> series.
> Download the latest tar to your computer for Hadoop 2.0
ge of API
is incorrect.
The build being used is: 0.23.1-cdh4.0.0b2
Cheers,
Subroto Sanyal
Job.monitorAndPrintJob() line: 1280
JobClient$NetworkedJob.monitorAndPrintJob() line: 432
JobClient.monitorAndPrintJob(JobConf, RunningJob) line: 902
Any thoughts!!
Cheers,
Subroto Sanyal
is correct as it compels the user to read the records in
synchronous fashion.
Please let me know if there is any workaround for getting the correct
statistics from the MR job.
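One possible workaround sketch, assuming the goal is just the final statistics: skip monitorAndPrintJob and read the counters once waitForCompletion() returns (the job name and setup below are hypothetical).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class JobStatsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf);          // mapper/reducer/path setup omitted
    job.setJobName("stats-example");
    boolean ok = job.waitForCompletion(true);
    // Read the aggregated counters only after the job has finished.
    Counters counters = job.getCounters();
    long mapInputRecords =
        counters.findCounter(TaskCounter.MAP_INPUT_RECORDS).getValue();
    System.out.println("Succeeded: " + ok + ", map input records: " + mapInputRecords);
  }
}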
Cheers,
Subroto Sanyal
?
Cheers,
Subroto Sanyal
well….
Cheers,
Subroto Sanyal
On Jul 11, 2012, at 3:14 PM, Andreas Reiter wrote:
> Hi Subroto,
>
> I have the same problem, cannot get my mapreduce jobs to run...
> The container log says that org.apache.hadoop.mapreduce.v2.app.MRAppMaster
> cannot be found... :-(
>
Hi Smriti,
I would suggest you have a custom OutputCommitter, an extension of
FileOutputCommitter, which will help you achieve your desired functionality.
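A minimal sketch of what such an extension might look like (new mapreduce API; the extra post-commit marker file is just a hypothetical placeholder):

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

public class CustomOutputCommitter extends FileOutputCommitter {

  private final Path outputPath;

  public CustomOutputCommitter(Path outputPath, TaskAttemptContext context)
      throws IOException {
    super(outputPath, context);
    this.outputPath = outputPath;
  }

  @Override
  public void commitJob(JobContext context) throws IOException {
    // Keep the standard FileOutputCommitter behavior first...
    super.commitJob(context);
    // ...then add whatever custom step is needed; a marker file is only a placeholder.
    FileSystem fs = outputPath.getFileSystem(context.getConfiguration());
    fs.create(new Path(outputPath, "_CUSTOM_DONE"), true).close();
  }
}

To plug it in, you would typically subclass your OutputFormat and override getOutputCommitter(TaskAttemptContext) to return this committer.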
Regards,
Computers make very fast and accurate mistakes...
,
Subroto Sanyal
_
From: 谭军 [mailto:tanjun_2...@163.com]
Sent: Sunday, August 07, 2011 12:15 PM
To: mapreduce-user@hadoop.apache.org
Subject: How can I know how many mappers created?
Hi,
I just want 2 mappers created to do my job because there are only 2 data nodes.
I think that is the
application should be independent of hardware details)
It is possible to run multiple instances of DataNode on the same physical server,
provided the configurations are different for each DataNode process.
Regards,
Subroto Sanyal
_
From: 谭军 [mailto:tanjun_2...@163.com]
Sent: Monday
instead
of CPU?
Hi Subroto,
I'm sorry for my poor English.
Are you thinking about CPU core to Hadoop process mapping?
Maybe this is the issue.
2 computers with 2 CPUs.
Each CPU has 2 cores.
Now I have 2 physical datanodes.
Can I get 4 physical datanodes?
I don't know whether
The context parameter may provide you the reference you need.
Regards,
Subroto Sanyal
_
From: 谭军 [mailto:tanjun_2...@163.com]
Sent: Tuesday, August 09, 2011 11:58 AM
To: mapreduce
Subject: Can reducer get parameters from mapper besides key and value?
Hi,
Can the reducer get paramet
Reducer.
Furthermore, I don't feel it would be a good idea to keep such a dependency.
Please let me know more about your scenario… maybe we/the community can suggest
some solution…
By the way, can the reducer get side files in cache?
Please let me know about “Side Files”…..
Regards,
Subroto Sanyal
connected directly to which node.
Reducer:
The reducer will consume the mapper output and will hold the node (key)
and all the adjacent nodes (value).
The solution provided may not be optimal.
Regards,
Subroto Sanyal
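A rough sketch of the mapper/reducer pair described above, assuming one edge per input line in the form "nodeA nodeB" (the input layout and separators are guesses):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class AdjacencyListJob {

  // Emits (node, neighbor) for every edge line, e.g. "A B" -> (A, B) and (B, A).
  public static class EdgeMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] nodes = value.toString().trim().split("\\s+");
      if (nodes.length == 2) {
        context.write(new Text(nodes[0]), new Text(nodes[1]));
        context.write(new Text(nodes[1]), new Text(nodes[0]));
      }
    }
  }

  // Collects all neighbors of a node into one comma-separated adjacency list.
  public static class AdjacencyReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text node, Iterable<Text> neighbors, Context context)
        throws IOException, InterruptedException {
      StringBuilder sb = new StringBuilder();
      for (Text n : neighbors) {
        if (sb.length() > 0) {
          sb.append(',');
        }
        sb.append(n.toString());
      }
      context.write(node, new Text(sb.toString()));
    }
  }
}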
_
From: 谭军 [mailto:tanjun_2...@163.com]
Sent: Tuesday
Dear Arko,
The class org.apache.hadoop.fs.Path is available in the
hadoop-common-0.21.0.jar.
From your code I can see you are trying to create a file in HDFS and write
into it.
The code snippet needs the hdfs and common jar for compiling.
Regards,
Subroto Sanyal
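For completeness, a small sketch of that kind of write, assuming a made-up target path; compiling it needs the common and hdfs jars mentioned above.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/arko/example.txt");  // hypothetical target path
    FSDataOutputStream out = fs.create(file, true);  // overwrite if it already exists
    try {
      out.writeBytes("hello hdfs\n");
    } finally {
      out.close();
    }
  }
}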
_
F
Hi Shreya,
The functionality you expect is achieved by Hive (internally via MapReduce).
Maybe you can take a look at Hive's join logic or use Hive directly.
You can look into org.apache.hadoop.mapred.lib.CombineFileInputFormat
and related classes for more details.
Regards,
Subroto
ebugging of Child process.
If you have access through PuTTY, then you can use the remote debug option and
connect through Eclipse as well.
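A hedged sketch of wiring that up for MRv1 child tasks (the port and heap size are arbitrary choices): the child JVM opens a JDWP socket that Eclipse can attach to over the same SSH/PuTTY access.

import org.apache.hadoop.mapred.JobConf;

public class ChildDebugExample {
  public static void main(String[] args) {
    JobConf conf = new JobConf(ChildDebugExample.class);
    // suspend=y makes each child task wait until a debugger attaches on port 8000;
    // with more than one task per node the port will clash, so debug one task at a time.
    conf.set("mapred.child.java.opts",
        "-Xmx512m -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000");
    // ... rest of the job configuration (omitted) ...
  }
}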
Regards,
Subroto Sanyal
_
From: bejoy.had...@gmail.com [mailto:bejoy.had...@gmail.com]
Sent: Thursday, September 15, 2011 9:49 AM
To: mapreduce-
Hi David,
Request you to go through the link:
https://issues.apache.org/jira/browse/HADOOP-4305?focusedCommentId=12678406&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12678406
Hope this link answers your question.
Regards,
Subroto Sanyal
-Orig
ogs".
From the MapReduce tutorial at:
http://hadoop.apache.org/common/docs/r0.18.3/mapred_tutorial.html
*The standard output (stdout) and error (stderr) streams of the task are
read by the TaskTracker and logged to ${HADOOP_LOG_DIR}/userlogs *
Regards,
Subroto Sanyal
__
-piFBP4fE&feature=player_embedded#at=812
or
http://www.danielblaisdell.com/2008/map-reduce-graph-traversal/
Hope the mentioned links answer your questions.
Regards,
Subroto Sanyal
Hi Subroto,
Just one more thing.
Mapper:
As the "wordcount" mapper consumes line by line, try to consume the HDFS file line