Inline
Best Regards,
Sonal
Nube Technologies http://www.nubetech.co
http://in.linkedin.com/in/sonalgoyal
On Fri, Sep 27, 2013 at 10:42 AM, Sai Sai saigr...@yahoo.in wrote:
Hi
I have a few questions I am trying to understand:
1. Is each input split the same as a record? (a rec can be a
The input splits themselves are not copied; only the information on the location of
the splits is sent to the jobtracker, so that it can assign tasktrackers
that are local to each split.
Check the Job Initialization section at
http://answers.oreilly.com/topic/459-anatomy-of-a-mapreduce-job-run-with-hadoop/
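To make the split computation concrete, here is a minimal sketch of the split-sizing rule that FileInputFormat uses on the client side: each file is chopped into splits of size max(minSize, min(maxSize, blockSize)). The class and method names below are illustrative, not the actual Hadoop source; only the resulting offsets plus block locations reach the jobtracker.

```java
// Sketch of FileInputFormat-style split sizing. The file data itself is
// never copied to the jobtracker -- only these computed boundaries and the
// blocks' host locations are.
public class SplitSizing {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        // max(minSize, min(maxSize, blockSize))
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    static long numSplits(long fileLength, long splitSize) {
        return (fileLength + splitSize - 1) / splitSize; // ceiling division
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024; // 64 MB, the old HDFS default
        long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);
        System.out.println(splitSize);                                // 67108864
        System.out.println(numSplits(200L * 1024 * 1024, splitSize)); // 4
    }
}
```

With the defaults, the split size equals the block size, so a 200 MB file yields four splits (three full blocks and a remainder).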
Hi All,
For a few years now I have been working as a Hadoop admin on the Linux platform, though the
majority of our servers run Solaris (Sun SPARC hardware). Many times I have
seen that Hadoop is compatible with Linux. Is that right? If yes, then what
all do I need to have so that I can run Hadoop on Solaris in
Hi, thank you for your reply.
The Hadoop version is hadoop-0.20.2-cdh3u4.
I guess the Jetty version is jetty-6.1.26 (because I see the files
jetty-6.1.26.cloudera.1.jar, jetty-servlet-tester-6.1.26.cloudera.1.jar,
and jetty-util-6.1.26.cloudera.1.jar in $HADOOP_HOME/lib/).
how to ship a patched
Hi,
I'm just trying to back up some files to our FTP server.
hadoop distcp hdfs:///data/ ftp://user:pass@server/data/
returns after some minutes with:
Task TASKID=task_201308231529_97700_m_02 TASK_TYPE=MAP
TASK_STATUS=FAILED FINISH_TIME=1380217916479 ERROR=java\.io\.IOException:
Cannot
Hi,
Can we submit container requests from multiple threads in parallel to the
Resource Manager?
Thanks,
Kishore
Hi,
I suggest you not do that. After YARN-744 goes in, this will be
prevented on the RM side. May I know why you want to do this? Is there any
advantage or use case?
Thanks,
Omkar Joshi
*Hortonworks Inc.* http://www.hortonworks.com
On Fri, Sep 27, 2013 at 8:31 AM, Krishna Kishore Bonagiri
Hi Omkar,
Thanks for the quick reply. I have a requirement for sets of containers
depending on some of my business logic. I found that each of the request
allocations takes around 2 seconds, so I am thinking of making the
requests at the same time from multiple threads.
Kishore
On Fri, Sep 27,
My point is: why do you want multiple threads, as part of a single AM, talking to
the RM simultaneously? I think only the AM is supposed to use AMRMProtocol, and if
the requirement is to have multiple requestors requesting resources, then they
should be clubbed into one single request and sent to the RM. One more thing
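One way to follow that advice while still letting several parts of the application express their needs concurrently is to have the requestor threads enqueue their demands and let the single AM heartbeat thread club them into one ask. A minimal sketch of that pattern, assuming a hypothetical Demand record (this is not the YARN API itself; in real code the drained list would be translated into one allocate call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: business-logic threads enqueue container demands; a single AM
// thread drains the queue and sends one combined request per heartbeat.
public class ClubbedRequests {
    // Hypothetical demand record: a count of containers of a given size.
    static class Demand {
        final int containers;
        final int memoryMb;
        Demand(int containers, int memoryMb) {
            this.containers = containers;
            this.memoryMb = memoryMb;
        }
    }

    private final ConcurrentLinkedQueue<Demand> pending = new ConcurrentLinkedQueue<>();

    // Safe to call from any business-logic thread.
    public void request(int containers, int memoryMb) {
        pending.add(new Demand(containers, memoryMb));
    }

    // Called only from the single AM heartbeat thread: drain and club.
    public List<Demand> drainForHeartbeat() {
        List<Demand> clubbed = new ArrayList<>();
        Demand d;
        while ((d = pending.poll()) != null) {
            clubbed.add(d);
        }
        return clubbed; // here real code would issue one request to the RM
    }
}
```

The single drain point keeps all RM communication on one thread, which matches the one-AM-per-AMRMProtocol assumption above while the 2-second allocation latency is amortized over one combined request.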
I am trying to get the job tracker counters in my reducer. It works on a
single-node demo Hadoop installation but fails on a real cluster where Kerberos is
used for authentication.
RunningJob parentJob =
client.getJob(JobID.forName(
For the JobClient to compute the input splits, doesn't it need to contact the
NameNode? Only the NameNode knows where the blocks are, so how can it compute the splits
without that additional call?
On Fri, Sep 27, 2013 at 1:41 AM, Sonal Goyal sonalgoy...@gmail.com wrote:
The input splits are not copied, only the
Technically, the block locations are provided by the InputSplit, which in
the FileInputFormat case is backed by the FileSystem interface.
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/InputSplit.html
The thing to realize here is that the FileSystem implementation is
I wanted to elaborate on what happened.
A Hadoop slave was added to a live cluster. It turns out, I think, that the
mapred-site.xml was not configured with the correct master host. But
alas, in any case these commands were run:
* $ hadoop mradmin -refreshNodes
* $ hadoop dfsadmin