Please don't hijack threads; start a new one. Thanks.

On Jan 6, 2012, at 10:41 AM, Arun C Murthy wrote:

> You're probably hitting MAPREDUCE-3537. Try hadoop-0.23.1-SNAPSHOT, or build it 
> yourself from branch-0.23 on ASF svn.
> 
> Arun
> 
> On Jan 5, 2012, at 10:45 PM, raghavendhra rahul wrote:
> 
>> Yes, I am writing my own ApplicationMaster. Is there a way to specify something like
>> node1: 10 containers
>> node2: 10 containers
>> Can we specify this kind of list from the ApplicationMaster?
>> 
>> Also, I set request.setHostName("client"), where "client" is the hostname of a 
>> node.
>> I checked the log and found the following error:
>> java.io.FileNotFoundException: File /local1/yarn/.yarn/local/usercache/rahul_2011/appcache/application_1325760852770_0001 does not exist
>>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:431)
>>         at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:815)
>>         at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
>>         at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
>>         at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:700)
>>         at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:697)
>>         at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)
>>         at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:697)
>>         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:122)
>>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:237)
>>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:67)
>>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>         at java.lang.Thread.run(Thread.java:636)
>> 
>> Any ideas?
>> 
>> 
>> On Fri, Jan 6, 2012 at 12:41 AM, Arun C Murthy <a...@hortonworks.com> wrote:
>> Are you writing your own application, i.e. a custom ApplicationMaster?
>> 
>> You need to pass a ResourceRequest (RR) with a valid hostname, along with 
>> (optionally) an RR for the rack, and also a mandatory RR with * as the 
>> resource-name.
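>> 
>> For example, something along these lines (an untested sketch against the 0.23 
>> records API; the rack name, priority and 1024 MB capability below are just 
>> placeholders):
>> 
>>   import java.util.ArrayList;
>>   import java.util.List;
>>   import org.apache.hadoop.yarn.api.records.Priority;
>>   import org.apache.hadoop.yarn.api.records.Resource;
>>   import org.apache.hadoop.yarn.api.records.ResourceRequest;
>>   import org.apache.hadoop.yarn.util.Records;
>> 
>>   // Build the three ResourceRequests the scheduler expects: one for the node,
>>   // one for its rack, and the mandatory "*" (any host) request.
>>   List<ResourceRequest> buildAsks(String host, String rack, int numContainers) {
>>     List<ResourceRequest> asks = new ArrayList<ResourceRequest>();
>>     for (String name : new String[] { host, rack, "*" }) {
>>       ResourceRequest rr = Records.newRecord(ResourceRequest.class);
>>       rr.setHostName(name);               // host name, rack name, or "*"
>>       rr.setNumContainers(numContainers);
>>       Priority pri = Records.newRecord(Priority.class);
>>       pri.setPriority(0);                 // placeholder priority
>>       rr.setPriority(pri);
>>       Resource capability = Records.newRecord(Resource.class);
>>       capability.setMemory(1024);         // MB per container (placeholder)
>>       rr.setCapability(capability);
>>       asks.add(rr);
>>     }
>>     return asks;
>>   }
>> 
>> So buildAsks("client", "/default-rack", 10) would ask for 10 containers on 
>> "client"; the returned list goes into the ask list of the AllocateRequest you 
>> send to the ResourceManager.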
>> 
>> Arun
>> 
>> On Jan 4, 2012, at 8:04 PM, raghavendhra rahul wrote:
>> 
>>> Hi,
>>> 
>>> I tried to set the node on which the containers are launched from within the 
>>> ApplicationMaster.
>>> I set the parameter as
>>> request.setHostName("client");
>>> but the containers are not launched on the intended host. Instead the loop 
>>> goes on continuously:
>>> 2012-01-04 15:11:48,535 INFO appmaster.ApplicationMaster (ApplicationMaster.java:run(204)) - Current application state: loop=95, appDone=false, total=2, requested=2, completed=0, failed=0, currentAllocated=0
>>> 
>>> On Wed, Jan 4, 2012 at 11:24 PM, Robert Evans <ev...@yahoo-inc.com> wrote:
>>> Ann,
>>> 
>>> A container more or less corresponds to a task in MRV1.  There is one 
>>> exception to this: the ApplicationMaster also runs in a container.  The 
>>> ApplicationMaster requests a new container for each mapper or reducer task 
>>> that it wants to launch.  Separate code, outside the container, serves up the 
>>> intermediate mapper output; it runs as part of the NodeManager (similar to 
>>> the TaskTracker from before).  When the ApplicationMaster requests a 
>>> container it also includes a hint as to where it would like the container 
>>> placed.  In fact it actually makes three requests: one for the exact node, 
>>> one for the rack the node is on, and one that is generic and could be 
>>> satisfied anywhere.  The scheduler tries to honor those requests in that 
>>> order, so data locality is still considered and generally honored.  Yes, 
>>> there is the possibility of some back and forth to get a container, but the 
>>> ApplicationMaster will generally try to use all of the containers it is 
>>> given, even if they are not optimal.
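>>> 
>>> As a rough sketch of the AM side of that (untested, against the 0.23 records 
>>> API; launchContainer() here is a hypothetical helper that would start the 
>>> task via the ContainerManager on whichever node was granted):
>>> 
>>>   import java.util.List;
>>>   import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
>>>   import org.apache.hadoop.yarn.api.records.Container;
>>> 
>>>   // The RM's reply to allocate() carries whatever containers the scheduler
>>>   // could grant -- on the exact node, on the rack, or anywhere ("*").
>>>   void handleAllocation(AllocateResponse response) {
>>>     List<Container> allocated = response.getAMResponse().getAllocatedContainers();
>>>     for (Container c : allocated) {
>>>       // c.getNodeId() may not be the node we hinted at; the AM normally
>>>       // launches on it anyway rather than releasing it and re-asking.
>>>       launchContainer(c);   // hypothetical helper
>>>     }
>>>   }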
>>> 
>>> --Bobby Evans
>>> 
>>> 
>>> On 1/4/12 10:23 AM, "Ann Pal" <ann_r_...@yahoo.com> wrote:
>>> 
>>> Hi,
>>> I am trying to understand more about Hadoop Next Gen MapReduce and have a 
>>> few questions based on this post:
>>> 
>>> http://developer.yahoo.com/blogs/hadoop/posts/2011/03/mapreduce-nextgen-scheduler/
>>> 
>>> [1] How does the application decide how many containers it needs? Are the 
>>> containers used to store the intermediate results at the map nodes?
>>> 
>>> [2] During resource allocation, if the ResourceManager has no mapping 
>>> between map tasks and the resources it allocates, how can it allocate the 
>>> right resources? It might end up allocating resources on a node that does 
>>> not have the data for the map task, which is not optimal. In that case the 
>>> ApplicationMaster would have to reject the container and request again. 
>>> There could be considerable back-and-forth between the ApplicationMaster 
>>> and the ResourceManager before it converges. Is this right?
>>> 
>>> Thanks!
>>> 
>>> 
>> 
>> 
> 
