Hi Chris,

Did you make sure to pass in the ContainerRequest rack param as null?
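Something along these lines is what I had in mind (just a sketch -- the host name is a placeholder, and I'm assuming the ContainerRequest constructor overload that also takes a relaxLocality flag):

    import org.apache.hadoop.yarn.api.records.{Priority, Resource}
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest
    import org.apache.hadoop.yarn.util.Records

    val capability = Records.newRecord(classOf[Resource])
    capability.setMemory(memMb)
    capability.setVirtualCores(cpuCores)

    val priority = Records.newRecord(classOf[Priority])
    priority.setPriority(0)

    // Name the hosts you want, leave racks as null, and set relaxLocality to
    // false so the scheduler shouldn't fall back to other nodes.
    val request = new ContainerRequest(
      capability,
      Array("node-01.example.com"),  // placeholder host name
      null,                          // racks: pass null here
      priority,
      false)                         // relaxLocality

    amClient.addContainerRequest(request)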
-Sandy

On Wed, May 7, 2014 at 12:44 PM, Chris Riccomini <[email protected]> wrote:

> Hey Guys,
>
> I am creating a container request:
>
>   protected def requestContainers(memMb: Int, cpuCores: Int, containers: Int) {
>     info("Requesting %d container(s) with %dmb of memory" format (containers, memMb))
>     val capability = Records.newRecord(classOf[Resource])
>     val priority = Records.newRecord(classOf[Priority])
>     priority.setPriority(0)
>     capability.setMemory(memMb)
>     capability.setVirtualCores(cpuCores)
>     (0 until containers).foreach(idx =>
>       amClient.addContainerRequest(new ContainerRequest(capability, null, null, priority)))
>   }
>
> This pretty closely mirrors the distributed shell example.
>
> If I put an array with a host string in the ContainerRequest, YARN seems
> to completely ignore this request, and continues to put all containers on
> one or two nodes in the grid, which aren't the ones I requested, even
> though the grid is completely empty, and there are 15 nodes available. This
> also holds true if I put "false" for relax locality. I'm running the
> CapacityScheduler with a node-locality-delay set to 40. Previously, I tried
> the FifoScheduler, and it exhibited the same behavior.
>
> All NMs are just using the /default-rack for their rack. The strings that
> I'm putting in the hosts String[] parameter in ContainerRequest are hard
> coded to exactly match the NodeIds being listed in the NMs.
>
> What am I doing wrong? I feel like I'm missing some configuration on the
> capacity scheduler or NMs or something.
>
> Cheers,
> Chris
