I checked the link to the YARN job tracking page and saw one map task and one
reduce task, but both were in "pending" status. I drilled down by clicking the
number and saw that the item shows "scheduled" in the state column. I then went
back to the YARN job tracking page and clicked the "logs" link, and the page
shows three links:

stderr : Total file length is 2661 bytes.

stdout : Total file length is 0 bytes.

syslog : Total file length is 481344 bytes.

I clicked the stderr link and found nothing wrong, but the syslog link shows
these messages repeatedly:

2016-07-25 05:46:07,532 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Ramping down all scheduled reduces:0
2016-07-25 05:46:07,532 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Going to preempt 3 due to lack of space for maps
2016-07-25 05:46:07,532 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Recalculating schedule, headroom=<memory:2048, vCores:0>
2016-07-25 05:46:07,532 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Reduce slow start threshold not met. completedMapsForReduceSlowstart 1
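
For what it's worth, my reading of the "headroom=<memory:2048, vCores:0>" line
is that the AM only has 2048 MB (and no vCores) of headroom left, which seems
too little to start the map containers, so the allocator keeps ramping down and
preempting the scheduled reduce to make room. I am guessing the relevant knobs
are the NodeManager resource limits and the per-container memory requests. A
minimal sketch of what I would try in yarn-site.xml and mapred-site.xml (the
property names are standard Hadoop 2.x settings, but the values below are only
assumptions for a small test box, not my actual config):

<!-- yarn-site.xml: raise what one NodeManager may hand out (assumed values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>  <!-- total MB YARN may allocate on this node -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>  <!-- total vCores YARN may allocate on this node -->
</property>

<!-- mapred-site.xml: shrink per-container requests so AM + map + reduce fit -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>

These are only guesses from reading the log, so please correct me if the real
cause is something else.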

I am eager to know what I should do to fix this and get the job running.
Thanks!




At 2016-07-25 09:30:09, "hongbin ma" <[email protected]> wrote:

Skipping is normal, as it reflects the switch from layered cubing to fast cubing.
There's an icon in the "Build Cube" step linking to the Hadoop job status; you
should check that out.

--

Regards,

Bin Mahone | 马洪宾
