Hi all,

I stumbled upon this problem as well while trying to run the default wordcount 
shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 
7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the 
other as TT+DN. Security is enabled. The input file is about 600 kB and the 
error is 

2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room 
for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map 
to take 9223372036854775807
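
For what it's worth, 9223372036854775807 is exactly Long.MAX_VALUE (2^63 - 1,
roughly 8 EiB), so the estimate looks more like an overflow or an uninitialised
sentinel than a real size. A quick sanity check (throwaway plain-Java snippet,
using only the two numbers from the warning above; the class name is just for
illustration):

public class CheckEstimate {
    public static void main(String[] args) {
        long expected = 9223372036854775807L;  // "we expect map to take"
        long free     = 22854692864L;          // bytes free on 10.156.120.49
        System.out.println(expected == Long.MAX_VALUE);                  // true
        System.out.printf("free: %.2f GiB%n", free / Math.pow(2, 30));   // free: 21.29 GiB
        System.out.println(expected > free);   // true: no realistic amount of free
                                               // space would ever be considered enough
    }
}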

The logfile is attached, together with the configuration files. The version I'm 
using is

Hadoop 1.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 
-r 1479473
Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
This command was run using 
/home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar

If I run the default configuration (i.e. no security), then the job succeeds.

Is there something missing in how I set up my nodes? How is it possible that 
the estimated space requirement ends up so large?

Thanks in advance.

Matteo



>Which version of Hadoop are you using? A quick search shows me a bug
>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>similar symptoms. However, that was fixed a long while ago.
>
>
>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>[email protected]> wrote:
>
>> This is the content of the jobtracker log file:
>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000000 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000001 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000002 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000003 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000004 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000005 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000006 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> reduce tasks.
>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> task_201303231139_0001_m_000008, for tracker
>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>> 'attempt_201303231139_0001_m_000008_0' has completed
>> task_201303231139_0001_m_000008 successfully.
>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>> expect map to take 1317624576693539401
>>
>>
>> The value in "we expect map to take" is far too large: 1317624576693539401
>> bytes!
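>>
>> To put that number in perspective (plain arithmetic, nothing Hadoop-specific;
>> the two constants are just the values from the warning above, and the class
>> name is only for illustration):
>>
>> public class Perspective {
>>     public static void main(String[] args) {
>>         long expected = 1317624576693539401L;  // "we expect map to take"
>>         long free     = 8791543808L;           // bytes free on hadoop0.novalocal
>>         System.out.printf("%.2f EiB expected, %.2f GiB free%n",
>>                 expected / Math.pow(2, 60), free / Math.pow(2, 30));
>>         // prints: 1.14 EiB expected, 8.19 GiB free
>>     }
>> }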
>>
>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> [email protected]> wrote:
>>
>>> The estimated value that Hadoop computes is far too large for the simple
>>> example that I am running.
>>>
>>> ---------- Forwarded message ----------
>>> From: Redwane belmaati cherkaoui <[email protected]>
>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>> Subject: Re: About running a simple wordcount mapreduce
>>> To: Abdelrahman Shettia <[email protected]>
>>> Cc: [email protected], reduno1985 <[email protected]>
>>>
>>>
>>> This is the output that I get. I am running two machines, as you can see. Do
>>> you see anything suspicious?
>>> Configured Capacity: 21145698304 (19.69 GB)
>>> Present Capacity: 17615499264 (16.41 GB)
>>> DFS Remaining: 17615441920 (16.41 GB)
>>> DFS Used: 57344 (56 KB)
>>> DFS Used%: 0%
>>> Under replicated blocks: 0
>>> Blocks with corrupt replicas: 0
>>> Missing blocks: 0
>>>
>>> -------------------------------------------------
>>> Datanodes available: 2 (2 total, 0 dead)
>>>
>>> Name: 11.1.0.6:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 10572849152 (9.85 GB)
>>> DFS Used: 28672 (28 KB)
>>> Non DFS Used: 1765019648 (1.64 GB)
>>> DFS Remaining: 8807800832(8.2 GB)
>>> DFS Used%: 0%
>>> DFS Remaining%: 83.31%
>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>
>>>
>>> Name: 11.1.0.3:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 10572849152 (9.85 GB)
>>> DFS Used: 28672 (28 KB)
>>> Non DFS Used: 1765179392 (1.64 GB)
>>> DFS Remaining: 8807641088(8.2 GB)
>>> DFS Used%: 0%
>>> DFS Remaining%: 83.3%
>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>
>>>
>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>> [email protected]> wrote:
>>>
>>>> Hi Redwane,
>>>>
>>>> Please run the following command as the hdfs user on any datanode. The
>>>> output will look something like this. Hope this helps.
>>>>
>>>> hadoop dfsadmin -report
>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>> Present Capacity: 70375292928 (65.54 GB)
>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>> DFS Used: 480129024 (457.89 MB)
>>>> DFS Used%: 0.68%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> Thanks
>>>> -Abdelrahman
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 
>>>> <[email protected]>wrote:
>>>>
>>>>>
>>>>> I have my hosts running on OpenStack virtual machine instances; each
>>>>> instance has a 10 GB hard disk. Is there a way to see how much space is
>>>>> available in HDFS without the web UI?
>>>>>
>>>>>
>>>>> Sent from Samsung Mobile
>>>>>
>>>>> Serge Blazhievsky <[email protected]> wrote:
>>>>> Have you checked in the web UI how much space you have on HDFS?
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>> [email protected]> wrote:
>>>>>
>>>>> Hi Redwane ,
>>>>>
>>>>> It is possible that the hosts which are running tasks do not have
>>>>> enough space. Those dirs are configured in mapred-site.xml.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>> [email protected]> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Redwane belmaati cherkaoui <[email protected]>
>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>> To: [email protected]
>>>>>>
>>>>>>
>>>>>> Hi
>>>>>> I am trying to run a wordcount mapreduce job on several files (<20
>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
>>>>>> The jobtracker log file shows the following warning:
>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>>>>>> take 1317624576693539401
>>>>>>
>>>>>> Please help me.
>>>>>> Best Regards,
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>


Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.156.120.41:9000</value>
  </property>

  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>

  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>

  <property>
    <name>hadoop.kerberos.kinit.command</name>
    <value>/usr/bin/kinit</value>
  </property>

  <property>
    <name>hadoop.http.filter.initializers</name>
    <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
  </property>

  <property>
    <name>hadoop.http.authentication.type</name>
    <value>simple</value>
  </property>

  <property>
    <name>hadoop.http.authentication.token.validity</name>
    <value>36000</value>
  </property>

  <property>
    <name>hadoop.http.authentication.signature.secret.file</name>
    <value>/home/hadoop-user/hadoop-tutorial-conf/http-secret-file</value>
  </property>

  <property>
    <name>hadoop.http.authentication.cookie.domain</name>
    <value></value>
  </property>

  <property>
    <name>hadoop.http.authentication.simple.anonymous.allowed</name>
    <value>true</value>
  </property>

  <property>
    <name>hadoop.http.authentication.kerberos.principal</name>
    <value>HTTP/[email protected]</value>
  </property>

  <property>
    <name>hadoop.http.authentication.kerberos.keytab</name>
    <value>/home/hadoop-user/hadoop/conf/keytabs/hdfs.keytab</value>
  </property>

  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>

  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>RULE:[2:$1@$0](.*@LOCALDOMAIN)s/@.*//
        RULE:[1:$1@$0](.*@LOCALDOMAIN)s/@.*//
        RULE:[2:$1@$0](mapred@.*HADOOP.LRZ.DE)s/.*/hadoop-user/
        RULE:[2:$1@$0](hdfs@.*HADOOP.LRZ.DE)s/.*/hadoop-user/
        RULE:[2:$1@$0](dn@.*HADOOP.LRZ.DE)s/.*/hadoop-user/
        RULE:[2:$1@$0](tt@.*HADOOP.LRZ.DE)s/.*/hadoop-user/
        RULE:[2:$1@$0](HTTP@.*HADOOP.LRZ.DE)s/.*/hadoop-user/
        RULE:[2:$1@$0](.*@HADOOP.LRZ.DE)s/@.*//
        RULE:[1:$1@$0](.*@HADOOP.LRZ.DE)s/@.*//
        RULE:[2:$1@$0](.*@LRZ-MUENCHEN.DE)s/@.*//
        RULE:[1:$1@$0](.*@LRZ-MUENCHEN.DE)s/@.*//
        RULE:[2:$1@$0]([email protected])s/@.//
        RULE:[1:$1@$0]([email protected])s/@.//
        DEFAULT</value>
  </property>

</configuration>

hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.hosts.exclude</name>
    <value>/tmp/nodes.dismiss</value>
  </property>

  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
    <description>
If "true", access tokens are used as capabilities for accessing datanodes.
If "false", no access tokens are checked on accessing datanodes.
    </description>
  </property>

  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:1004</value>
  </property>

  <property>
    <name>dfs.datanode.http.address</name>
    <!-- cluster yahoo standard -->
    <value>0.0.0.0:1006</value>
  </property>

  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/[email protected]</value>
    <description>
Kerberos principal name for the NameNode
    </description>
  </property>

  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/home/hadoop-user/hadoop/conf/keytabs/hdfs.keytab</value>
 <description>
        Combined keytab file containing the namenode service (and host) principals.
    </description>
  </property>

  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>

  <property>    
    <name>dfs.web.authentication.kerberos.principal</name>    
    <value>HTTP/[email protected]</value>    
    <description> The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. 
The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification.    
    </description>  
  </property>

  <property>    
    <name>dfs.web.authentication.kerberos.keytab</name>    
    <value>/home/hadoop-user/hadoop/conf/keytabs/hdfs.keytab</value>
    <description>The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.    
    </description>  
  </property>

  <property>    
    <name>dfs.datanode.kerberos.principal</name>    
    <value>dn/[email protected]</value>  
    <description>The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name.   
    </description>  
  </property>

  <property>     
    <name>dfs.datanode.keytab.file</name>    
    <value>/home/hadoop-user/hadoop/conf/keytabs/dn.keytab</value>
    <description>The filename of the keytab file for the DataNode.    
    </description>  
  </property>

</configuration>

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.156.120.41:9001</value>
  </property>

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx256m</value>
  </property>

  <property>
    <name>mapred.system.dir</name>
    <value>/hadoop/mapred/system</value>
  </property>

  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.queue.names</name>
    <value>default,low_prio,high_prio</value>
  </property>

  <property>
    <name>mapred.acls.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
  </property>

  <property>
    <name>mapred.fairscheduler.preemption</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.fairscheduler.poolnameproperty</name>
    <value>mapred.job.queue.name</value>
  </property>

  <property>
    <name>mapreduce.jobtracker.kerberos.principal</name>
    <value>mapred/[email protected]</value>
    <description>
JT principal
   </description>
</property>

 <property>
   <name>mapreduce.jobtracker.keytab.file</name>
   <value>/home/hadoop-user/hadoop/conf/keytabs/mapred.keytab</value>
   <description>
       The keytab for the jobtracker principal.
   </description>
 </property>

  <property>  
    <name>mapreduce.tasktracker.kerberos.principal</name>   
    <value>tt/[email protected]</value>  
    <description>Kerberos principal name for the TaskTracker."_HOST" is replaced by the host name of the TaskTracker.  
    </description> 
  </property>

  <property>   
    <name>mapreduce.tasktracker.keytab.file</name>   
    <value>/home/hadoop-user/hadoop/conf/keytabs/tt.keytab</value>
    <description>The filename of the keytab for the TaskTracker</description>  
  </property>

</configuration>
2013-06-01 12:14:48,556 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = hadoop-master/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May  6 06:59:37 UTC 2013
STARTUP_MSG:   java = 1.7.0_21
************************************************************/
2013-06-01 12:15:05,251 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-06-01 12:15:06,266 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-06-01 12:15:06,423 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-06-01 12:15:06,425 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started
2013-06-01 12:15:20,762 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
2013-06-01 12:15:21,463 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=low_prio registered.
2013-06-01 12:15:21,571 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=high_prio registered.
2013-06-01 12:15:43,320 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-06-01 12:15:53,081 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user mapred/[email protected] using keytab file /home/hadoop-user/hadoop/conf/keytabs/mapred.keytab
2013-06-01 12:15:53,278 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2013-06-01 12:15:54,414 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2013-06-01 12:15:54,516 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2013-06-01 12:15:54,521 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2013-06-01 12:15:54,667 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-06-01 12:15:54,881 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as hadoop-user
2013-06-01 12:16:01,119 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-06-01 12:16:01,446 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.
2013-06-01 12:16:01,509 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.
2013-06-01 12:16:09,022 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-06-01 12:16:16,810 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-06-01 12:16:17,157 INFO org.apache.hadoop.http.HttpServer: Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context WepAppsContext
2013-06-01 12:16:17,166 INFO org.apache.hadoop.http.HttpServer: Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2013-06-01 12:16:17,175 INFO org.apache.hadoop.http.HttpServer: Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2013-06-01 12:16:17,382 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030
2013-06-01 12:16:17,450 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
2013-06-01 12:16:17,452 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2013-06-01 12:16:17,458 INFO org.mortbay.log: jetty-6.1.26
2013-06-01 12:16:52,507 INFO org.mortbay.log: Started [email protected]:50030
2013-06-01 12:16:53,308 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-06-01 12:16:53,399 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
2013-06-01 12:16:54,938 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2013-06-01 12:16:55,023 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2013-06-01 12:16:56,139 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-06-01 12:16:56,138 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2013-06-01 12:16:56,211 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
2013-06-01 12:16:56,224 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
2013-06-01 12:16:56,235 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
2013-06-01 12:16:56,227 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
2013-06-01 12:16:56,251 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
2013-06-01 12:16:56,260 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
2013-06-01 12:16:56,278 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
2013-06-01 12:16:56,296 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
2013-06-01 12:16:56,296 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
2013-06-01 12:16:56,566 INFO org.apache.hadoop.mapred.JobTracker: Setting safe mode to true. Requested by : hadoop-user
2013-06-01 12:16:56,570 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
2013-06-01 12:17:05,106 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:09,396 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:12,624 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:14,862 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:17,479 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:19,567 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:20,977 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:22,757 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:24,118 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:25,474 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:27,049 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:28,585 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:30,148 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:31,640 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:33,048 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:34,537 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:35,846 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:37,157 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:38,836 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:40,435 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:41,804 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:43,246 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:44,918 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:46,605 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:47,981 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:49,306 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:50,703 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:51,979 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:53,254 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:54,610 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:55,886 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:57,262 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:58,562 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:17:59,939 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:01,263 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:03,171 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:04,526 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:05,788 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:07,110 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:08,485 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:09,755 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:11,390 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:12,667 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:14,016 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:15,327 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:16,606 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:17,861 INFO org.apache.hadoop.mapred.JobTracker: HDFS initialized but not 'healthy' yet, waiting...
2013-06-01 12:18:19,562 INFO org.apache.hadoop.mapred.JobTracker: Setting safe mode to false. Requested by : hadoop-user
2013-06-01 12:18:19,742 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-06-01 12:18:20,739 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2013-06-01 12:18:20,939 INFO org.apache.hadoop.mapred.JobHistory: Creating DONE folder at file:/home/hadoop-user/hadoop-1.2.0/logs/history/done
2013-06-01 12:18:21,025 INFO org.apache.hadoop.mapred.JobHistory: Job History MaxAge is 2592000000 ms (30.00 days), Cleanup Frequency is 86400000 ms (1.00 days)
2013-06-01 12:18:21,069 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
2013-06-01 12:18:21,149 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
2013-06-01 12:18:21,150 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
2013-06-01 12:18:21,175 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
2013-06-01 12:18:23,317 INFO org.apache.hadoop.mapred.FairScheduler: Successfully configured FairScheduler
2013-06-01 12:18:23,320 INFO org.apache.hadoop.mapred.JobTracker: Starting the recovery process for 0 jobs ...
2013-06-01 12:18:23,321 INFO org.apache.hadoop.mapred.JobTracker: Recovery done! Recoverd 0 of 0 jobs.
2013-06-01 12:18:23,322 INFO org.apache.hadoop.mapred.JobTracker: Recovery Duration (ms):2
2013-06-01 12:18:23,323 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
2013-06-01 12:18:23,875 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to 
2013-06-01 12:18:23,876 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to 
2013-06-01 12:18:23,877 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-06-01 12:18:23,878 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes
2013-06-01 12:18:23,907 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
2013-06-01 12:18:27,027 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.156.120.49
2013-06-01 12:18:27,083 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_10.156.120.49:localhost/127.0.0.1:57537 to host 10.156.120.49
2013-06-01 12:20:45,527 INFO org.apache.hadoop.mapred.JobTracker: jobToken generated and stored with users keys in /hadoop/mapred/system/job_201306011214_0001/jobToken
2013-06-01 12:20:48,091 INFO org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: registering token for renewal for service =10.156.120.41:9000 and jobID = job_201306011214_0001
2013-06-01 12:20:48,208 INFO org.apache.hadoop.hdfs.DFSClient: Renewing HDFS_DELEGATION_TOKEN token 4 for hadoop-user on 10.156.120.41:9000
2013-06-01 12:20:48,358 INFO org.apache.hadoop.mapred.JobInProgress: job_201306011214_0001: nMaps=1 nReduces=1 max=-1
2013-06-01 12:20:48,474 INFO org.apache.hadoop.mapred.AuditLogger: USER=hadoop-user	IP=10.156.120.1	OPERATION=SUBMIT_JOB	TARGET=job_201306011214_0001 in queue default	RESULT=SUCCESS
2013-06-01 12:20:48,604 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201306011214_0001
2013-06-01 12:20:48,624 INFO org.apache.hadoop.mapred.JobTracker: Job job_201306011214_0001 added successfully for user 'hadoop-user' to queue 'default'
2013-06-01 12:20:48,623 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201306011214_0001
2013-06-01 12:20:48,639 INFO org.apache.hadoop.mapred.AuditLogger: USER=hadoop-user	IP=10.156.120.1	OPERATION=SUBMIT_JOB	TARGET=job_201306011214_0001	RESULT=SUCCESS
2013-06-01 12:20:59,687 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201306011214_0001 = 674570. Number of splits = 1
2013-06-01 12:20:59,727 INFO org.apache.hadoop.mapred.JobInProgress: tip:task_201306011214_0001_m_000000 has split on node:/default-rack/10.156.120.49
2013-06-01 12:20:59,742 INFO org.apache.hadoop.mapred.JobInProgress: job_201306011214_0001 LOCALITY_WAIT_FACTOR=1.0
2013-06-01 12:20:59,819 INFO org.apache.hadoop.mapred.JobInProgress: Job job_201306011214_0001 initialized successfully with 1 map tasks and 1 reduce tasks.
2013-06-01 12:21:01,284 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_201306011214_0001_m_000002_0' to tip task_201306011214_0001_m_000002, for tracker 'tracker_10.156.120.49:localhost/127.0.0.1:57537'
2013-06-01 12:22:36,761 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201306011214_0001_m_000002_0' has completed task_201306011214_0001_m_000002 successfully.
2013-06-01 12:22:37,437 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854574080 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:37,842 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854574080 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:38,259 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:38,646 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:39,076 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:39,494 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:39,908 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:40,326 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:40,741 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:41,137 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:41,521 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:41,931 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854647808 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:42,345 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854651904 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:42,711 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:43,106 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:43,497 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:43,905 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:44,311 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:44,715 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:45,105 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:45,489 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:45,882 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:46,281 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:46,667 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:47,143 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:47,569 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:47,987 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:48,406 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:48,790 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:49,178 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:49,569 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:49,955 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:50,295 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:50,633 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:50,970 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:51,309 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:51,654 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
2013-06-01 12:22:52,032 INFO org.apache.hadoop.mapred.AuditLogger: USER=hadoop-user	IP=10.156.120.1	OPERATION=KILL_JOB	TARGET=job_201306011214_0001 in queue default	RESULT=SUCCESS
2013-06-01 12:22:52,039 INFO org.apache.hadoop.mapred.JobTracker: Killing job job_201306011214_0001
2013-06-01 12:22:52,049 INFO org.apache.hadoop.mapred.JobInProgress: Killing job 'job_201306011214_0001'
2013-06-01 12:22:52,344 INFO org.apache.hadoop.mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_201306011214_0001_m_000001_0' to tip task_201306011214_0001_m_000001, for tracker 'tracker_10.156.120.49:localhost/127.0.0.1:57537'
2013-06-01 12:24:09,526 INFO org.apache.hadoop.mapred.JobInProgress: Task 'attempt_201306011214_0001_m_000001_0' has completed task_201306011214_0001_m_000001 successfully.
2013-06-01 12:24:09,610 INFO org.apache.hadoop.mapred.JobInProgress$JobSummary: jobId=job_201306011214_0001,submitTime=1370082045797,launchTime=1370082059756,firstJobSetupTaskLaunchTime=1370082060065,firstJobCleanupTaskLaunchTime=1370082172340,finishTime=1370082249556,numMaps=1,numSlotsPerMap=1,numReduces=1,numSlotsPerReduce=1,user=hadoop-user,queue=default,status=KILLED,mapSlotSeconds=154,reduceSlotsSeconds=0,clusterMapCapacity=2,clusterReduceCapacity=2,jobName=word count
2013-06-01 12:24:10,251 INFO org.apache.hadoop.mapred.JobHistory: Creating DONE subfolder at file:/home/hadoop-user/hadoop-1.2.0/logs/history/done/version-1/10.156.120.41_1370081762283_/2013/06/01/000000
2013-06-01 12:24:10,297 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/home/hadoop-user/hadoop-1.2.0/logs/history/job_201306011214_0001_1370082045797_hadoop-user_word+count to file:/home/hadoop-user/hadoop-1.2.0/logs/history/done/version-1/10.156.120.41_1370081762283_/2013/06/01/000000
2013-06-01 12:24:10,418 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201306011214_0001_m_000001_0'
2013-06-01 12:24:10,442 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201306011214_0001_m_000002_0'
2013-06-01 12:24:10,726 INFO org.apache.hadoop.mapred.JobHistory: Moving file:/home/hadoop-user/hadoop-1.2.0/logs/history/job_201306011214_0001_conf.xml to file:/home/hadoop-user/hadoop-1.2.0/logs/history/done/version-1/10.156.120.41_1370081762283_/2013/06/01/000000
2013-06-01 12:24:11,381 INFO org.apache.hadoop.hdfs.DFSClient: Cancelling HDFS_DELEGATION_TOKEN token 4 for hadoop-user on 10.156.120.41:9000