Yes. That explains it and confirms my guess too :-)

stderr:156 0
syslog:995 166247

What are these numbers? They are the byte offsets in the corresponding files at which this task's logs start.
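For illustration, the index format shown in this thread (a LOG_DIR line followed by "stream:offset length" lines) could be parsed with a sketch like this. This is an assumption-based reading of the example above, not Hadoop's actual parser; the sample path is made up.

```python
# Hypothetical sketch: parse a log.index file into per-stream
# (start-offset, length) pairs. Format assumed from the example in
# this thread: one "LOG_DIR:<path>" line, then "<stream>:<offset> <length>".

def parse_log_index(text):
    log_dir = None
    streams = {}
    for line in text.splitlines():
        if line.startswith("LOG_DIR:"):
            log_dir = line[len("LOG_DIR:"):]
        elif ":" in line:
            name, rest = line.split(":", 1)
            offset, length = (int(x) for x in rest.split())
            streams[name] = (offset, length)
    return log_dir, streams

# Illustrative sample; the job/attempt names are placeholders.
sample = """LOG_DIR:/opt/hadoop/logs/userlogs/job_x/attempt_y
stdout:0 0
stderr:156 0
syslog:995 166247"""

log_dir, streams = parse_log_index(sample)
print(streams["syslog"])  # (995, 166247)
```

So "syslog:995 166247" would mean: this attempt's syslog output is the 166247 bytes starting at offset 995 in the shared syslog file.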



Regards,
Ajay Srivastava


On 24-Jul-2013, at 12:10 PM, Vinod Kumar Vavilapalli wrote:


Ah, I should've guessed that. You seem to have JVM reuse enabled. When JVMs are reused, all the tasks sharing a JVM write to the same log files; they only have different index files. The same thing happens for what we call the TaskCleanup tasks, which are launched for failing/killed tasks.
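The point above can be sketched as follows: once you know a task's (offset, length) pair from its log.index, you can pull that task's slice out of the shared log file that reused-JVM tasks append to. The helper name and file contents are purely illustrative, not part of Hadoop.

```python
import os
import tempfile

# Hypothetical helper: read one task's slice of a shared log file,
# given the (offset, length) recorded in its log.index.
def read_log_slice(path, offset, length):
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Demo: two tasks' output appended to one shared "syslog"-style file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"task1 lines...\n" b"task2 lines...\n")
    shared = f.name

# Each 15-byte chunk plays the role of one task's log region.
print(read_log_slice(shared, 15, 15))  # b'task2 lines...\n'
os.unlink(shared)
```

This is why a directory holding only a log.index is not necessarily empty of logs: the bytes live in another attempt's directory, and the index says where.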

Thanks,
+Vinod

On Jul 23, 2013, at 10:55 PM, Ajay Srivastava wrote:

Hi Vinod,

Thanks. It seems that something else is going on -

Here is the content of log.index -

ajay-srivastava:userlogs ajay.srivastava$ cat 
job_201307222115_0188/attempt_201307222115_0188_r_000000_0/log.index
LOG_DIR:/opt/hadoop/bin/../logs/userlogs/job_201307222115_0188/attempt_201307222115_0188_r_000008_0
stdout:0 0
stderr:156 0
syslog:995 166247

It looks like the log.index is pointing to another attempt's directory.
Is it doing some kind of optimization? What is the purpose of log.index?


Regards,
Ajay Srivastava


On 24-Jul-2013, at 11:09 AM, Vinod Kumar Vavilapalli wrote:


It could mean either that all those task-attempts are crashing before the 
process itself gets spawned (check the TT logs), or that those logs are getting 
deleted after the fact. I suspect the former.

Thanks,
+Vinod

On Jul 23, 2013, at 9:33 AM, Ajay Srivastava wrote:

Hi,

I see that most of the tasks have only a log.index created in the 
/opt/hadoop/logs/userlogs/jobId/task_attempt directory.
When does this happen?
Is there a config setting for this, or is this a bug?


Regards,
Ajay Srivastava



