Thanks but we were already on log4j 1.2.16.

The underlying problem is trying to use two different log4j configs in the same 
process. The Cassandra one is causing all the appenders in the Hadoop config to 
be closed when it starts watching its own config. And as the Hadoop config is 
not in the Cassandra config, those appenders are not re-activated. 

I'll put up a patch to make the TaskLogAppender a little safer by checking if 
it's closed before flush. But we need to keep the different configs away from 
each other. 
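For anyone curious what that guard looks like, here is a minimal, self-contained sketch (not the actual Hadoop patch, and `SafeAppender` is a hypothetical stand-in, not the real TaskLogAppender): the NPE happens because a competing log4j config closes the appender and nulls its writer out from under `syncLogs()`, so `flush()` needs to check before touching it.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical stand-in for an appender whose flush() can race with close().
class SafeAppender {
    private PrintWriter writer = new PrintWriter(new StringWriter());
    private boolean closed = false;

    void append(String msg) {
        if (!closed && writer != null) {
            writer.println(msg);
        }
    }

    // The guard: another log4j config may have closed this appender
    // underneath us, so check before touching the (possibly nulled) writer.
    void flush() {
        if (closed || writer == null) {
            return;
        }
        writer.flush();
    }

    void close() {
        closed = true;
        writer = null;  // without the guard above, a later flush() would NPE here
    }
}

public class Demo {
    public static void main(String[] args) {
        SafeAppender a = new SafeAppender();
        a.append("task log line");
        a.close();
        a.flush();  // safe: no NullPointerException
        System.out.println("flush after close: ok");
    }
}
```

That only papers over the symptom, though; the real fix is still keeping the two configs out of each other's way.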

Cheers
  
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 19/08/2011, at 4:26 AM, Aaron Baff wrote:

> Yes, we had this issue, we simply grabbed the latest log4j jar file, and 
> copied it over on top of the existing one. See 
> https://issues.apache.org/jira/browse/MAPREDUCE-2378
> 
> 
> -----Original Message-----
> From: Isaac Dooley [mailto:[email protected]]
> Sent: Thursday, August 18, 2011 8:00 AM
> To: [email protected]
> Subject: RE: NPE in TaskLogAppender
> 
> I've had this problem on linux with 20.203. It occurred when a map or reduce 
> task would mess around with log4j configurations. Basically, I had to create 
> a modified TaskManager that would set some environment variables that would 
> keep my code's static blocks from reconfiguring log4j.
> 
> 
> 
> -----Original Message-----
> From: aaron morton [mailto:[email protected]]
> Sent: Thursday, August 18, 2011 9:30 AM
> To: [email protected]
> Subject: Re: NPE in TaskLogAppender
> 
> An update in case anyone else has this problem. It looks like an issue with 
> competing log4j configurations that could be resolved by using Logger 
> Repositories. I've updated a thread in the brisk user group 
> http://groups.google.com/group/brisk-users/msg/619b5ebba2fb0e89
> 
> Has anyone else had this problem?
> 
> Cheers
> 
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
> 
> On 15/08/2011, at 2:04 PM, aaron morton wrote:
> 
>> I'm running the Cassandra Brisk server with Hadoop core 20.203 on OSX, 
>> everything is local.
>> 
>> I keep running into this problem for Hive jobs
>> 
>> INFO 13:52:39,923 Error from attempt_201108151342_0001_m_000001_1: 
>> java.lang.NullPointerException
>>      at 
>> org.apache.hadoop.mapred.TaskLogAppender.flush(TaskLogAppender.java:67)
>>      at org.apache.hadoop.mapred.TaskLog.syncLogs(TaskLog.java:264)
>>      at org.apache.hadoop.mapred.Child$4.run(Child.java:261)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>      at org.apache.hadoop.mapred.Child.main(Child.java:253)
>> 
>> The only info I've found online was 
>> http://www.mail-archive.com/[email protected]/msg12829.html
>> 
>> Just for fun I tried...
>> * setting mapred.acls.enabled to true
>> * setting mapred.queue.default.acl-submit-job and 
>> mapred.queue.default.acl-administer-jobs to *
>> 
>> There was no discernible increase in joy though.
>> 
>> Any thoughts?
>> 
>> Cheers
>> 
>> -----------------
>> Aaron Morton
>> Freelance Cassandra Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>> 
> 
