[ 
https://issues.apache.org/jira/browse/HIVE-1883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12995934#comment-12995934
 ] 

MIS commented on HIVE-1883:
---------------------------

Carl is right on this. There is no need for a 'scheduled' timer task to 
take care of the log files. The log4j library already used by Hive provides 
enough facilities to manage them.
As far as the current issue is concerned, RollingFileAppender can be used and 
a maximum file size and backup count can be set.
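A minimal sketch of such a configuration (the appender name, file path, and 
limits below are illustrative assumptions, not Hive defaults):

```properties
# Roll hive.log when it reaches 10 MB, keeping at most 5 old files;
# anything beyond MaxBackupIndex is discarded automatically by log4j.
log4j.rootLogger=INFO, RFA
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=/tmp/root/hive.log
log4j.appender.RFA.MaxFileSize=10MB
log4j.appender.RFA.MaxBackupIndex=5
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n
```

With this in place log4j itself bounds disk usage, so no extra cleanup 
thread is needed.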
If it is desired that no data be lost, DailyRollingFileAppender can be 
used instead, and a cron job can be run to clean up a week's [or whatever 
time frame is chosen] log files.
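A sketch of the cleanup such a cron job would perform (the directory and 
file names are illustrative; DailyRollingFileAppender leaves dated files 
like hive.log.2011-02-01 behind when it rolls):

```shell
# Demo in a throwaway directory standing in for /tmp/root.
LOG_DIR=$(mktemp -d)
touch -d '10 days ago' "$LOG_DIR/hive.log.2011-02-01"  # stale rolled-over log
touch "$LOG_DIR/hive.log"                              # current log, untouched
# Remove rolled-over logs not modified in the last 7 days:
find "$LOG_DIR" -name 'hive.log.*' -mtime +7 -delete
ls "$LOG_DIR"    # only hive.log remains
```

The same find command, pointed at the real log directory, can be scheduled 
with a crontab entry such as `0 3 * * * find /tmp/root -name 'hive.log.*' 
-mtime +7 -delete`.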

Further, a 'scheduled' timer task for handling log files has another 
disadvantage: it creates more problems than it solves. 
ScheduledThreadPoolExecutor could be an answer, but it is just not worth 
the effort.

> Periodic cleanup of Hive History log files.
> -------------------------------------------
>
>                 Key: HIVE-1883
>                 URL: https://issues.apache.org/jira/browse/HIVE-1883
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Processor
>    Affects Versions: 0.6.0
>         Environment: Hive 0.6.0,  Hadoop 0.20.1
> SUSE Linux Enterprise Server 11 (i586)
> VERSION = 11
> PATCHLEVEL = 0
>            Reporter: Mohit Sikri
>
> After starting Hive and running queries, transaction history files are 
> created in the /tmp/root folder.
> These files should be removed periodically (not all of them, but those 
> that are too old to represent any significant information).
> Solution :-
> A scheduled timer task that cleans up log files older than a configured 
> age.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
