[ https://issues.apache.org/jira/browse/HADOOP-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12495210 ]

dhruba borthakur commented on HADOOP-1139:
------------------------------------------

I ran an experiment on a cluster that has about 15M blocks. I switched on 
logging both when a block is allocated and when the namenode receives a 
block confirmation from a datanode. The log size at the end of an 
8-hour random-writer run was about 6.5GB. This shows that logging every 
block transition at INFO could flood the logs.
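For scale, a quick back-of-the-envelope check of what that volume implies as a sustained rate (numbers taken from the experiment above, constants rounded):

```java
// Back-of-the-envelope log-rate estimate from the experiment above:
// about 6.5GB of log produced over an 8-hour random-writer run.
class LogRateEstimate {
    static final long BYTES = 6_500_000_000L;              // ~6.5 GB observed
    static final long SECONDS = 8 * 3600;                  // 8-hour run
    static final long BYTES_PER_SECOND = BYTES / SECONDS;  // ~225 KB/s sustained
    static final long BYTES_PER_DAY = BYTES_PER_SECOND * 86400; // ~19.5 GB/day
}
```

Roughly 20GB of namenode log per day if that workload were sustained, which is why per-transition INFO logging is impractical here.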

An alternative is to print log messages only at the following events:

1. When a file is created and then closed, log all blocks that belong to that 
file.
2. When a file gets deleted, log all the blocks that belong to that file.
3. When the namenode replication engine triggers re-replication of a block, log 
it.
4. When a replica is detected to be corrupt and is therefore deleted, log it.


> All block transitions should be logged at log level INFO
> --------------------------------------------------------
>
>                 Key: HADOOP-1139
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1139
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>
> The namenode records block transitions in its log file. Some of the block 
> transition messages are being logged at debug level. These should be logged 
> at INFO level.

-- 
This message is automatically generated by JIRA.