[ https://issues.apache.org/jira/browse/HADOOP-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716566#action_12716566 ]

Luca Telloli edited comment on HADOOP-5188 at 6/5/09 3:26 AM:
--------------------------------------------------------------

Here are a few comments in reply to the latest questions:

@Flavio:
- Append: in a few places throughout the current code, edit file streams are 
closed and later reopened. Your proposal of using multiple ledgers is good, but 
at the same time it sounds more hacky than clean: you would have to persist 
somewhere the list of ledgers that were opened and the role each one plays (is 
it the equivalent of edits? of edits.new?), and allow that role to change when 
edits.new is "renamed" to edits (see the sketch after this list). 
- LogDevice not fully used: currently it is used only for output streams; input 
streams do not use it yet. 
- BackupNode and BookKeeper: my feeling is that the expectation is to spawn a 
new process when using BookKeeper as the logger.
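
To make the first point concrete, here is a minimal, purely illustrative sketch 
of the kind of persistent role bookkeeping the multiple-ledgers approach would 
need. None of these names (LedgerRoleRegistry, Role, promoteEditsNew) exist in 
the patch; they are assumptions for the example, and real code would also have 
to deal with concurrent access and crash consistency.

{code:java}
// Hypothetical sketch, not part of the patch: a persistent map from a role
// (edits / edits.new) to the id of the ledger currently playing that role,
// plus the "rename" that promotes edits.new to edits.
import java.io.*;
import java.util.Properties;

public class LedgerRoleRegistry {
    public enum Role { EDITS, EDITS_NEW }

    private final File storage;                     // where the role -> ledgerId map is persisted
    private final Properties roles = new Properties();

    public LedgerRoleRegistry(File storage) throws IOException {
        this.storage = storage;
        if (storage.exists()) {
            InputStream in = new FileInputStream(storage);
            try {
                roles.load(in);
            } finally {
                in.close();
            }
        }
    }

    /** Record which ledger currently plays the given role and persist the mapping. */
    public synchronized void assign(Role role, long ledgerId) throws IOException {
        roles.setProperty(role.name(), Long.toString(ledgerId));
        persist();
    }

    /** @return the ledger id for the role, or null if no ledger plays that role. */
    public synchronized Long ledgerFor(Role role) {
        String id = roles.getProperty(role.name());
        return id == null ? null : Long.valueOf(id);
    }

    /** The "rename": the ledger that played edits.new becomes edits. */
    public synchronized void promoteEditsNew() throws IOException {
        Long newId = ledgerFor(Role.EDITS_NEW);
        if (newId == null) {
            throw new IllegalStateException("no ledger currently assigned to edits.new");
        }
        roles.setProperty(Role.EDITS.name(), Long.toString(newId));
        roles.remove(Role.EDITS_NEW.name());
        persist();
    }

    private void persist() throws IOException {
        OutputStream out = new FileOutputStream(storage);
        try {
            roles.store(out, "ledger roles");
        } finally {
            out.close();
        }
    }
}
{code}

The fact that this mapping has to be persisted and kept consistent across 
restarts is exactly the part that feels more hacky than the implicit role 
carried today by the edits / edits.new file names.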

@Ben: 
I don't think we're down the road of a mixed API. As far as I remember there's 
only one duplicated method, and the duplication should disappear once the 
integration of LogDevice is completed. 
On the other hand, Konstantin keeps saying that implementing HADOOP-5189 only 
requires implementing the Input/Output stream classes, but I'm still not fully 
convinced about it; a rough sketch of the kind of abstraction I have in mind 
follows below. 
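
To show what I mean, here is a rough, hypothetical sketch of a LogDevice that 
produces both output and input streams, with a trivial file-backed 
implementation standing in for the current edits file behaviour. The names 
(LogDevice, FileLogDevice, createOutputStream, createInputStream) are made up 
for this example and are not the actual classes or signatures in the patch.

{code:java}
// Hypothetical sketch: a LogDevice abstraction covering both directions, so a
// BookKeeper-backed implementation would only have to supply the two streams.
import java.io.*;

interface LogDevice {
    /** Open a stream to append journal records to this device. */
    OutputStream createOutputStream() throws IOException;

    /** Open a stream to replay journal records from this device. */
    InputStream createInputStream() throws IOException;
}

/** Trivial local-file implementation, standing in for the current edits file. */
class FileLogDevice implements LogDevice {
    private final File file;

    FileLogDevice(File file) {
        this.file = file;
    }

    public OutputStream createOutputStream() throws IOException {
        // open in append mode, so close/reopen cycles keep the existing records
        return new BufferedOutputStream(new FileOutputStream(file, true));
    }

    public InputStream createInputStream() throws IOException {
        return new BufferedInputStream(new FileInputStream(file));
    }
}
{code}

A BookKeeper-backed device would have to implement the same two factory methods 
on top of ledgers; whether providing just those stream classes is really all 
that HADOOP-5189 needs is the part I'm still not convinced about.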

FINALLY, since there's no general agreement on LogDevice, for the moment I'm 
unlinking this patch from HADOOP-5189, so that the two can be worked on 
independently. 

  
> Modifications to enable multiple types of logging 
> --------------------------------------------------
>
>                 Key: HADOOP-5188
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5188
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Luca Telloli
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5188.patch, HADOOP-5188.patch, HADOOP-5188.patch, 
> HADOOP-5188.patch, HADOOP-5188.patch, HADOOP-5188.patch, HADOOP-5188.pdf
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
