[
https://issues.apache.org/jira/browse/HADOOP-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12614571#action_12614571
]
Raghu Angadi commented on HADOOP-3573:
--------------------------------------
I think the following are the reasons why these still exist:
# If current trunk is started on an image directory created by 0.12, it should fail
without causing any loss of data, i.e. the user should be able to start 0.12 again
on that directory.
# If a user starts 0.12 on a directory created by current trunk, it should fail
with an error.
The above were the reasons why these existed before HADOOP-2797, and it looks like
they are still valid reasons now. Let me know otherwise.
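
The two requirements above amount to a layout-version check at startup. As a hypothetical sketch (the class and method names here are illustrative, not the actual DFS code): HDFS records a layout version in each storage directory, and the node compares it against the version it supports before touching any data. The constant value below is an assumption for illustration; in HDFS, layout versions are negative integers, with more negative meaning newer.

```java
public class StorageVersionCheck {

  // Assumed value for illustration; each release has one supported layout.
  static final int SUPPORTED_LAYOUT_VERSION = -18;

  // Returns null if the directory is compatible, otherwise an error
  // message. The caller would abort startup without modifying the
  // directory, so the release that wrote it can still be started on it.
  static String checkVersion(int storedLayoutVersion) {
    if (storedLayoutVersion < SUPPORTED_LAYOUT_VERSION) {
      // Directory was written by a newer release (case 2 above):
      // fail with a clear error instead of trying to read it.
      return "layout " + storedLayoutVersion + " is newer than supported "
          + SUPPORTED_LAYOUT_VERSION + "; refusing to start";
    }
    if (storedLayoutVersion > SUPPORTED_LAYOUT_VERSION) {
      // Directory uses an older layout (case 1 above): refuse to start
      // without an explicit upgrade, leaving the data untouched.
      return "layout " + storedLayoutVersion + " predates supported "
          + SUPPORTED_LAYOUT_VERSION + "; upgrade required";
    }
    return null; // versions match; safe to start
  }
}
```

Both failure paths leave the directory untouched, which is what preserves the ability to restart the older release on it.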
> Remove code related to conversion of name-node and data-node storage
> directories to the format introduced in hadoop 0.13
> ------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-3573
> URL: https://issues.apache.org/jira/browse/HADOOP-3573
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.17.0
> Reporter: Konstantin Shvachko
> Assignee: Raghu Angadi
> Fix For: 0.19.0
>
>
> Hadoop 0.18 does not support direct HDFS upgrades from versions 0.13 or
> earlier, as stated in HADOOP-2797.
> A two-step upgrade is required in this case: first from 0.x <= 0.13 to one of
> versions 0.14 through 0.17, and then to 0.18.
> This implies that current HDFS does not need to support the code related to
> conversion of the old (pre-0.13) storage layout to the current one
> introduced in 0.13 (see HADOOP-702).