[ https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799802#comment-15799802 ]

Sean Mackrory edited comment on HDFS-11096 at 1/5/17 8:03 PM:
--------------------------------------------------------------

I've been doing a lot of testing. I've posted some automation here (we may want 
to hook it into a Jenkins job or something): 
https://github.com/mackrorysd/hadoop-compatibility. I've tested running a bunch 
of MapReduce jobs while doing a rolling upgrade of HDFS, and haven't hit any 
failures that indicate an incompatibility. I've also tested pulling data from 
an old cluster onto a new cluster (see the sketch below). I'll keep adding 
other aspects to the tests to improve coverage.
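
For reference, the cross-cluster pull amounts to the pattern below. This is a 
minimal Java sketch, not the actual harness in the linked repo; the NameNode 
addresses and paths are made up.

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Pull data from an old 2.x cluster into a new 3.x cluster with a 3.x client.
// Any wire-protocol incompatibility should surface as an IOException here.
public class CrossVersionPull {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical NameNode endpoints for the old and new clusters.
    FileSystem oldFs = FileSystem.get(URI.create("hdfs://old-nn:8020"), conf);
    FileSystem newFs = FileSystem.get(URI.create("hdfs://new-nn:8020"), conf);
    // Copy a directory tree across clusters through the plain FileSystem API.
    FileUtil.copy(oldFs, new Path("/data/source"),
        newFs, new Path("/data/dest"),
        false /* deleteSource */, conf);
  }
}
{code}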

I haven't seen a way to whitelist known incompatibilities. Filed an issue with 
jacc: https://github.com/lvc/japi-compliance-checker/issues/36.

As for the incompatibilities, I think there's relatively little action to be 
taken, so I'll file JIRAs for those. In detail: metrics and s3a are technically 
violating the contract, but in all cases keeping the old behavior would mean 
carrying some serious baggage, and given their nature I think the break is 
acceptable. I think SortedMapWritable should be put back but deprecated (I'm 
sure someone's depending on it somewhere, and restoring it should be trivial), 
and FileStatus should still implement Comparable. I'm not so sure about 
NameNodeMXBean, the missing configuration keys, or the cases of reduced 
visibility. I'm inclined to leave those as-is unless we know they break 
something people actually care about. They are technically incompatibilities, 
so maybe someone else feels differently (or is aware of applications they are 
likely to break), but it would be nice to shed baggage and poor practices where 
we can. For all the other issues, I'm more confident that they either don't 
actually break the contract or are extremely unlikely to break anything badly 
enough to warrant sticking with the old way. I'll sleep on some of these one 
more night and file JIRAs tomorrow to start addressing the issues I think are 
important enough.
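
To make the two restorations concrete, here's the kind of 2.x-era downstream 
code they protect. This is a hedged illustration, not anything from an actual 
application; the class name, paths, and values are made up.

{code:java}
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SortedMapWritable;
import org.apache.hadoop.io.Text;

public class DownstreamCompat {
  public static void main(String[] args) throws Exception {
    // Raw SortedMapWritable usage compiled against 2.x; removing the class
    // (rather than restoring it as deprecated) turns this into a compile error.
    SortedMapWritable record = new SortedMapWritable();
    record.put(new Text("key"), new IntWritable(1));

    // Sorting a listing relies on FileStatus implementing Comparable; if that
    // is dropped, this line throws ClassCastException at runtime on 3.x.
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus[] listing = fs.listStatus(new Path("/user/example"));
    Arrays.sort(listing);
  }
}
{code}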



> Support rolling upgrade between 2.x and 3.x
> -------------------------------------------
>
>                 Key: HDFS-11096
>                 URL: https://issues.apache.org/jira/browse/HDFS-11096
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rolling upgrades
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Andrew Wang
>            Priority: Blocker
>
> trunk has a minimum software version of 3.0.0-alpha1. This means we can't 
> do a rolling upgrade between branch-2 and trunk.
> This is a showstopper for large deployments. Unless there are very compelling 
> reasons to break compatibility, let's restore the ability to do rolling 
> upgrades to 3.x releases.


