[
https://issues.apache.org/jira/browse/HADOOP-9151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13538914#comment-13538914
]
Suresh Srinivas commented on HADOOP-9151:
-----------------------------------------
Cos and Arun, thank you for your comments.
bq. how painful will the change be for users who have already deployed 2.x,
whether via CDH, HDP2, or Apache
Do you know how many production deployments there are of Apache 2.0.2-alpha?
Why would they be reluctant to upgrade their entire stack, despite having
chosen an alpha release, and what justifies an expectation of wire
compatibility that has never been promised on Apache releases? As for HDP,
CDH, or other distributions, I do not think that is a problem the Apache
community needs to address.
Here are some of the thoughts I had written down. Some of them are stale,
given that others have stepped into the conversation, but I would still like
them recorded here to avoid wasting energy on this type of discussion in the
future.
The technical justifications for this veto are not valid. Some arguments I
have seen are:
h5. This is an incompatible change:
I disagree. This change does not affect API compatibility. No downstream
projects or applications need to make any code change; all they have to do is
follow the existing practice of picking up the new Hadoop jars. That is the
only compatibility promised in Hadoop for now.
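To make this concrete, here is a minimal sketch of typical downstream client
code written against the public FileSystem API (the fs.defaultFS value and
the path are illustrative). Nothing in it touches the RPC response framing,
so it compiles and runs unchanged against jars carrying this change; only the
bytes exchanged on the wire underneath differ.
{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Typical downstream client code: it only uses the public FileSystem API.
// The RpcResponseHeader change is invisible at this level, so no source
// change is needed -- just pick up the new Hadoop jars.
public class ListUserDirs {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Illustrative cluster address; point this at your own NameNode.
    conf.set("fs.defaultFS", "hdfs://namenode:8020");
    FileSystem fs = FileSystem.get(conf);
    // Remote errors, now carried inside the response header on the wire,
    // still surface here as ordinary IOExceptions.
    for (FileStatus status : fs.listStatus(new Path("/user"))) {
      System.out.println(status.getPath());
    }
  }
}
{code}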
h5. Downstream projects use their own Hadoop client jars and need to be updated:
This is a bad idea. Even with wire/API compatibility, there may be critical
fixes in the Hadoop library that downstream projects need. Not being able to
pick those up with just a Hadoop release upgrade, and instead having to
deliver them through an upgrade of the downstream project, is bad. I also
cannot understand bundling a copy of the 2.0-alpha library prematurely
instead of waiting until GA.
h5. A non Apache distribution is tagged stable. Hence no incompatible change
can be made in Apache:
I cannot understand how a non-Apache distribution can be tagged stable when
the underlying Apache code makes no such promise.
What a non-Apache distribution does, how it tags its releases, and what
content it includes are outside the control of the Apache community. Hence,
while as a community we may make things easy for other distributions, a veto
mandating that the Apache community *must* play nice with an outside
distribution is without merit.
BTW, I fail to understand why this is such a big deal. You can choose not to
include this and related changes in CDH.
h5. Apache 2.0 HDFS and common are stable:
They may be. But the release still carries the alpha tag. That an individual
thinks it is stable does not mean everyone should treat it like a GA/stable
release. If the tag we are currently using does not reflect reality, let's
change it by community decision. The only way to indicate stability is for
the Apache release to change its stability tag; individual perception does
not make it so.
h5. The benefit of this change does not warrant incompatible change:
First, see my argument above about why this is not incompatible. Second,
claiming that the returns are not worth it is not a technical argument. In
fact, you and others have said that this is a necessary cleanup that should
have been done. At the alpha stage of a release, this kind of change should
go in; as Arun has stated, that is the main reason for the alpha tag. The
fact that it could make another distro incompatible when it goes in is not a
problem we should discuss or solve here.
h5. Only a few individuals in the world get the benefit of this change:
It does not matter whether hundreds or only a few benefit from a change. It
improves the quality and completes some of the cleanups that were under way.
I can also turn this limited-benefit argument around: there are only a very
few installations of 2.0.2-alpha (how many?), and those users know what to
expect from an alpha release.
Based on all this, I believe this veto makes no sense in the context of
Apache Hadoop releases. Wearing my Apache hat, it is not justified to expect
the Apache community to pay for decisions made elsewhere.
These are some ways you can work around the issue:
# Change your distribution to include a layer that can support compatibility
with Apache releases.
# Upgrade all the components when the next release comes out, or include
these changes later at your own convenience.
# Pick this change up in a dot release, say 4.1, where you could upgrade the
entire stack.
# There could be other alternatives, but I am not here to solve issues outside
the scope of Apache Hadoop releases.
> Include RPC error info in RpcResponseHeader instead of sending it separately
> ----------------------------------------------------------------------------
>
> Key: HADOOP-9151
> URL: https://issues.apache.org/jira/browse/HADOOP-9151
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Sanjay Radia
> Assignee: Sanjay Radia
> Attachments: HADOOP-9151.patch
>
>
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira