Many thanks to Ted Yu, Steve Loughran, and Andrew Wang for replying on the
JIRA, and to Steve and Andrew for making the related changes!

--Yongjun

On Thu, Dec 11, 2014 at 12:41 PM, Yongjun Zhang <yzh...@cloudera.com> wrote:

> Hi,
>
> I wonder if anyone can help resolve HADOOP-11320
> <https://issues.apache.org/jira/browse/HADOOP-11320> to increase the
> Jenkins test timeout for cross-subproject patches?
>
> Thanks a lot,
>
> --Yongjun
>
> On Tue, Dec 2, 2014 at 10:10 AM, Yongjun Zhang <yzh...@cloudera.com>
> wrote:
>
>> Hi,
>>
>> Thank you all for the input.
>>
>> https://issues.apache.org/jira/browse/HADOOP-11320
>>
>> was created for this issue. You are welcome to add further comments there.
>>
>> Best,
>>
>> --Yongjun
>>
>> On Tue, Nov 25, 2014 at 10:26 PM, Colin McCabe <cmcc...@alumni.cmu.edu>
>> wrote:
>>
>>> +1 for increasing the test timeout for tests spanning multiple
>>> sub-projects.
>>>
>>> I can see the value in what Steve L. suggested... if you make a major
>>> change that touches a particular subproject, you should try to get the
>>> approval of a committer who knows that subproject.  But I don't think
>>> that forcing artificial patch splits is the way to do this...  There
>>> are also some patches that are completely mechanical and don't really
>>> require the involvement of a YARN / HDFS committer, even if they change
>>> that project.  For example, fixing a misspelling in the name of a
>>> hadoop-common API.
>>>
>>> Colin
>>>
>>> On Tue, Nov 25, 2014 at 8:45 AM, Yongjun Zhang <yzh...@cloudera.com>
>>> wrote:
>>>
>>> > Thanks all for the feedback. To summarize (and I have a suggestion at
>>> > the end of this email), there are two scenarios:
>>> >
>>> >    1. A change that spans multiple *bigger* projects, e.g., hadoop
>>> >    and hbase.
>>> >    2. A change that spans multiple *sub-projects* within hadoop,
>>> >    e.g., common, hdfs, yarn.
>>> >
>>> > For 1, the change is required to be backward compatible, so splitting
>>> > the change across the multiple *bigger* projects is a must.
>>> >
>>> > For 2, there are two sub-types:
>>> >
>>> >    - 2.1 changes that can be made within hadoop sub-projects, with no
>>> >    external impact;
>>> >    - 2.2 changes that have external impact, that is, changes that
>>> >    involve adding new APIs and marking old APIs deprecated, where the
>>> >    corresponding changes in other *bigger* projects will have to be
>>> >    made independently. *But the changes within hadoop sub-projects
>>> >    can still be done altogether.*
>>> >
>>> > I think (please correct me if I'm wrong):
>>> >
>>> >    - What Colin referred to is 2.1, plus the within-hadoop changes
>>> >    of 2.2;
>>> >    - Steve's "not for changes across hadoop-common and hdfs, or
>>> >    hadoop-common and yarn" means 2.1, and Steve's "changes that only
>>> >    span hdfs-and-yarn would be fairly doubtful too" implies his doubt
>>> >    about the existence of 2.1.
>>> >
>>> > For changes of type 2.1 (if any) and the *hadoop* changes of 2.2, we
>>> > do have the option of making the change across all hadoop
>>> > sub-projects altogether, to save the multiple steps Colin referred to.
>>> >
>>> > If this option is feasible, should we consider increasing the Jenkins
>>> > timeout for this kind of change? I mean making the timeout
>>> > adjustable: if a patch is for a single sub-project, use the old
>>> > timeout; otherwise, increase it accordingly, so that we have at least
>>> > this option when needed.
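
[The adjustable-timeout idea above could be sketched roughly as below. This
is a hypothetical illustration, not the actual Hadoop test-patch.sh logic;
the BASE_TIMEOUT value and the diff-header parsing are assumptions.]

```shell
#!/bin/sh
# Hypothetical sketch: scale the Jenkins test timeout by the number of
# top-level sub-project directories a patch touches, so that a patch for
# a single sub-project keeps the old timeout.

BASE_TIMEOUT=3600  # assumed timeout (seconds) for a single sub-project

# Print the scaled timeout for the patch file named by $1.
scaled_timeout() {
  # Count distinct top-level directories in the '+++ b/...' diff headers.
  n=$(grep '^+++ ' "$1" | sed 's|^+++ [ab]/||' | cut -d/ -f1 | sort -u | wc -l)
  [ "$n" -lt 1 ] && n=1
  echo $((BASE_TIMEOUT * n))
}

# Demo: a patch touching two sub-projects gets twice the base timeout.
printf '+++ b/hadoop-common-project/a.java\n+++ b/hadoop-hdfs-project/b.java\n' > demo.diff
scaled_timeout demo.diff   # prints 7200
rm -f demo.diff
```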
>>> >
>>> > Thanks.
>>> >
>>> > --Yongjun
>>> >
>>> >
>>> > On Tue, Nov 25, 2014 at 2:28 AM, Steve Loughran <
>>> ste...@hortonworks.com>
>>> > wrote:
>>> >
>>> > > On 25 November 2014 at 00:58, Bernd Eckenfels <
>>> e...@zusammenkunft.net>
>>> > > wrote:
>>> > >
>>> > > > Hello,
>>> > > >
>>> > > > On Mon, 24 Nov 2014 16:16:00 -0800,
>>> > > > Colin McCabe <cmcc...@alumni.cmu.edu> wrote:
>>> > > >
>>> > > > > Conceptually, I think it's important to support patches that
>>> modify
>>> > > > > multiple sub-projects.  Otherwise refactoring things in common
>>> > > > > becomes a multi-step process.
>>> > > >
>>> > > > This might be rather philosophical (and I don't want to argue
>>> > > > against the need to have the patch infrastructure work for the
>>> > > > multi-project case); however, if a multi-project change cannot be
>>> > > > applied in multiple steps, it is probably also not safe at runtime
>>> > > > (unless the multiple projects belong to a single
>>> > > > instance/artifact). And then being forced to commit/compile/test
>>> > > > in multiple steps actually increases the complexity of the
>>> > > > dependency topology.
>>> > > >
>>> > >
>>> > > +1 for changes that span, say, hadoop and hbase, but not for
>>> > > changes across hadoop-common and hdfs, or hadoop-common and yarn.
>>> > > Changes that only span hdfs-and-yarn would be fairly doubtful too.
>>> > >
>>> > > There is a dependency graph in hadoop's own jars, and cross-module
>>> > > (not cross-project) changes do need to happen.
>>> > >
>>> >
>>>
>>
>>
>
