Re: Local repo sharing for maven builds

2015-10-07 Thread Allen Wittenauer

YETUS-5 was just committed, which does all of this (and more, of course).




Re: Local repo sharing for maven builds

2015-10-07 Thread sanjay reddy
Please remove me from this group.

On Tue, Sep 22, 2015 at 8:26 PM, Steve Loughran 
wrote:

>
> > On 22 Sep 2015, at 12:16, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
> >
> > After using timestamped jars, the hadoop-hdfs module might still continue to
> > use the earlier timestamped jars (correct) and may complete its run. But later
> > modules might refer to updated jars which are from some other build.
>
>
> why?
>
> If I do a build with a forced mvn versions set first,
>
> mvn versions:set -DnewVersion=3.0.0.20120922155143
>
> then maven will go through all the poms and set the version.
>
> the main source of trouble there would be any patch to a pom whose diff
> was close enough to the version value that the patch wouldn't apply
>



-- 
*Regards,*
*Sanju Reddy*
*+91 8977977443*


Re: Local repo sharing for maven builds

2015-10-06 Thread Steve Loughran

> On 5 Oct 2015, at 19:45, Colin McCabe  wrote:
> 
> On Mon, Sep 28, 2015 at 12:52 AM, Steve Loughran  
> wrote:
>> 
>> the jenkins machines are shared across multiple projects; cut the executors 
>> to 1/node and then everyone's performance drops, including the time to 
>> complete of all jenkins patches, which is one of the goals.
> 
> Hi Steve,
> 
> Just to be clear, the proposal wasn't to cut the executors to 1 per
> node, but to have multiple Docker containers per node (perhaps 3 or 4)
> and run each executor in an isolated container.  At that point,
> whatever badness Maven does on the .m2 stops being a problem for
> concurrently running jobs.
> 

I'd missed that bit. Yes, something with a containerized ~/.m2 repo gets the 
isolation without playing with mvn version fixup.




Re: Local repo sharing for maven builds

2015-10-05 Thread Colin McCabe
On Mon, Sep 28, 2015 at 12:52 AM, Steve Loughran  wrote:
>
> the jenkins machines are shared across multiple projects; cut the executors 
> to 1/node and then everyone's performance drops, including the time to 
> complete of all jenkins patches, which is one of the goals.

Hi Steve,

Just to be clear, the proposal wasn't to cut the executors to 1 per
node, but to have multiple Docker containers per node (perhaps 3 or 4)
and run each executor in an isolated container.  At that point,
whatever badness Maven does on the .m2 stops being a problem for
concurrently running jobs.

I guess I don't feel that strongly about this, but the additional
complexity of the other solutions (like running a "find" command in
.m2, or changing artifactID) seems like a disadvantage compared to
just using multiple containers.  And there may be other race
conditions here that we're not aware of... like a TOCTOU between
checking for a jar in .m2 and downloading it, for example.  The
Dockerized solution skips all those potential failure modes and
complexity.
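The container-per-executor idea above can be sketched at the shell level. This is a dry run that only prints the command: the image name (`hadoop-build-image`) and the mount paths are hypothetical, not anything the thread actually specifies.

```shell
# Hypothetical per-executor private local repo, so concurrent jobs
# can't race on a shared ~/.m2.
M2_PRIVATE="$PWD/.m2-exec-1"

# Build the docker invocation one executor might use; printed rather than
# executed, since the image and paths are made up for illustration.
CMD="docker run --rm \
 -v $PWD/workspace:/build/src \
 -v $M2_PRIVATE:/root/.m2 \
 hadoop-build-image \
 mvn -f /build/src/pom.xml install -DskipTests"
echo "$CMD"
```

Because the container's /root/.m2 is a bind mount private to this executor, whatever Maven does to the local repo is invisible to the other executors on the node.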

cheers,
Colin




Re: Local repo sharing for maven builds

2015-09-29 Thread Steve Loughran

> On 28 Sep 2015, at 10:05, Vinayakumar B  wrote:
> 
> Setting the version to a unique value sounds reasonable.
> 
> Is there any way in mvn to clean up such installed artifacts as part of
> cleanup in the same build, instead of a nightly cleanup?
> 


Well, there's the Maven dependency:purge-local-repository goal, which could maybe 
be set up to delete the stuff, but unless you can restrict it to only the local 
build number, it's going to stamp on other builds.

http://maven.apache.org/plugins/maven-dependency-plugin/purge-local-repository-mojo.html

There's also an explicit Jenkins plugin:

https://wiki.jenkins-ci.org/display/JENKINS/Maven+Repo+Cleaner+Plugin

Andrew Bayer is about to move to CloudBees, so maybe he can review this; I'll ask 
him if I see him at the ApacheCon coffee break.

Otherwise, well, bash and a complex enough "find ~/.m2/repository" invocation could 
possibly do it.
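A per-build purge along those lines might look like the following sketch. It runs against a scratch directory so nothing real is deleted; the version string and repo layout are illustrative stand-ins for a timestamped build version under ~/.m2/repository.

```shell
# Scratch stand-in for ~/.m2/repository so nothing real is touched.
REPO="$(mktemp -d)"
BUILD_VERSION="3.0.0.20150921113800"   # hypothetical per-build version number

# Simulate one timestamped install plus an unrelated SNAPSHOT install.
mkdir -p "$REPO/org/apache/hadoop/hadoop-common/$BUILD_VERSION" \
         "$REPO/org/apache/hadoop/hadoop-common/3.0.0-SNAPSHOT"

# Delete only this build's timestamped artifacts; -prune stops find from
# descending into a directory it is about to remove.
find "$REPO" -type d -name "$BUILD_VERSION" -prune -exec rm -rf {} +

ls "$REPO/org/apache/hadoop/hadoop-common"   # → 3.0.0-SNAPSHOT
```

Keying the delete on the unique build version is what keeps it from stamping on other builds sharing the same repo.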




Re: Local repo sharing for maven builds

2015-09-28 Thread Steve Loughran

the jenkins machines are shared across multiple projects; cut the executors to 
1/node and then everyone's performance drops, including the time to complete of 
all jenkins patches, which is one of the goals.

https://builds.apache.org/computer/

Like I said before: I don't think we need one mvn repo/build. All we need is a 
unique artifact version tag on generated files. Ivy builds do that for you, 
maven requires the build version in all the POMs to have a -SNAPSHOT tag, which 
tells it to poll the remote repos for updates every day.

We can build local hadoop releases with whatever version number we desire, 
simply by using "mvn versions:set" to update the version before the build. Do 
that and you can share the same repo, with different artifacts generated and 
referenced on every build. We don't need to play with >1 repo, which can be 
pretty expensive. A du -h ~/.m2 tells me I have an 11GB local cache.
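That per-build versioning flow can be sketched as below. The version format and the job wiring are illustrative; the mvn steps are shown commented out so the snippet stands alone without a Hadoop checkout.

```shell
# Derive a unique, sortable version for this run, e.g. 3.0.0.20150928092217.
BUILD_VERSION="3.0.0.$(date +%Y%m%d%H%M%S)"
echo "building as version ${BUILD_VERSION}"

# A real job would then rewrite every pom and install under that version,
# so concurrent builds never collide in the shared ~/.m2:
#   mvn versions:set -DnewVersion="${BUILD_VERSION}"
#   mvn install -DskipTests
```

Because every run installs under its own version, two executors on the same node can share one local repo without ever resolving each other's half-built artifacts.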





Re: Local repo sharing for maven builds

2015-09-28 Thread Andrew Wang
I think the right route is to file an INFRA JIRA with this request. Not
entirely sure since at one point the Hadoop build infra was separately
managed by Yahoo, but I think as of late it's under Apache administration.

Best,
Andrew



Re: Local repo sharing for maven builds

2015-09-28 Thread Vinayakumar B
Setting the version to a unique value sounds reasonable.

Is there any way in mvn to clean up such installed artifacts as part of
cleanup in the same build, instead of a nightly cleanup?

-Vinay


Re: Local repo sharing for maven builds

2015-09-25 Thread Vinayakumar B
Thanks Andrew,

Maybe we can try making it 1 executor, and try it for some time. I think we also
need to check what other jobs (hadoop ecosystem jobs) run on the Hadoop nodes.
As HADOOP-11984 and HDFS-9139 are on the way to reducing build time
dramatically by enabling parallel tests, HDFS and COMMON precommit builds
will not block other builds for much time.

To check: I don't have access to the Jenkins configuration. If I can get
access, I can reduce it myself and verify.


-Vinay



Re: Local repo sharing for maven builds

2015-09-25 Thread Andrew Wang
Thanks for checking Vinay. As a temporary workaround, could we reduce the #
of execs per node to 1? Our build queues are pretty short right now, so I
don't think it would be too bad.

Best,
Andrew



Re: Local repo sharing for maven builds

2015-09-23 Thread Steve Loughran

> On 22 Sep 2015, at 16:39, Colin P. McCabe  wrote:
> 
>> ANNOUNCEMENT: new patches which contain hard-coded ports in test runs will 
>> henceforth be reverted. Jenkins matters more than the 30s of your time it 
>> takes to use the free port finder methods. Same for any hard code paths in 
>> filesystems.
> 
> +1.  Can you add this to HowToContribute on the wiki?  Or should we
> vote on it first?

I don't think we need to vote on it: hard-coded ports should be something we 
veto on patches anyway. 

In https://issues.apache.org/jira/browse/HADOOP-12143 I propose having a better 
style guide in the docs.
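The "free port finder" idea is simply to ask the OS for an unused ephemeral port by binding port 0, rather than hard-coding one. A shell-level sketch (delegating the socket call to python3, which is assumed to be on the build host; Hadoop's own test helpers are not shown here):

```shell
# Ask the kernel for a free port by binding port 0, then release it.
free_port() {
  python3 -c 'import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))        # port 0: the OS picks any currently free port
print(s.getsockname()[1])
s.close()'
}

PORT="$(free_port)"
echo "test server would listen on port ${PORT}"
```

Note that even this has a small window between releasing the port and the test reusing it, which is the same class of TOCTOU race mentioned earlier in the thread; it just shrinks the window compared with a hard-coded port.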




Re: Local repo sharing for maven builds

2015-09-23 Thread Vinayakumar B
In case we are going to have a separate repo for each executor:

I have checked, and each Jenkins node is allocated 2 executors, so we only need
to create one more replica.

Regards,
Vinay



Re: Local repo sharing for maven builds

2015-09-22 Thread Andrew Wang
> Did anyone address Andrew's proposal to have one private repo per
> Jenkins executor?  That seems like the simplest approach to me.  It
> seems like that would only generate more network traffic in the case
> where a dependency changes, which should be relatively rare.

We're blocked on YETUS-4, and then a corresponding Yetus release, and then
onboarding Hadoop to Yetus.

Alternatively we can hack up test-patch.sh ourselves, since honestly I
think the above will take at least a month. Would love to be proven wrong
though.


Re: Local repo sharing for maven builds

2015-09-22 Thread Vinayakumar B
On Tue, Sep 22, 2015 at 9:09 PM, Colin P. McCabe  wrote:

> On Mon, Sep 21, 2015 at 4:08 AM, Steve Loughran 
> wrote:
> >
> >> On 19 Sep 2015, at 04:42, Allen Wittenauer  wrote:
> >>
> >> a) Multi-module patches are always troublesome because it makes the
> test system do significantly more work.  For Yetus, we've pared it down as
> far as we can go to get *some* speed increases, but if a patch does
> something like hit every pom.xml file, there's nothing that can be done to
> make it better other than splitting up the patch.
> >>
> >> b) It's worth noting that it happens more often to HDFS patches because
> HDFS unit tests take too damn long.  Some individual tests take 10 minutes!
> They invariably collide with the various full builds (NOT pre commit! Those
> other things that Steve pointed out that we're ignoring).  While Yetus has
> support for running unit tests in parallel, Hadoop does not.
> >
> >
> > I think the main thing I've been complaining about is how we ignore
> failing scheduled Jenkins runs; its been so unreliable that we all ignore
> the constant background noise of jenkins failures. That's compounded by how
> some test runs (hello Yarn-precommit!) send jenkins mails to the dev- list.
> (I've turned that off now: if you get jenkins failures on yarn-dev then its
> from the regular ones)
>
> Yes, we need to get really repeatable builds.  It is a big problem
> that we can't right now!
>
Yes, keeping Jenkins happy is the need of the hour now.

>
> >>
> >> c) mvn install is pretty much required for a not insignificant amount
> of multi-module patches, esp if they hit hadoop-common.  For a large chunk
> of "oh just make it one patch", it's effectively a death sentence on the
> Jenkins side.
> >
> > The race conditions have existed for a long, long time. It only surfaces
> when you have a patch that spans artifacts which is one of: (1)
> incompatible across builds (2) needs to be synced across builds to work. If
> things still linked up, you'd have the race *but you wouldn't notice*. It's
> only the artifact-spanning patches which surface.
> >
> > YARN has had this for a while, but it's builds are shorter, it's HDFS
> that's the problem for the reasons AW's noted
> > -theres' now >1 JAR
> > -it takes a long time to build and test, host conflict is inevitable.
> >
> >
> > There is one tactic not yet looked at: every build to set a hadoop
> version, e.g instead of all precommits being hadoop-3.0.0-SNAPSHOT, they
> could be hadoop-3.0.0-JIRA-4313-SNAPSHOT. No conflict, just the need to
> schedule a run that cleans up the m2 repo every night. If timestamped
> version numbers are used hadoop-3.0.0-2015-09-21-11:38 then the job can
> make better decisions about what to purge. Test runs could even rm their
> own artifacts after, perhaps.
> >
> > I think this would be the best way to isolate —no need for private
> repos, with the followon need to download the entire repo on every run,
> 100% isolation.
>
> Did anyone address Andrew's proposal to have one private repo per
> Jenkins executor?  That seems like the simplest approach to me.  It
> seems like that would only generate more network traffic in the case
> where a dependency changes, which should be relatively rare.
>
Yes, I too think this is the best and simplest approach we can take right now.
As mentioned by Brahma, initial downloads can be avoided by replicating the
existing local repo. And everything should work just fine after that.


> It would be nice to combine this with Dockerization so that we can
> finally stop worrying about rogue build machines that lack all the
> dependencies, or chasing down infra whenever a new dependency is
> added.
>

Yes, it's a nice feature to have. Looking forward to Yetus completing this
soon.


> >
> > The other issue with race conditions is port assignments, too much code
> with hard coded ports. —there's been slow work on that, with Brahma Reddy
> Battula deserving special mention here. But its almost a losing battle,
> chasing where the next hard-coded port goes in, and again, leads to
> unreliable test runs that everyone ignores.
> >
> >
> > ANNOUNCEMENT: new patches which contain hard-coded ports in test runs
> will henceforth be reverted. Jenkins matters more than the 30s of your time
> it takes to use the free port finder methods. Same for any hard code paths
> in filesystems.
>
> +1.  Can you add this to HowToContribute on the wiki?  Or should we
> vote on it first?
>
>
I think this must be one of the basic rules/guidelines for writing tests
in any project.


> >
> >
> >>
> >> d) I'm a big fan of d.
> >>
> >> e) File a bug against Yetus and we'll add the ability to set
> ant/gradle/maven args from the command line.  I thought I had it in there
> when I rewrote the support for multiple build tools, gradle, etc, but I
> clearly dropped it on the floor.
> >
> > people won't do that. Switching to per-run hadoop version numbers should
> 

Re: Local repo sharing for maven builds

2015-09-22 Thread Allen Wittenauer
There are multiple problems with just spamming test-patch with local repos.
I've done a not insignificant amount of investigation in this space, and there
are reasons why I didn't just slam it in, even though I've been aware of the issue
for a very long time. There are specific reasons why I want to tie this to
Docker, at least for the Apache Jenkins runs. (No, I'm not going to go into that
now.)

 I'm on my way back to SJC and will likely have code for Yetus tomorrow 
afternoon.

Sent from my phone



RE: Local repo sharing for maven builds

2015-09-22 Thread Brahma Reddy Battula
After using timestamped jars, the hadoop-hdfs module might still continue to use 
the earlier timestamped jars (correct) and may complete its run. But later modules 
might refer to updated jars which are from some other build.

I think a download of the entire local repo is not required for every build; it is 
only needed once. Even that could be avoided if we replicate the existing repo, one 
copy per executor.

Downloads are only required when dependencies are updated in pom.xml.
If we have a separate local repo for each executor, all hadoop jars will be freshly 
installed for every build anyway, so no conflict could occur.

>>>>ANNOUNCEMENT: new patches which contain hard-coded ports in test runs will 
>>>>henceforth be reverted. Jenkins matters more than the 30s of your time it 
>>>>takes to use the free port finder methods. Same for any hard code paths in 
>>>>filesystems.

Good Idea.


Thanks & Regards
 Brahma Reddy Battula


Re: Local repo sharing for maven builds

2015-09-22 Thread Steve Loughran

> On 22 Sep 2015, at 12:16, Brahma Reddy Battula 
>  wrote:
> 
> After using timestamped jars, hadoop-hdfs module might still continue to use 
> earlier timestamped jars (correct) and may complete run.But later modules 
> might refer to updated jars which are from some other build.


why? 

If I do a build with a forced mvn versions set first, 

mvn versions:set -DnewVersion=3.0.0.20120922155143

then maven will go through all the poms and set the version.

the main source of trouble there would be any patch to a pom whose diff was 
close enough to the version value that the patch wouldn't apply


Re: Local repo sharing for maven builds

2015-09-21 Thread Steve Loughran

> On 19 Sep 2015, at 04:42, Allen Wittenauer  wrote:
> 
> a) Multi-module patches are always troublesome because it makes the test 
> system do significantly more work.  For Yetus, we've pared it down as far as 
> we can go to get *some* speed increases, but if a patch does something like 
> hit every pom.xml file, there's nothing that can be done to make it better 
> other than splitting up the patch.
> 
> b) It's worth noting that it happens more often to HDFS patches because HDFS 
> unit tests take too damn long.  Some individual tests take 10 minutes! They 
> invariably collide with the various full builds (NOT pre commit! Those other 
> things that Steve pointed out that we're ignoring).  While Yetus has support 
> for running unit tests in parallel, Hadoop does not.  


I think the main thing I've been complaining about is how we ignore failing 
scheduled Jenkins runs; it's been so unreliable that we all ignore the constant 
background noise of Jenkins failures. That's compounded by how some test runs 
(hello Yarn-precommit!) send Jenkins mails to the dev- list. (I've turned that 
off now: if you get Jenkins failures on yarn-dev then it's from the regular runs)

> 
> c) mvn install is pretty much required for a not insignificant amount of 
> multi-module patches, esp if they hit hadoop-common.  For a large chunk of 
> "oh just make it one patch", it's effectively a death sentence on the Jenkins 
> side.

The race conditions have existed for a long, long time. They only surface when 
you have a patch that spans artifacts and is either (1) incompatible across 
builds or (2) in need of being synced across builds to work. If things still 
linked up, you'd have the race *but you wouldn't notice*. It's only the 
artifact-spanning patches where it surfaces.

YARN has had this for a while, but its builds are shorter; it's HDFS that's 
the problem, for the reasons AW noted:
- there's now more than one JAR
- it takes a long time to build and test, so host conflict is inevitable.


There is one tactic not yet looked at: have every build set its own Hadoop 
version. E.g. instead of all precommits being hadoop-3.0.0-SNAPSHOT, they could 
be hadoop-3.0.0-JIRA-4313-SNAPSHOT. No conflict, just the need to schedule a run 
that cleans up the m2 repo every night. If timestamped version numbers are used, 
e.g. hadoop-3.0.0-2015-09-21-11:38, then the cleanup job can make better 
decisions about what to purge. Test runs could even rm their own artifacts 
afterwards, perhaps.

I think this would be the best way to isolate: no need for private repos, with 
the follow-on need to download the entire repo on every run, and 100% isolation.
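The per-run version tactic can be sketched in a few lines of shell. This is a sketch only: the version string and purge path are illustrative, not what any Jenkins job actually ran, and the `mvn` commands are shown rather than executed.

```shell
# Stamp this run with its own version so concurrent builds never
# overwrite each other's artifacts in the shared local repo.
STAMP="$(date +%Y%m%d%H%M%S)"
RUN_VERSION="3.0.0.${STAMP}-SNAPSHOT"

# The run itself would then do (shown, not executed here):
echo "mvn versions:set -DnewVersion=${RUN_VERSION} -DgenerateBackupPoms=false"
echo "mvn install -DskipTests"

# Nightly cleanup: timestamped versions make stale artifacts easy to find.
echo "find ~/.m2/repository/org/apache/hadoop -type d -name '3.0.0.*-SNAPSHOT' -mtime +1 -exec rm -rf {} +"
```

Because every artifact carries the run's timestamp, a purge job needs no bookkeeping beyond the directory's age.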

The other issue with race conditions is port assignments: too much code with 
hard-coded ports. There's been slow work on that, with Brahma Reddy Battula 
deserving special mention here. But it's almost a losing battle, chasing 
wherever the next hard-coded port goes in, and again it leads to unreliable 
test runs that everyone ignores.


ANNOUNCEMENT: new patches which contain hard-coded ports in test runs will 
henceforth be reverted. Jenkins matters more than the 30s of your time it takes 
to use the free-port-finder methods. The same goes for any hard-coded paths in 
filesystems.
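For reference, a minimal free-port helper of the kind the announcement asks for might look like this. It is a generic sketch using a Python one-liner from shell; the project's own test helper methods, not this, are the intended mechanism.

```shell
# Ask the OS for any free port by binding to port 0, instead of
# hard-coding one and colliding with concurrent test runs.
find_free_port() {
  python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()'
}
PORT="$(find_free_port)"
echo "test server would bind to port ${PORT}"
```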


> 
> d) I'm a big fan of d. 
> 
> e) File a bug against Yetus and we'll add the ability to set ant/gradle/maven 
> args from the command line.  I thought I had it in there when I rewrote the 
> support for multiple build tools, gradle, etc, but I clearly dropped it on 
> the floor.

people won't do that. Switching to per-run hadoop version numbers should 
suffice for artifact dependencies, leaving only ports and paths.
> 
> f) Any time you "give the option to the patch submitter", you generate a not 
> insignificant amount of work on the test infrastructure to determine intent 
> because it effectively means implementing some parsing of a comment.  It's 
> not particularly easy because humans rarely follow the rules.  Just see how 
> well we are at following the Hadoop Compatibility Guidelines. Har har.  No 
> really: people still struggle with filling in JIRA headers correctly and 
> naming patches to trigger the appropriate branch for the test.

Where's that documented, BTW? I did try looking for it at the weekend...


> 
> g) It's worth noting that Hadoop trunk is *not* using the latest test-patch 
> code.  So there are some significant improvements on the way as soon as we 
> get a release out the door.
> 
> 


Well, get on with it then :)

I'm going to be at ApacheCon Data EU next week; who else will be? Maybe we 
could make it a goal of the conference to come out of the week with Jenkins 
building reliably. I've been looking at it at weekends but don't have time in 
the week.




Re: Local repo sharing for maven builds

2015-09-20 Thread Josh Elser

Andrew Wang wrote:

Theoretically, we should be able to run unittests without a full `mvn
install` right? The "test" phase comes before "package" or "install", so I
figured it only needed class files. Maybe the multi-module-ness screws this
up.


Unless something weird is configured in the poms (which is often a smell 
on its own), the reactor (I think that's the right Maven term) is smart 
enough to pull the right code for multi-module builds.


AFAIK, you should be able to run all unit tests with a patch (hitting 
multiple modules or not) without installing all of the artifacts (e.g. 
using the package lifecycle phase).


If this isn't the case, I'd call that a build bug.
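A sketch of what that looks like on the command line (the module path is Hadoop's; the behaviour is Maven's documented `-pl`/`-am` semantics): `-pl` selects the module under test, and `-am` ("also make") builds its in-tree dependencies in the same reactor, so in-tree artifacts are not resolved from the shared ~/.m2.

```shell
# Run one module's unit tests, building its in-tree dependencies in the
# same reactor rather than installing them to ~/.m2 first.
CMD="mvn -pl hadoop-hdfs-project/hadoop-hdfs -am test"
echo "${CMD}"   # shown rather than executed here
```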


Re: Local repo sharing for maven builds

2015-09-18 Thread ecki
You can use one per build processor, that reduces concurrent updates but still 
keeps the cache function. And then try to avoid using install.

-- 
http://bernd.eckenfels.net

-Original Message-
From: Andrew Wang <andrew.w...@cloudera.com>
To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
Cc: Andrew Bayer <andrew.ba...@gmail.com>, Sangjin Lee <sj...@twitter.com>, Lei 
Xu <l...@cloudera.com>, infrastruct...@apache.org
Sent: Fr., 18 Sep. 2015 20:42
Subject: Re: Local repo sharing for maven builds

I think each job should use a maven.repo.local within its workspace like
abayer said. This means lots of downloading, but it's isolated.

If we care about download time, we could also bootstrap with a tarred
.m2/repository after we've run a `mvn compile`, so before it installs the
hadoop artifacts.

On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma <min...@twitter.com.invalid>
wrote:

> +hadoop common dev. Any suggestions?
>
>
> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer <andrew.ba...@gmail.com>
> wrote:
>
> > You can change your maven call to use a different repository - I believe
> > you do that with -Dmaven.repository.local=path/to/repo
> > On Sep 18, 2015 19:39, "Ming Ma" <min...@twitter.com> wrote:
> >
> >> Hi,
> >>
> >> We are seeing some strange behaviors in HDFS precommit build. It seems
> >> like it is caused by the local repo on the same machine being used by
> >> different concurrent jobs which can cause issues.
> >>
> >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> >> In the precommit build for HDFS-9004, unit tests for
> "hadoop-hdfs-project/hdfs"
> >> complain the method isn't defined
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> >> Interestingly sometimes it just works fine
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> >>
> >> So we are suspecting that there is another job running at the same time
> >> that published different version of
> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> >> which doesn't have the new methods defined to the local repo which is
> >> shared by all jobs on that machine.
> >>
> >> If the above analysis is correct, what is the best way to fix the issue
> >> so that different jobs can use their own maven local repo for build and
> >> test?
> >>
> >> Thanks.
> >>
> >> Ming
> >>
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
Sangjin, you should have access to the precommit jobs if you log in with
your Apache credentials, even as a branch committer.

https://builds.apache.org/job/PreCommit-HDFS-Build/configure

The actual maven invocation is managed by test-patch.sh though.
test-patch.sh has a MAVEN_ARGS which looks like what we want, but I don't
think we can just set it before calling test-patch, since it'd get squashed
by setup_defaults.

Allen/Chris/Yetus folks, any guidance here?

Thanks,
Andrew

On Fri, Sep 18, 2015 at 11:55 AM, <e...@zusammenkunft.net> wrote:

> You can use one per build processor, that reduces concurrent updates but
> still keeps the cache function. And then try to avoid using install.
>
> --
> http://bernd.eckenfels.net
>
> -Original Message-
> From: Andrew Wang <andrew.w...@cloudera.com>
> To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
> Cc: Andrew Bayer <andrew.ba...@gmail.com>, Sangjin Lee <sj...@twitter.com>,
> Lei Xu <l...@cloudera.com>, infrastruct...@apache.org
> Sent: Fr., 18 Sep. 2015 20:42
> Subject: Re: Local repo sharing for maven builds
>
> I think each job should use a maven.repo.local within its workspace like
> abayer said. This means lots of downloading, but it's isolated.
>
> If we care about download time, we could also bootstrap with a tarred
> .m2/repository after we've run a `mvn compile`, so before it installs the
> hadoop artifacts.
>
> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma <min...@twitter.com.invalid>
> wrote:
>
> > +hadoop common dev. Any suggestions?
> >
> >
> > On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer <andrew.ba...@gmail.com>
> > wrote:
> >
> > > You can change your maven call to use a different repository - I
> believe
> > > you do that with -Dmaven.repository.local=path/to/repo
> > > On Sep 18, 2015 19:39, "Ming Ma" <min...@twitter.com> wrote:
> > >
> > >> Hi,
> > >>
> > >> We are seeing some strange behaviors in HDFS precommit build. It seems
> > >> like it is caused by the local repo on the same machine being used by
> > >> different concurrent jobs which can cause issues.
> > >>
> > >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> > >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> > >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> > hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> > >> In the precommit build for HDFS-9004, unit tests for
> > "hadoop-hdfs-project/hdfs"
> > >> complain the method isn't defined
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> > >> Interestingly sometimes it just works fine
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> > >>
> > >> So we are suspecting that there is another job running at the same
> time
> > >> that published different version of
> > hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> > >> which doesn't have the new methods defined to the local repo which is
> > >> shared by all jobs on that machine.
> > >>
> > >> If the above analysis is correct, what is the best way to fix the
> issue
> > >> so that different jobs can use their own maven local repo for build
> and
> > >> test?
> > >>
> > >> Thanks.
> > >>
> > >> Ming
> > >>
> > >
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
I think each job should use a maven.repo.local within its workspace like
abayer said. This means lots of downloading, but it's isolated.

If we care about download time, we could also bootstrap with a tarred
.m2/repository after we've run a `mvn compile`, so before it installs the
hadoop artifacts.
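The bootstrap idea can be sketched like this. The paths and the marker file are stand-ins: a real seed would be produced by `mvn compile` against an empty repo, which this sketch only simulates.

```shell
# Build a seed tarball once, after third-party deps are fetched but
# before any hadoop artifacts are installed...
SEED_DIR="$(mktemp -d)"
echo "third-party-dep" > "${SEED_DIR}/marker.txt"  # stand-in for a populated repo
tar czf m2-seed.tar.gz -C "${SEED_DIR}" .

# ...then each job untars the seed instead of re-downloading everything.
JOB_REPO="$(mktemp -d)"
tar xzf m2-seed.tar.gz -C "${JOB_REPO}"
echo "would run: mvn -Dmaven.repo.local=${JOB_REPO} test"
```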

On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
wrote:

> +hadoop common dev. Any suggestions?
>
>
> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
> wrote:
>
> > You can change your maven call to use a different repository - I believe
> > you do that with -Dmaven.repository.local=path/to/repo
> > On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> >
> >> Hi,
> >>
> >> We are seeing some strange behaviors in HDFS precommit build. It seems
> >> like it is caused by the local repo on the same machine being used by
> >> different concurrent jobs which can cause issues.
> >>
> >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> >> SNAPSHOT.jar. HDFS-9004 adds some new method to
> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> >> In the precommit build for HDFS-9004, unit tests for
> "hadoop-hdfs-project/hdfs"
> >> complain the method isn't defined
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> >> Interestingly sometimes it just works fine
> >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> >>
> >> So we are suspecting that there is another job running at the same time
> >> that published different version of
> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> >> which doesn't have the new methods defined to the local repo which is
> >> shared by all jobs on that machine.
> >>
> >> If the above analysis is correct, what is the best way to fix the issue
> >> so that different jobs can use their own maven local repo for build and
> >> test?
> >>
> >> Thanks.
> >>
> >> Ming
> >>
> >
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Ming Ma
+hadoop common dev. Any suggestions?


On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
wrote:

> You can change your maven call to use a different repository - I believe
> you do that with -Dmaven.repository.local=path/to/repo
> On Sep 18, 2015 19:39, "Ming Ma"  wrote:
>
>> Hi,
>>
>> We are seeing some strange behaviors in HDFS precommit build. It seems
>> like it is caused by the local repo on the same machine being used by
>> different concurrent jobs which can cause issues.
>>
>> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
>> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
>> SNAPSHOT.jar. HDFS-9004 adds some new method to 
>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
>> In the precommit build for HDFS-9004, unit tests for 
>> "hadoop-hdfs-project/hdfs"
>> complain the method isn't defined
>> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
>> Interestingly sometimes it just works fine
>> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
>>
>> So we are suspecting that there is another job running at the same time
>> that published different version of hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
>> which doesn't have the new methods defined to the local repo which is
>> shared by all jobs on that machine.
>>
>> If the above analysis is correct, what is the best way to fix the issue
>> so that different jobs can use their own maven local repo for build and
>> test?
>>
>> Thanks.
>>
>> Ming
>>
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Ming Ma
The increase in frequency might have been due to the refactoring of 
hadoop-hdfs-client-*.jar out of the main hadoop-hdfs-*.jar. I don't have 
overall metrics for how often this happens when anyone changes protobuf, but 
based on HDFS-9004, 4 of 5 runs have this issue, which is a lot for any patch 
that changes APIs. This isn't limited to HDFS; there are cases of YARN API 
changes causing MR unit tests to fail.

So far, the workaround I use is to keep resubmitting the build until it 
succeeds. Another approach we could consider is to provide an option for the 
patch submitter to use their own local repo when they submit the patch. In that 
way, the majority of patches can still use the shared local repo.

On Fri, Sep 18, 2015 at 3:14 PM, Andrew Wang <andrew.w...@cloudera.com>
wrote:

> Okay, some browsing of Jenkins docs [1] says that we could key the
> maven.repo.local off of $EXECUTOR_NUMBER to do per-executor repos like
> Bernd recommended, but that still requires some hook into test-patch.sh.
>
> Regarding install, I thought all we needed to install was
> hadoop-maven-plugins, but we do more than that now in test-patch.sh. Not
> sure if we can reduce that.
>
> [1]
>
> https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables
>
> On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer <a...@altiscale.com>
> wrote:
>
> >
> > The collisions have been happening for about a year now.   The frequency
> > is increasing, but not enough to be particularly worrisome. (So I'm
> > slightly amused that one blowing up is suddenly a major freakout.)
> >
> > Making changes to the configuration without knowing what one is doing is
> > probably a bad idea. For example, if people are removing the shared
> cache,
> > I hope they're also prepared for the bitching that is going to go with
> the
> > extremely significant slow down caused by downloading the java prereqs
> for
> > building for every test...
> >
> > As far as Yetus goes, we've got a JIRA open to provide for per-instance
> > caches when using the docker container code. I've got it in my head how I
> > think we can do it, but just haven't had a chance to code it.  So once
> that
> > gets written up + turning on containers should make the problem go away
> > without any significant impact on test time.  Of course, that won't help
> > the scheduled builds but those happen at an even smaller rate.
> >
> >
> > On Sep 18, 2015, at 12:19 PM, Andrew Wang <andrew.w...@cloudera.com>
> > wrote:
> >
> > > Sangjin, you should have access to the precommit jobs if you log in
> with
> > > your Apache credentials, even as a branch committer.
> > >
> > > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> > >
> > > The actual maven invocation is managed by test-patch.sh though.
> > > test-patch.sh has a MAVEN_ARGS which looks like what we want, but I
> don't
> > > think we can just set it before calling test-patch, since it'd get
> > squashed
> > > by setup_defaults.
> > >
> > > Allen/Chris/Yetus folks, any guidance here?
> > >
> > > Thanks,
> > > Andrew
> > >
> > > On Fri, Sep 18, 2015 at 11:55 AM, <e...@zusammenkunft.net> wrote:
> > >
> > >> You can use one per build processor, that reduces concurrent updates
> but
> > >> still keeps the cache function. And then try to avoid using install.
> > >>
> > >> --
> > >> http://bernd.eckenfels.net
> > >>
> > >> -Original Message-
> > >> From: Andrew Wang <andrew.w...@cloudera.com>
> > >> To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
> > >> Cc: Andrew Bayer <andrew.ba...@gmail.com>, Sangjin Lee <
> > sj...@twitter.com>,
> > >> Lei Xu <l...@cloudera.com>, infrastruct...@apache.org
> > >> Sent: Fr., 18 Sep. 2015 20:42
> > >> Subject: Re: Local repo sharing for maven builds
> > >>
> > >> I think each job should use a maven.repo.local within its workspace
> like
> > >> abayer said. This means lots of downloading, but it's isolated.
> > >>
> > >> If we care about download time, we could also bootstrap with a tarred
> > >> .m2/repository after we've run a `mvn compile`, so before it installs
> > the
> > >> hadoop artifacts.
> > >>
> > >> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma <min...@twitter.com.invalid
> >
> > >> wrote:
> > >>
> &

Re: Local repo sharing for maven builds

2015-09-18 Thread Roman Shaposhnik
On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer  wrote:
> As far as Yetus goes, we've got a JIRA open to provide for per-instance 
> caches when
> using the docker container code. I've got it in my head how I think we can do 
> it, but just
> haven't had a chance to code it.  So once that gets written up + turning on 
> containers
> should make the problem go away without any significant impact on test time.
> Of course, that won't help the scheduled builds but those happen at an even 
> smaller rate.

I'm about to start doing quite a bit of dockerized builds on ASF
Jenkins and any best
practices around caching packages and Maven repos would be greatly appreciated.

If nothing else, that'll reduce the I/O load on ASF infra.

Thanks,
Roman.


Re: Local repo sharing for maven builds

2015-09-18 Thread Allen Wittenauer
 Sangjin, you should have access to the precommit jobs if you log in
>> with
>>>> your Apache credentials, even as a branch committer.
>>>> 
>>>> https://builds.apache.org/job/PreCommit-HDFS-Build/configure
>>>> 
>>>> The actual maven invocation is managed by test-patch.sh though.
>>>> test-patch.sh has a MAVEN_ARGS which looks like what we want, but I
>> don't
>>>> think we can just set it before calling test-patch, since it'd get
>>> squashed
>>>> by setup_defaults.
>>>> 
>>>> Allen/Chris/Yetus folks, any guidance here?
>>>> 
>>>> Thanks,
>>>> Andrew
>>>> 
>>>> On Fri, Sep 18, 2015 at 11:55 AM, <e...@zusammenkunft.net> wrote:
>>>> 
>>>>> You can use one per build processor, that reduces concurrent updates
>> but
>>>>> still keeps the cache function. And then try to avoid using install.
>>>>> 
>>>>> --
>>>>> http://bernd.eckenfels.net
>>>>> 
>>>>> -Original Message-
>>>>> From: Andrew Wang <andrew.w...@cloudera.com>
>>>>> To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
>>>>> Cc: Andrew Bayer <andrew.ba...@gmail.com>, Sangjin Lee <
>>> sj...@twitter.com>,
>>>>> Lei Xu <l...@cloudera.com>, infrastruct...@apache.org
>>>>> Sent: Fr., 18 Sep. 2015 20:42
>>>>> Subject: Re: Local repo sharing for maven builds
>>>>> 
>>>>> I think each job should use a maven.repo.local within its workspace
>> like
>>>>> abayer said. This means lots of downloading, but it's isolated.
>>>>> 
>>>>> If we care about download time, we could also bootstrap with a tarred
>>>>> .m2/repository after we've run a `mvn compile`, so before it installs
>>> the
>>>>> hadoop artifacts.
>>>>> 
>>>>> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma <min...@twitter.com.invalid
>>> 
>>>>> wrote:
>>>>> 
>>>>>> +hadoop common dev. Any suggestions?
>>>>>> 
>>>>>> 
>>>>>> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer <
>> andrew.ba...@gmail.com
>>>> 
>>>>>> wrote:
>>>>>> 
>>>>>>> You can change your maven call to use a different repository - I
>>>>> believe
>>>>>>> you do that with -Dmaven.repository.local=path/to/repo
>>>>>>> On Sep 18, 2015 19:39, "Ming Ma" <min...@twitter.com> wrote:
>>>>>>> 
>>>>>>>> Hi,
>>>>>>>> 
>>>>>>>> We are seeing some strange behaviors in HDFS precommit build. It
>>> seems
>>>>>>>> like it is caused by the local repo on the same machine being used
>> by
>>>>>>>> different concurrent jobs which can cause issues.
>>>>>>>> 
>>>>>>>> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
>>>>>>>> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
>>>>>>>> SNAPSHOT.jar. HDFS-9004 adds some new method to
>>>>>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
>>>>>>>> In the precommit build for HDFS-9004, unit tests for
>>>>>> "hadoop-hdfs-project/hdfs"
>>>>>>>> complain the method isn't defined
>>>>>>>> 
>> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/
>>> .
>>>>>>>> Interestingly sometimes it just works fine
>>>>>>>> 
>> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/
>>> .
>>>>>>>> 
>>>>>>>> So we are suspecting that there is another job running at the same
>>>>> time
>>>>>>>> that published different version of
>>>>>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
>>>>>>>> which doesn't have the new methods defined to the local repo which
>> is
>>>>>>>> shared by all jobs on that machine.
>>>>>>>> 
>>>>>>>> If the above analysis is correct, what is the best way to fix the
>>>>> issue
>>>>>>>> so that different jobs can use their own maven local repo for build
>>>>> and
>>>>>>>> test?
>>>>>>>> 
>>>>>>>> Thanks.
>>>>>>>> 
>>>>>>>> Ming
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>> 
>>> 
>> 



Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
I just filed YETUS-4 for supporting additional maven args.

https://issues.apache.org/jira/browse/YETUS-4

Theoretically, we should be able to run unittests without a full `mvn
install` right? The "test" phase comes before "package" or "install", so I
figured it only needed class files. Maybe the multi-module-ness screws this
up.

On Fri, Sep 18, 2015 at 9:23 PM, Roman Shaposhnik 
wrote:

> On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer 
> wrote:
> > As far as Yetus goes, we've got a JIRA open to provide for per-instance
> caches when
> > using the docker container code. I've got it in my head how I think we
> can do it, but just
> > haven't had a chance to code it.  So once that gets written up + turning
> on containers
> > should make the problem go away without any significant impact on test
> time.
> > Of course, that won't help the scheduled builds but those happen at an
> even smaller rate.
>
> I'm about to start doing quite a bit of dockerized builds on ASF
> Jenkins and any best
> practices around caching packages and Maven repos would be greatly
> appreciated.
>
> If nothing else, that'll reduce the I/O load on ASF infra.
>
> Thanks,
> Roman.
>


Re: Local repo sharing for maven builds

2015-09-18 Thread Andrew Wang
Okay, some browsing of Jenkins docs [1] says that we could key the
maven.repo.local off of $EXECUTOR_NUMBER to do per-executor repos like
Bernd recommended, but that still requires some hook into test-patch.sh.

Regarding install, I thought all we needed to install was
hadoop-maven-plugins, but we do more than that now in test-patch.sh. Not
sure if we can reduce that.

[1]
https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables
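The per-executor keying can be sketched as below. EXECUTOR_NUMBER and WORKSPACE are the standard Jenkins environment variables; the hook into test-patch.sh is the part that is still assumed. Note the Maven property is `maven.repo.local` (the spelling `maven.repository.local` used earlier in the thread is not a real Maven property).

```shell
# Derive a local repo per Jenkins executor so concurrent executors on one
# node never write to the same cache, while each executor keeps its own
# cache across builds.
REPO="${WORKSPACE:-${PWD}}/m2-repo-${EXECUTOR_NUMBER:-0}"
mkdir -p "${REPO}"
MAVEN_ARGS="-Dmaven.repo.local=${REPO}"
echo "would run: mvn ${MAVEN_ARGS} clean test"
```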

On Fri, Sep 18, 2015 at 2:42 PM, Allen Wittenauer <a...@altiscale.com> wrote:

>
> The collisions have been happening for about a year now. The frequency
> is increasing, but not enough to be particularly worrisome. (So I'm
> slightly amused that one blowing up is suddenly a major freakout.)
>
> Making changes to the configuration without knowing what one is doing is
> probably a bad idea. For example, if people are removing the shared cache,
> I hope they're also prepared for the bitching that is going to go with the
> extremely significant slowdown caused by downloading the Java prereqs for
> building for every test...
>
> As far as Yetus goes, we've got a JIRA open to provide for per-instance
> caches when using the docker container code. I've got it in my head how I
> think we can do it, but just haven't had a chance to code it.  So once that
> gets written up + turning on containers should make the problem go away
> without any significant impact on test time.  Of course, that won't help
> the scheduled builds but those happen at an even smaller rate.
>
>
> On Sep 18, 2015, at 12:19 PM, Andrew Wang <andrew.w...@cloudera.com>
> wrote:
>
> > Sangjin, you should have access to the precommit jobs if you log in with
> > your Apache credentials, even as a branch committer.
> >
> > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> >
> > The actual maven invocation is managed by test-patch.sh though.
> > test-patch.sh has a MAVEN_ARGS which looks like what we want, but I don't
> > think we can just set it before calling test-patch, since it'd get
> squashed
> > by setup_defaults.
> >
> > Allen/Chris/Yetus folks, any guidance here?
> >
> > Thanks,
> > Andrew
> >
> > On Fri, Sep 18, 2015 at 11:55 AM, <e...@zusammenkunft.net> wrote:
> >
> >> You can use one per build processor, that reduces concurrent updates but
> >> still keeps the cache function. And then try to avoid using install.
> >>
> >> --
> >> http://bernd.eckenfels.net
> >>
> >> -Original Message-
> >> From: Andrew Wang <andrew.w...@cloudera.com>
> >> To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
> >> Cc: Andrew Bayer <andrew.ba...@gmail.com>, Sangjin Lee <
> sj...@twitter.com>,
> >> Lei Xu <l...@cloudera.com>, infrastruct...@apache.org
> >> Sent: Fr., 18 Sep. 2015 20:42
> >> Subject: Re: Local repo sharing for maven builds
> >>
> >> I think each job should use a maven.repo.local within its workspace like
> >> abayer said. This means lots of downloading, but it's isolated.
> >>
> >> If we care about download time, we could also bootstrap with a tarred
> >> .m2/repository after we've run a `mvn compile`, so before it installs
> the
> >> hadoop artifacts.
> >>
> >> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma <min...@twitter.com.invalid>
> >> wrote:
> >>
> >>> +hadoop common dev. Any suggestions?
> >>>
> >>>
> >>> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer <andrew.ba...@gmail.com
> >
> >>> wrote:
> >>>
> >>>> You can change your maven call to use a different repository - I
> >> believe
> >>>> you do that with -Dmaven.repo.local=path/to/repo
> >>>> On Sep 18, 2015 19:39, "Ming Ma" <min...@twitter.com> wrote:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> We are seeing some strange behaviors in HDFS precommit build. It
> seems
> >>>>> like it is caused by the local repo on the same machine being used by
> >>>>> different concurrent jobs which can cause issues.
> >>>>>
> >>>>> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> >>>>> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> >>>>> SNAPSHOT.jar. HDFS-9004 adds some new methods to
> >>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.

Re: Local repo sharing for maven builds

2015-09-18 Thread Allen Wittenauer

The collisions have been happening for about a year now.   The frequency is 
increasing, but not enough to be particularly worrisome. (So I'm slightly 
amused that one blowing up is suddenly a major freakout.) 

Making changes to the configuration without knowing what one is doing is 
probably a bad idea. For example, if people are removing the shared cache, I 
hope they're also prepared for the bitching that is going to go with the 
extremely significant slow down caused by downloading the java prereqs for 
building for every test...

As far as Yetus goes, we've got a JIRA open to provide for per-instance caches 
when using the docker container code. I've got it in my head how I think we can 
do it, but just haven't had a chance to code it.  So once that gets written up 
and containers are turned on, the problem should go away without any significant 
impact on test time.  Of course, that won't help the scheduled builds but those 
happen at an even smaller rate.
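
A hedged sketch of that per-instance cache idea: give each container its own
named volume mounted over the in-container ~/.m2, so concurrent jobs cannot
overwrite each other's SNAPSHOT jars (the image and volume names below are
invented for illustration, not the actual Yetus setup):

```shell
# One private Maven cache per container instance; "hadoop-precommit" and the
# volume naming scheme are assumptions for the sake of the example.
INSTANCE_ID="${EXECUTOR_NUMBER:-1}"
DOCKER_CMD="docker run --rm -v m2-cache-$INSTANCE_ID:/root/.m2 hadoop-precommit mvn clean install"
echo "$DOCKER_CMD"
```

Because the named volume persists across runs, each instance keeps its cache
warm while staying isolated from every other instance on the node.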


On Sep 18, 2015, at 12:19 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:

> Sangjin, you should have access to the precommit jobs if you log in with
> your Apache credentials, even as a branch committer.
> 
> https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> 
> The actual maven invocation is managed by test-patch.sh though.
> test-patch.sh has a MAVEN_ARGS which looks like what we want, but I don't
> think we can just set it before calling test-patch, since it'd get squashed
> by setup_defaults.
> 
> Allen/Chris/Yetus folks, any guidance here?
> 
> Thanks,
> Andrew
> 
> On Fri, Sep 18, 2015 at 11:55 AM, <e...@zusammenkunft.net> wrote:
> 
>> You can use one per build processor, that reduces concurrent updates but
>> still keeps the cache function. And then try to avoid using install.
>> 
>> --
>> http://bernd.eckenfels.net
>> 
>> -Original Message-
>> From: Andrew Wang <andrew.w...@cloudera.com>
>> To: "common-dev@hadoop.apache.org" <common-dev@hadoop.apache.org>
>> Cc: Andrew Bayer <andrew.ba...@gmail.com>, Sangjin Lee <sj...@twitter.com>,
>> Lei Xu <l...@cloudera.com>, infrastruct...@apache.org
>> Sent: Fr., 18 Sep. 2015 20:42
>> Subject: Re: Local repo sharing for maven builds
>> 
>> I think each job should use a maven.repo.local within its workspace like
>> abayer said. This means lots of downloading, but it's isolated.
>> 
>> If we care about download time, we could also bootstrap with a tarred
>> .m2/repository after we've run a `mvn compile`, so before it installs the
>> hadoop artifacts.
>> 
>> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma <min...@twitter.com.invalid>
>> wrote:
>> 
>>> +hadoop common dev. Any suggestions?
>>> 
>>> 
>>> On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer <andrew.ba...@gmail.com>
>>> wrote:
>>> 
>>>> You can change your maven call to use a different repository - I
>> believe
>>>> you do that with -Dmaven.repo.local=path/to/repo
>>>> On Sep 18, 2015 19:39, "Ming Ma" <min...@twitter.com> wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> We are seeing some strange behaviors in HDFS precommit build. It seems
>>>>> like it is caused by the local repo on the same machine being used by
>>>>> different concurrent jobs which can cause issues.
>>>>> 
>>>>> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
>>>>> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
>>>>> SNAPSHOT.jar. HDFS-9004 adds some new methods to
>>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
>>>>> In the precommit build for HDFS-9004, unit tests for
>>> "hadoop-hdfs-project/hdfs"
>>>>> complain the method isn't defined
>>>>> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
>>>>> Interestingly sometimes it just works fine
>>>>> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
>>>>> 
>>>>> So we are suspecting that there is another job running at the same
>> time
> >>>>> that published a different version of
>>> hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
>>>>> which doesn't have the new methods defined to the local repo which is
>>>>> shared by all jobs on that machine.
>>>>> 
>>>>> If the above analysis is correct, what is the best way to fix the
>> issue
>>>>> so that different jobs can use their own maven local repo for build
>> and
>>>>> test?
>>>>> 
>>>>> Thanks.
>>>>> 
>>>>> Ming
>>>>> 
>>>> 
>>> 
>> 



Re: Local repo sharing for maven builds

2015-09-18 Thread Sangjin Lee
Are we using maven.repo.local in our pre-commit or commit jobs? We cannot
see the configuration of these Jenkins jobs.
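
The workspace-local variant Andrew proposes below would amount to something
like this ($WORKSPACE is the variable Jenkins sets per job; the goals and
repo directory name are illustrative assumptions):

```shell
# Per-job repo inside the Jenkins workspace: fully isolated between jobs, at
# the cost of re-downloading dependencies. A sketch, not the job config.
WORKSPACE="${WORKSPACE:-/tmp/ws}"
CMD="mvn -Dmaven.repo.local=$WORKSPACE/.m2repo clean test"
echo "$CMD"
```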

On Fri, Sep 18, 2015 at 11:41 AM, Andrew Wang 
wrote:

> I think each job should use a maven.repo.local within its workspace like
> abayer said. This means lots of downloading, but it's isolated.
>
> If we care about download time, we could also bootstrap with a tarred
> .m2/repository after we've run a `mvn compile`, so before it installs the
> hadoop artifacts.
>
> On Fri, Sep 18, 2015 at 11:02 AM, Ming Ma 
> wrote:
>
> > +hadoop common dev. Any suggestions?
> >
> >
> > On Fri, Sep 18, 2015 at 10:41 AM, Andrew Bayer 
> > wrote:
> >
> > > You can change your maven call to use a different repository - I
> believe
> > > you do that with -Dmaven.repo.local=path/to/repo
> > > On Sep 18, 2015 19:39, "Ming Ma"  wrote:
> > >
> > >> Hi,
> > >>
> > >> We are seeing some strange behaviors in HDFS precommit build. It seems
> > >> like it is caused by the local repo on the same machine being used by
> > >> different concurrent jobs which can cause issues.
> > >>
> > >> In HDFS, the build and test of "hadoop-hdfs-project/hdfs" depend on
> > >> "hadoop-hdfs-project/hdfs-client"'s  hadoop-hdfs-client-3.0.0-
> > >> SNAPSHOT.jar. HDFS-9004 adds some new methods to
> > hadoop-hdfs-client-3.0.0-SNAPSHOT.jar.
> > >> In the precommit build for HDFS-9004, unit tests for
> > "hadoop-hdfs-project/hdfs"
> > >> complain the method isn't defined
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12522/testReport/.
> > >> Interestingly sometimes it just works fine
> > >> https://builds.apache.org/job/PreCommit-HDFS-Build/12507/testReport/.
> > >>
> > >> So we are suspecting that there is another job running at the same
> time
> > >> that published a different version of
> > hadoop-hdfs-client-3.0.0-SNAPSHOT.jar
> > >> which doesn't have the new methods defined to the local repo which is
> > >> shared by all jobs on that machine.
> > >>
> > >> If the above analysis is correct, what is the best way to fix the
> issue
> > >> so that different jobs can use their own maven local repo for build
> and
> > >> test?
> > >>
> > >> Thanks.
> > >>
> > >> Ming
> > >>
> > >
> >
>