[GitHub] [hadoop-thirdparty] Apache9 commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-09-27 Thread GitBox
Apache9 commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] 
Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r329297329
 
 

 ##
 File path: pom.xml
 ##
 @@ -0,0 +1,466 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Licensed to the Apache Software Foundation (ASF) under the
+     Apache License, Version 2.0 (the "License"). -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <groupId>org.apache.hadoop.thirdparty</groupId>
+  <artifactId>hadoop-thirdparty</artifactId>
+  <version>1.0.0-SNAPSHOT</version>
+  <parent>
+    <groupId>org.apache</groupId>
+    <artifactId>apache</artifactId>
+    <version>21</version>
+  </parent>
+
+  <name>Apache Hadoop Third-party Libs</name>
+  <packaging>pom</packaging>
+  <description>
+    Packaging of relocated (renamed, shaded) third-party libraries used by Hadoop.
+  </description>
+
+  <distributionManagement>
+    <repository>
+      <id>${distMgmtStagingId}</id>
+      <name>${distMgmtStagingName}</name>
+      <url>${distMgmtStagingUrl}</url>
+    </repository>
+    <snapshotRepository>
+      <id>${distMgmtSnapshotsId}</id>
+      <name>${distMgmtSnapshotsName}</name>
+      <url>${distMgmtSnapshotsUrl}</url>
+    </snapshotRepository>
+    <site>
+      <id>apache.website</id>
+      <url>scpexe://people.apache.org/www/hadoop.apache.org/docs/rthirdparty-${project.version}</url>
+    </site>
+  </distributionManagement>
+
+  <repositories>
+    <repository>
+      <id>${distMgmtSnapshotsId}</id>
+      <name>${distMgmtSnapshotsName}</name>
+      <url>${distMgmtSnapshotsUrl}</url>
+    </repository>
+    <repository>
+      <id>repository.jboss.org</id>
+      <url>https://repository.jboss.org/nexus/content/groups/public/</url>
+      <snapshots>
+        <enabled>false</enabled>
+      </snapshots>
+    </repository>
+  </repositories>
+
+  <licenses>
+    <license>
+      <name>Apache License, Version 2.0</name>
+      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
+    </license>
+  </licenses>
+
+  <properties>
+    <maven-shade-plugin.version>3.2.0</maven-shade-plugin.version>
+
+    <javac.version>1.8</javac.version>
+
+    <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
+    <distMgmtSnapshotsName>Apache Development Snapshot Repository</distMgmtSnapshotsName>
+    <distMgmtSnapshotsUrl>https://repository.apache.org/content/repositories/snapshots</distMgmtSnapshotsUrl>
+    <distMgmtStagingId>apache.staging.https</distMgmtStagingId>
+    <distMgmtStagingName>Apache Release Distribution Repository</distMgmtStagingName>
+    <distMgmtStagingUrl>https://repository.apache.org/service/local/staging/deploy/maven2</distMgmtStagingUrl>
+
+    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
+
+    <shaded.prefix>org.apache.hadoop.thirdparty</shaded.prefix>
+    <protobuf.version>3.7.1</protobuf.version>
+
+    <maven-deploy-plugin.version>2.8.1</maven-deploy-plugin.version>
+    <maven-site-plugin.version>3.6</maven-site-plugin.version>
+    <maven-stylus-skin.version>1.5</maven-stylus-skin.version>
+    <maven-antrun-plugin.version>1.7</maven-antrun-plugin.version>
+    <maven-assembly-plugin.version>2.5</maven-assembly-plugin.version>
+    <maven-dependency-plugin.version>3.0.2</maven-dependency-plugin.version>
+    <maven-enforcer-plugin.version>3.0.0-M1</maven-enforcer-plugin.version>
+    <maven-javadoc-plugin.version>3.0.1</maven-javadoc-plugin.version>
+    <maven-gpg-plugin.version>1.5</maven-gpg-plugin.version>
+    <apache-rat-plugin.version>0.12</apache-rat-plugin.version>
+    <wagon-ssh.version>2.4</wagon-ssh.version>
+    <maven-bundle-plugin.version>2.5.0</maven-bundle-plugin.version>
+    <maven-checkstyle-plugin.version>3.0.0</maven-checkstyle-plugin.version>
+    <checkstyle.version>8.19</checkstyle.version>
+    <dependency-check-maven.version>1.4.3</dependency-check-maven.version>
+    <exec-maven-plugin.version>1.3.1</exec-maven-plugin.version>
+  </properties>
+
+  <organization>
+    <name>Apache Software Foundation</name>
+    <url>https://www.apache.org</url>
+  </organization>
+
+  <modules>
+    <module>hadoop-shaded-protobuf</module>
+  </modules>
+
+  <build>
+    <pluginManagement>
+      <plugins>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-dependency-plugin</artifactId>
+          <version>${maven-dependency-plugin.version}</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-enforcer-plugin</artifactId>
+          <version>${maven-enforcer-plugin.version}</version>
+          <configuration>
+            <rules>
+              <requireMavenVersion>
+                <version>[3.0.2,)</version>
+              </requireMavenVersion>
+              <requireJavaVersion>
+                <version>[1.8,)</version>
+              </requireJavaVersion>
+            </rules>
+          </configuration>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-assembly-plugin</artifactId>
+          <version>${maven-assembly-plugin.version}</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-deploy-plugin</artifactId>
+          <version>${maven-deploy-plugin.version}</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.rat</groupId>
+          <artifactId>apache-rat-plugin</artifactId>
+          <version>${apache-rat-plugin.version}</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-antrun-plugin</artifactId>
+          <version>${maven-antrun-plugin.version}</version>
+        </plugin>
+        <plugin>
+          <groupId>org.codehaus.mojo</groupId>
+          <artifactId>exec-maven-plugin</artifactId>
+          <version>${exec-maven-plugin.version}</version>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-site-plugin</artifactId>
+          <version>${maven-site-plugin.version}</version>
+          <dependencies>
+            <dependency>
+              <groupId>org.apache.maven.wagon</groupId>
+              <artifactId>wagon-ssh</artifactId>
+              <version>${wagon-ssh.version}</version>
+            </dependency>
+            <dependency>
+              <groupId>org.apache.maven.doxia</groupId>
+              <artifactId>doxia-module-markdown</artifactId>
+              <version>1.8</version>
+            </dependency>
+          </dependencies>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.felix</groupId>
+          <artifactId>maven-bundle-plugin</artifactId>
+          <version>${maven-bundle-plugin.version}</version>
+        </plugin>
+      </plugins>
+    </pluginManagement>
+  </build>
 
 Review comment:
  Do we need checkstyle? We do not have any hand-written Java files in the 
project...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[GitHub] [hadoop-thirdparty] Apache9 commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-09-27 Thread GitBox
Apache9 commented on a change in pull request #1: HADOOP-16595. [pb-upgrade] 
Create hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1#discussion_r329297473
 
 

 ##
 File path: hadoop-shaded-protobuf/pom.xml
 ##
 @@ -0,0 +1,120 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Licensed to the Apache Software Foundation (ASF) under the
+     Apache License, Version 2.0 (the "License"). -->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <artifactId>hadoop-thirdparty</artifactId>
+    <groupId>org.apache.hadoop.thirdparty</groupId>
+    <version>1.0.0-SNAPSHOT</version>
+    <relativePath>..</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>hadoop-shaded-protobuf</artifactId>
+  <name>Apache Hadoop shaded Protobuf</name>
+  <packaging>jar</packaging>
+
+  <dependencies>
+    <dependency>
+      <groupId>com.google.protobuf</groupId>
+      <artifactId>protobuf-java</artifactId>
+      <version>${protobuf.version}</version>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-shade-plugin</artifactId>
+        <configuration>
+          <createSourcesJar>true</createSourcesJar>
+        </configuration>
+        <dependencies>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-maven-plugins</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+        </dependencies>
+        <executions>
+          <execution>
+            <id>shade-protobuf</id>
+            <phase>package</phase>
+            <goals>
+              <goal>shade</goal>
+            </goals>
+            <configuration>
+              <artifactSet>
+                <includes>
+                  <include>com.google.protobuf:protobuf-java</include>
+                </includes>
+              </artifactSet>
+              <filters>
+                <filter>
+                  <artifact>com.google.protobuf:*</artifact>
+                  <includes>
+                    <include>**/*</include>
+                  </includes>
+                </filter>
+              </filters>
+              <relocations>
+                <relocation>
+                  <pattern>com/google/protobuf</pattern>
+                  <shadedPattern>${shaded.prefix}.com.google.protobuf</shadedPattern>
+                </relocation>
+                <relocation>
+                  <pattern>google/</pattern>
+                  <shadedPattern>${shaded.prefix}.google.</shadedPattern>
+                  <includes>
+                    <include>**/*.proto</include>
+                  </includes>
+                </relocation>
+              </relocations>
+            </configuration>
+          </execution>
 
 Review comment:
   https://issues.apache.org/jira/browse/MSHADE-182 has been resolved for 
years, so we do not need this any more?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2019-09-27 Thread Duo Zhang
For HBase we have a separate repo for hbase-thirdparty

https://github.com/apache/hbase-thirdparty

We will publish the artifacts to nexus so we do not need to include
binaries in our git repo; we just add a dependency in the pom.

https://mvnrepository.com/artifact/org.apache.hbase.thirdparty/hbase-shaded-protobuf


And it has its own release cycle; we release only when there are special
requirements or we want to upgrade some of the dependencies. This is the vote
thread for the newest release, where we want to provide a shaded gson for jdk7.

https://lists.apache.org/thread.html/f12c589baabbc79c7fb2843422d4590bea982cd102e2bd9d21e9884b@%3Cdev.hbase.apache.org%3E


Thanks.

Vinayakumar B wrote on Sat, Sep 28, 2019 at 1:28 AM:

> Please find replies inline.
>
> -Vinay
>
> On Fri, Sep 27, 2019 at 10:21 PM Owen O'Malley 
> wrote:
>
> > I'm very unhappy with this direction. In particular, I don't think git is
> > a good place for distribution of binary artifacts. Furthermore, the PMC
> > shouldn't be releasing anything without a release vote.
> >
> >
> The proposed solution doesn't release any binaries in git. It's actually a
> complete sub-project which follows the entire release process, including a
> public VOTE. I have mentioned already that the release process is similar to
> Hadoop's. To be specific, it uses the (almost) same script used in hadoop to
> generate artifacts, sign them, and deploy to the staging repository. Please
> let me know if I am conveying anything wrong.
>
>
> > I'd propose that we make a third party module that contains the *source*
> > of the pom files to build the relocated jars. This should absolutely be
> > treated as a last resort for the mostly Google projects that regularly
> > break binary compatibility (eg. Protobuf & Guava).
> >
> >
> The same has been implemented in the PR
> https://github.com/apache/hadoop-thirdparty/pull/1. Please check and let me
> know if I misunderstood. Yes, this is the last option we have, AFAIK.
>
>
> > In terms of naming, I'd propose something like:
> >
> > org.apache.hadoop.thirdparty.protobuf2_5
> > org.apache.hadoop.thirdparty.guava28
> >
> > In particular, I think we absolutely need to include the version of the
> > underlying project. On the other hand, since we should not be shading
> > *everything* we can drop the leading com.google.
> >
> >
> IMO, this naming convention makes it easy to identify the underlying project,
> but it will be difficult to maintain going forward if the underlying
> project's version changes. Since the thirdparty module has its own releases,
> each of those releases can be mapped to a specific version of the underlying
> project. The binary artifact can even include a MANIFEST with the underlying
> project's details, as per Steve's suggestion on HADOOP-13363.
> That said, if you still prefer to have the project version in the artifact
> id, it can be done.
>
> The Hadoop project can make releases of the thirdparty module:
> >
> > <dependency>
> >   <groupId>org.apache.hadoop</groupId>
> >   <artifactId>hadoop-thirdparty-protobuf25</artifactId>
> >   <version>1.0</version>
> > </dependency>
> >
> >
> Note that the version has to be the hadoop thirdparty release number, which
> > is part of why you need to have the underlying version in the artifact
> > name. These we can push to maven central as new releases from Hadoop.
> >
> >
> Exactly, the same has been implemented in the PR. The hadoop-thirdparty
> module has its own releases, but in the HADOOP Jira, thirdparty versions can
> be differentiated using the prefix "thirdparty-".
>
> The same solution is being followed in HBase. Maybe people involved in HBase
> can add some points here.
>
> Thoughts?
> >
> > .. Owen
> >
> > On Fri, Sep 27, 2019 at 8:38 AM Vinayakumar B 
> > wrote:
> >
> >> Hi All,
> >>
> >>I wanted to discuss the separate repo for thirdparty dependencies
> >> which we need to shade and include in Hadoop components' jars.
> >>
> >>Apologies for the big text ahead, but this needs clear explanation!!
> >>
> >>Right now the most needed such dependency is protobuf. The protobuf
> >> dependency was not upgraded from 2.5.0 onwards for fear that downstream
> >> builds, which depend on the transitive protobuf dependency coming from
> >> hadoop's jars, might fail with the upgrade. Apparently protobuf does not
> >> guarantee source compatibility, though it guarantees wire compatibility
> >> between versions. Because of this behavior, a version upgrade may cause
> >> breakage in known and unknown (private?) downstreams.
> >>
> >>So to tackle this, we came up with the following proposal in HADOOP-13363.
> >>
> >>Luckily, as far as I know, no APIs, either public to users or between
> >> Hadoop processes, directly use protobuf classes in signatures. (If
> >> any exist, please let us know.)
> >>
> >>Proposal:
> >>
> >>
> >>1. Create artifact(s) which contain the shaded dependencies. All such
> >> shading/relocation will be done with the known prefix
> >> **org.apache.hadoop.thirdparty.**.
> >>2. To start with the protobuf jar (ex:
> >> o.a.h.thirdparty:hadoop-shaded-protobuf), all 

Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2019-09-27 Thread Vinayakumar B
Please find replies inline.

-Vinay

On Fri, Sep 27, 2019 at 10:21 PM Owen O'Malley 
wrote:

> I'm very unhappy with this direction. In particular, I don't think git is
> a good place for distribution of binary artifacts. Furthermore, the PMC
> shouldn't be releasing anything without a release vote.
>
>
The proposed solution doesn't release any binaries in git. It's actually a
complete sub-project which follows the entire release process, including a
public VOTE. I have mentioned already that the release process is similar to
Hadoop's. To be specific, it uses the (almost) same script used in hadoop to
generate artifacts, sign them, and deploy to the staging repository. Please
let me know if I am conveying anything wrong.


> I'd propose that we make a third party module that contains the *source*
> of the pom files to build the relocated jars. This should absolutely be
> treated as a last resort for the mostly Google projects that regularly
> break binary compatibility (eg. Protobuf & Guava).
>
>
The same has been implemented in the PR
https://github.com/apache/hadoop-thirdparty/pull/1. Please check and let me
know if I misunderstood. Yes, this is the last option we have, AFAIK.


> In terms of naming, I'd propose something like:
>
> org.apache.hadoop.thirdparty.protobuf2_5
> org.apache.hadoop.thirdparty.guava28
>
> In particular, I think we absolutely need to include the version of the
> underlying project. On the other hand, since we should not be shading
> *everything* we can drop the leading com.google.
>
>
IMO, this naming convention makes it easy to identify the underlying project,
but it will be difficult to maintain going forward if the underlying project's
version changes. Since the thirdparty module has its own releases, each of
those releases can be mapped to a specific version of the underlying project.
The binary artifact can even include a MANIFEST with the underlying project's
details, as per Steve's suggestion on HADOOP-13363.
That said, if you still prefer to have the project version in the artifact id,
it can be done.

The Hadoop project can make releases of the thirdparty module:
>
> <dependency>
>   <groupId>org.apache.hadoop</groupId>
>   <artifactId>hadoop-thirdparty-protobuf25</artifactId>
>   <version>1.0</version>
> </dependency>
>
>
Note that the version has to be the hadoop thirdparty release number, which
> is part of why you need to have the underlying version in the artifact
> name. These we can push to maven central as new releases from Hadoop.
>
>
Exactly, the same has been implemented in the PR. The hadoop-thirdparty module
has its own releases, but in the HADOOP Jira, thirdparty versions can be
differentiated using the prefix "thirdparty-".

The same solution is being followed in HBase. Maybe people involved in HBase
can add some points here.

Thoughts?
>
> .. Owen
>
> On Fri, Sep 27, 2019 at 8:38 AM Vinayakumar B 
> wrote:
>
>> Hi All,
>>
>>I wanted to discuss the separate repo for thirdparty dependencies
>> which we need to shade and include in Hadoop components' jars.
>>
>>Apologies for the big text ahead, but this needs clear explanation!!
>>
>>Right now the most needed such dependency is protobuf. The protobuf
>> dependency was not upgraded from 2.5.0 onwards for fear that downstream
>> builds, which depend on the transitive protobuf dependency coming from
>> hadoop's jars, might fail with the upgrade. Apparently protobuf does not
>> guarantee source compatibility, though it guarantees wire compatibility
>> between versions. Because of this behavior, a version upgrade may cause
>> breakage in known and unknown (private?) downstreams.
>>
>>So to tackle this, we came up with the following proposal in HADOOP-13363.
>>
>>Luckily, as far as I know, no APIs, either public to users or between
>> Hadoop processes, directly use protobuf classes in signatures. (If
>> any exist, please let us know.)
>>
>>Proposal:
>>
>>
>>1. Create artifact(s) which contain the shaded dependencies. All such
>> shading/relocation will be done with the known prefix
>> **org.apache.hadoop.thirdparty.**.
>>2. To start with the protobuf jar (ex: o.a.h.thirdparty:hadoop-shaded-protobuf),
>> all **com.google.protobuf** classes will be relocated as
>> **org.apache.hadoop.thirdparty.com.google.protobuf**.
>>3. Hadoop modules which need protobuf as a dependency will add this
>> shaded artifact as a dependency (ex:
>> o.a.h.thirdparty:hadoop-shaded-protobuf).
>>4. All previous usages of "com.google.protobuf" will be relocated to
>> "org.apache.hadoop.thirdparty.com.google.protobuf" in the code and will be
>> committed. Please note, this replacement is one-time, directly in source
>> code, NOT during compile and package.
>>5. Once all usages of "com.google.protobuf" are relocated, hadoop no longer
>> cares which version of the original "protobuf-java" is in the dependency
>> tree.
>>6. Just keep "protobuf-java:2.5.0" in the dependency tree so as not to break
>> downstreams. But hadoop will actually be using the latest protobuf
>> present in "o.a.h.thirdparty:hadoop-shaded-protobuf".
>>
>>7. Coming back to separate repo, 

[jira] [Created] (HADOOP-16613) contentType for fake directory files

2019-09-27 Thread Jose Torres (Jira)
Jose Torres created HADOOP-16613:


 Summary: contentType for fake directory files
 Key: HADOOP-16613
 URL: https://issues.apache.org/jira/browse/HADOOP-16613
 Project: Hadoop Common
  Issue Type: Bug
  Components: hadoop-aws
Reporter: Jose Torres


S3AFileSystem doesn't set a contentType for fake directory files, causing it to 
be inferred as "application/octet-stream", but fake directory files created 
through the S3 web console have content type "application/x-directory". We may 
want to adopt the web console behavior as a standard.
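
A minimal sketch of the kind of change this implies; the helper below is
hypothetical (not the actual S3AFileSystem code) and uses the AWS SDK v1 API
that hadoop-aws builds on:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.ByteArrayInputStream;

class FakeDirMarker {
  // Write the zero-byte "fake directory" marker with the content type the
  // S3 web console uses, instead of letting it be inferred as octet-stream.
  static void putFakeDirectory(AmazonS3 s3, String bucket, String key) {
    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentLength(0);
    meta.setContentType("application/x-directory"); // matches the web console
    s3.putObject(new PutObjectRequest(
        bucket, key, new ByteArrayInputStream(new byte[0]), meta));
  }
}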



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2019-09-27 Thread Owen O'Malley
I'm very unhappy with this direction. In particular, I don't think git is a
good place for distribution of binary artifacts. Furthermore, the PMC
shouldn't be releasing anything without a release vote.

I'd propose that we make a third party module that contains the *source* of
the pom files to build the relocated jars. This should absolutely be
treated as a last resort for the mostly Google projects that regularly
break binary compatibility (eg. Protobuf & Guava).

In terms of naming, I'd propose something like:

org.apache.hadoop.thirdparty.protobuf2_5
org.apache.hadoop.thirdparty.guava28

In particular, I think we absolutely need to include the version of the
underlying project. On the other hand, since we should not be shading
*everything* we can drop the leading com.google.

The Hadoop project can make releases of the thirdparty module:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-thirdparty-protobuf25</artifactId>
  <version>1.0</version>
</dependency>

Note that the version has to be the hadoop thirdparty release number, which
is part of why you need to have the underlying version in the artifact
name. These we can push to maven central as new releases from Hadoop.

Thoughts?

.. Owen

On Fri, Sep 27, 2019 at 8:38 AM Vinayakumar B 
wrote:

> Hi All,
>
>I wanted to discuss the separate repo for thirdparty dependencies
> which we need to shade and include in Hadoop components' jars.
>
>Apologies for the big text ahead, but this needs clear explanation!!
>
>Right now the most needed such dependency is protobuf. The protobuf
> dependency was not upgraded from 2.5.0 onwards for fear that downstream builds,
> which depend on the transitive protobuf dependency coming from hadoop's jars,
> might fail with the upgrade. Apparently protobuf does not guarantee source
> compatibility, though it guarantees wire compatibility between versions.
> Because of this behavior, a version upgrade may cause breakage in known and
> unknown (private?) downstreams.
>
>So to tackle this, we came up with the following proposal in HADOOP-13363.
>
>Luckily, as far as I know, no APIs, either public to users or between
> Hadoop processes, directly use protobuf classes in signatures. (If
> any exist, please let us know.)
>
>Proposal:
>
>
>1. Create artifact(s) which contain the shaded dependencies. All such
> shading/relocation will be done with the known prefix
> **org.apache.hadoop.thirdparty.**.
>2. To start with the protobuf jar (ex: o.a.h.thirdparty:hadoop-shaded-protobuf),
> all **com.google.protobuf** classes will be relocated as
> **org.apache.hadoop.thirdparty.com.google.protobuf**.
>3. Hadoop modules which need protobuf as a dependency will add this
> shaded artifact as a dependency (ex:
> o.a.h.thirdparty:hadoop-shaded-protobuf).
>4. All previous usages of "com.google.protobuf" will be relocated to
> "org.apache.hadoop.thirdparty.com.google.protobuf" in the code and will be
> committed. Please note, this replacement is one-time, directly in source
> code, NOT during compile and package.
>5. Once all usages of "com.google.protobuf" are relocated, hadoop no longer
> cares which version of the original "protobuf-java" is in the dependency
> tree.
>6. Just keep "protobuf-java:2.5.0" in the dependency tree so as not to break
> downstreams. But hadoop will actually be using the latest protobuf
> present in "o.a.h.thirdparty:hadoop-shaded-protobuf".
>
>7. Coming back to the separate repo, the following are the most appropriate
> reasons for keeping the shaded dependency artifact in a separate repo instead
> of a submodule.
>
>   7a. These artifacts need not be built all the time. They need to be
> built only when there is a change in the dependency version or the build
> process.
>   7b. If added as a submodule in the Hadoop repo, maven-shade-plugin:shade
> will execute only in the package phase. That means "mvn compile" or "mvn
> test-compile" would fail, as this artifact would not yet have relocated
> classes; it would have the original classes instead, resulting in compilation
> failure. The workaround, building the thirdparty submodule first and excluding
> the "thirdparty" submodule in other executions, would be a complex process
> compared to keeping it in a separate repo.
>
>   7c. The separate repo will be a subproject of Hadoop, using the same
> HADOOP jira project, with different versioning prefixed with "thirdparty-"
> (ex: thirdparty-1.0.0).
>   7d. The separate repo will have the same release process as Hadoop.
>
>
> HADOOP-13363 (https://issues.apache.org/jira/browse/HADOOP-13363) is an
> umbrella jira tracking the changes for the protobuf upgrade.
>
> PR (https://github.com/apache/hadoop-thirdparty/pull/1) has been raised
> for the separate repo creation in HADOOP-16595
> (https://issues.apache.org/jira/browse/HADOOP-16595).
>
> Please provide your inputs for the proposal and review the PR to
> proceed with the proposal.
>
>
>-Thanks,
> Vinay
>
> On Fri, Sep 27, 2019 at 11:54 AM Vinod Kumar Vavilapalli <
> vino...@apache.org>
> wrote:
>
> > Moving the 

Daily Builds Getting Aborted Due To Timeout

2019-09-27 Thread Ayush Saxena
Hi All,
Just to bring to your notice: the hadoop daily builds are getting aborted due
to a timeout (configured to be 900 minutes).

> Build timed out (after 900 minutes). Marking the build as aborted.
> Build was aborted
> [CHECKSTYLE] Skipping publisher since build result is ABORTED
> [FINDBUGS] Skipping publisher since build result is ABORTED
>
> Recording test results
> No emails were triggered.
> Finished: ABORTED
>
>
Reference :
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1271/

I checked with the infra team; the only resolution suggested was to increase
the configured time of 900 minutes or make the build take less time.

Someone with access to change the config can probably increase the time.
(Probably people in https://whimsy.apache.org/roster/group/hudson-jobadmin
have access)
*Link To Change Configured Time* :
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/configure



-Ayush


[DISCUSS] About creation of Hadoop Thirdparty repository for shaded artifacts

2019-09-27 Thread Vinayakumar B
Hi All,

   I wanted to discuss the separate repo for thirdparty dependencies
which we need to shade and include in Hadoop components' jars.

   Apologies for the big text ahead, but this needs clear explanation!!

   Right now the most needed such dependency is protobuf. The protobuf
dependency was not upgraded from 2.5.0 onwards for fear that downstream builds,
which depend on the transitive protobuf dependency coming from hadoop's jars,
might fail with the upgrade. Apparently protobuf does not guarantee source
compatibility, though it guarantees wire compatibility between versions.
Because of this behavior, a version upgrade may cause breakage in known and
unknown (private?) downstreams.

   So to tackle this, we came up with the following proposal in HADOOP-13363.

   Luckily, as far as I know, no APIs, either public to users or between
Hadoop processes, directly use protobuf classes in signatures. (If
any exist, please let us know.)

   Proposal:
   

   1. Create artifact(s) which contain the shaded dependencies. All such
shading/relocation will be done with the known prefix
**org.apache.hadoop.thirdparty.**.
   2. To start with the protobuf jar (ex: o.a.h.thirdparty:hadoop-shaded-protobuf),
all **com.google.protobuf** classes will be relocated as
**org.apache.hadoop.thirdparty.com.google.protobuf**.
   3. Hadoop modules which need protobuf as a dependency will add this
shaded artifact as a dependency (ex: o.a.h.thirdparty:hadoop-shaded-protobuf).
   4. All previous usages of "com.google.protobuf" will be relocated to
"org.apache.hadoop.thirdparty.com.google.protobuf" in the code and will be
committed; see the sketch after this list. Please note, this replacement is
one-time, directly in source code, NOT during compile and package.
   5. Once all usages of "com.google.protobuf" are relocated, hadoop no longer
cares which version of the original "protobuf-java" is in the dependency tree.
   6. Just keep "protobuf-java:2.5.0" in the dependency tree so as not to break
downstreams. But hadoop will actually be using the latest protobuf
present in "o.a.h.thirdparty:hadoop-shaded-protobuf".

   7. Coming back to the separate repo, the following are the most appropriate
reasons for keeping the shaded dependency artifact in a separate repo instead
of a submodule.

  7a. These artifacts need not be built all the time. They need to be
built only when there is a change in the dependency version or the build
process.
  7b. If added as a submodule in the Hadoop repo, maven-shade-plugin:shade
will execute only in the package phase. That means "mvn compile" or "mvn
test-compile" would fail, as this artifact would not yet have relocated
classes; it would have the original classes instead, resulting in compilation
failure. The workaround, building the thirdparty submodule first and excluding
the "thirdparty" submodule in other executions, would be a complex process
compared to keeping it in a separate repo.

  7c. The separate repo will be a subproject of Hadoop, using the same
HADOOP jira project, with different versioning prefixed with "thirdparty-"
(ex: thirdparty-1.0.0).
  7d. The separate repo will have the same release process as Hadoop.
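
To make step 4 concrete, here is a minimal sketch (illustrative only, not code
from the PR) of what the one-time, in-source relocation means for a Hadoop
class that uses protobuf:

// Before the one-time edit, Hadoop code compiles against whatever
// protobuf-java version sits on the classpath:
//   import com.google.protobuf.Message;
// After the one-time edit, it compiles against the relocated copy inside
// o.a.h.thirdparty:hadoop-shaded-protobuf, so the original protobuf-java
// version on the classpath no longer matters to Hadoop's own code.
import org.apache.hadoop.thirdparty.com.google.protobuf.Message;

class ProtoSizeUtil {
  // Behavior is unchanged; only the package of Message has moved.
  static int serializedSize(Message msg) {
    return msg.getSerializedSize();
  }
}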


HADOOP-13363 (https://issues.apache.org/jira/browse/HADOOP-13363) is an
umbrella jira tracking the changes for the protobuf upgrade.

PR (https://github.com/apache/hadoop-thirdparty/pull/1) has been raised
for the separate repo creation in HADOOP-16595
(https://issues.apache.org/jira/browse/HADOOP-16595).

Please provide your inputs for the proposal and review the PR to
proceed with the proposal.


   -Thanks,
Vinay

On Fri, Sep 27, 2019 at 11:54 AM Vinod Kumar Vavilapalli 
wrote:

> Moving the thread to the dev lists.
>
> Thanks
> +Vinod
>
> > On Sep 23, 2019, at 11:43 PM, Vinayakumar B 
> wrote:
> >
> > Thanks Marton,
> >
> > The newly created 'hadoop-thirdparty' repo is empty right now.
> > Whether to use that repo for the shaded artifact or not will be monitored
> > in the HADOOP-13363 umbrella jira. Please feel free to join the discussion.
> >
> > No existing codebase is being moved out of the hadoop repo, so I think
> > right now we are good to go.
> >
> > -Vinay
> >
> > On Mon, Sep 23, 2019 at 11:38 PM Marton Elek  wrote:
> >
> >>
> >> I am not sure if it's defined when a vote is required.
> >>
> >> https://www.apache.org/foundation/voting.html
> >>
> >> Personally I think it's a big enough change to send a notification to the
> >> dev lists with a 'lazy consensus' closure.
> >>
> >> Marton
> >>
> >> On 2019/09/23 17:46:37, Vinayakumar B  wrote:
> >>> Hi,
> >>>
> >>> As discussed in HADOOP-13363, the protobuf 3.x jar (and maybe more in
> >>> future) will be kept as a shaded artifact in a separate repo, which will
> >>> be referred to as a dependency in hadoop modules. This approach avoids
> >>> shading every submodule during the build.
> >>>
> >>> So the question is: is any VOTE required before asking to create a git
> >>> repo?
> >>>
> >>> On the self-serve platform https://gitbox.apache.org/setup/newrepo.html
> >>> I can see that the requester 

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-27 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/

[Sep 26, 2019 11:06:35 PM] (cliang) HDFS-14785. [SBN read] Change client 
logging to be less aggressive.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/457/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   

Re: Release plans for Hadoop 2.10.0

2019-09-27 Thread Viral Bajaria
Hi Wei-Chiu,

Just got a chance to pull the jstack on a datanode that's showing high
ReadBlockOpAvgTime and I found a few different scenarios:

Lots of DataXceiver threads are in the following state for a while:

   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x7f9ad9953f68> (a
java.util.concurrent.locks.ReentrantLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at
java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:224)
at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at
org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:100)
at
org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
at
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.acquireDatasetLock(FsDatasetImpl.java:3383)
at
org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:255)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:593)
at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145)
at
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:288)
at java.lang.Thread.run(Thread.java:748)

For some reason I can't find any thread that has locked 0x7f9ad9953f68
and so it's hard to make a conclusive statement on why the thread can't
acquire the lock.
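
For context, a minimal sketch of the contention pattern visible in this stack
(assumed shape, not the actual FsDatasetImpl code): every read funnels through
one fair ReentrantLock, so a single slow holder parks all DataXceiver threads.

import java.util.concurrent.locks.ReentrantLock;

class DatasetLockSketch {
  // Fair lock, matching the ReentrantLock$FairSync frame in the jstack output.
  private final ReentrantLock datasetLock = new ReentrantLock(true);

  void readBlock() {
    datasetLock.lock(); // DataXceiver threads park here (LockSupport.park)
    try {
      // construct the BlockSender, touch replica/volume metadata, etc.
    } finally {
      datasetLock.unlock();
    }
  }
}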

On a 2nd attempt, I am able to find a bunch of threads BLOCKED on
MetricsRegistry:

   java.lang.Thread.State: BLOCKED (on object monitor)
at
org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java)
- waiting to lock <0x7f9ad80a2e08> (a
org.apache.hadoop.metrics2.lib.MetricsRegistry)
at
org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:80)
at
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
at
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:183)
at
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:107)
at
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
at
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$14/515379090.run(Unknown
Source)
at java.security.AccessController.doPrivileged(Native Method)
at
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
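
Again as a sketch (assuming the usual shape of Hadoop's metrics classes rather
than quoting them): snapshot() holds the MetricsRegistry instance monitor, so
one slow snapshot serializes every JMX attribute read behind the same lock,
which is why the RMI handler threads above show BLOCKED.

class MetricsRegistrySketch {
  // Holder of the monitor (<0x7f9ad80a2e08> in the trace) while snapshotting.
  synchronized void snapshot() {
    // walk all registered metrics; a slow gauge stalls everyone behind it
  }

  // Every JMX getAttribute() ends up here and waits on the same monitor.
  synchronized void getMetrics() {
    snapshot();
  }
}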

Thanks,
Viral


On Thu, Sep 26, 2019 at 8:18 PM Viral Bajaria 
wrote:

> Hi Wei-Chiu,
>
> I came across HDFS-14476 while searching whether anyone else is seeing the
> same issues as us. I didn't see it merged and 

Re: Does VOTE necessary to create a child repo?

2019-09-27 Thread Vinod Kumar Vavilapalli
Moving the thread to the dev lists.

Thanks
+Vinod

> On Sep 23, 2019, at 11:43 PM, Vinayakumar B  wrote:
> 
> Thanks Marton,
> 
> The newly created 'hadoop-thirdparty' repo is empty right now.
> Whether to use that repo for the shaded artifact or not will be monitored in
> the HADOOP-13363 umbrella jira. Please feel free to join the discussion.
> 
> No existing codebase is being moved out of the hadoop repo, so I think
> right now we are good to go.
> 
> -Vinay
> 
> On Mon, Sep 23, 2019 at 11:38 PM Marton Elek  wrote:
> 
>> 
>> I am not sure if it's defined when a vote is required.
>> 
>> https://www.apache.org/foundation/voting.html
>> 
>> Personally I think it's a big enough change to send a notification to the
>> dev lists with a 'lazy consensus' closure.
>> 
>> Marton
>> 
>> On 2019/09/23 17:46:37, Vinayakumar B  wrote:
>>> Hi,
>>> 
>>> As discussed in HADOOP-13363, the protobuf 3.x jar (and maybe more in
>>> future) will be kept as a shaded artifact in a separate repo, which will be
>>> referred to as a dependency in hadoop modules. This approach avoids shading
>>> every submodule during the build.
>>> 
>>> So the question is: is any VOTE required before asking to create a git repo?
>>> 
>>> On the self-serve platform https://gitbox.apache.org/setup/newrepo.html
>>> I can see that the requester should be a PMC member.
>>> 
>>> Wanted to confirm here first.
>>> 
>>> -Vinay
>>> 
>> 
>> -
>> To unsubscribe, e-mail: private-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: private-h...@hadoop.apache.org
>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[GitHub] [hadoop-thirdparty] vinayakumarb opened a new pull request #1: HADOOP-16595. [pb-upgrade] Create hadoop-thirdparty artifact to have shaded protobuf

2019-09-27 Thread GitBox
vinayakumarb opened a new pull request #1: HADOOP-16595. [pb-upgrade] Create 
hadoop-thirdparty artifact to have shaded protobuf
URL: https://github.com/apache/hadoop-thirdparty/pull/1
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org