[jira] [Created] (HADOOP-14905) Fix javadocs issues in Hadoop HDFS-NFS

2017-09-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HADOOP-14905:
--

 Summary: Fix javadocs issues in Hadoop HDFS-NFS
 Key: HADOOP-14905
 URL: https://issues.apache.org/jira/browse/HADOOP-14905
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Fix the following javadoc issues in Apache Hadoop HDFS-NFS:

{code}
5 warnings
[WARNING] Javadoc Warnings
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:92: warning: no @param for childNum
[WARNING] public static long getDirSize(int childNum) {
[WARNING] ^
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:92: warning: no @return
[WARNING] public static long getDirSize(int childNum) {
[WARNING] ^
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:126: warning: no @param for channel
[WARNING] public static void writeChannel(Channel channel, XDR out, int xid) {
[WARNING] ^
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:126: warning: no @param for out
[WARNING] public static void writeChannel(Channel channel, XDR out, int xid) {
[WARNING] ^
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:126: warning: no @param for xid
[WARNING] public static void writeChannel(Channel channel, XDR out, int xid) {
[WARNING] ^
{code}
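A minimal sketch of the kind of fix these warnings call for. Note the method bodies and the Channel/XDR parameter types are placeholders (Object stand-ins and an invented formula), not the actual Nfs3Utils implementation; only the Javadoc shape matters here.

```java
/** Sketch only: shows the @param/@return tags the warnings ask for. */
public class Nfs3UtilsDocSketch {

  /**
   * Estimate the reported size of a directory.
   *
   * @param childNum number of children in the directory
   * @return the estimated directory size in bytes
   */
  public static long getDirSize(int childNum) {
    return (childNum + 2) * 32L; // placeholder formula, for the sketch only
  }

  /**
   * Write a serialized response to the given channel.
   *
   * @param channel the channel to write the response to (stand-in type here)
   * @param out the serialized XDR response (stand-in type here)
   * @param xid the RPC transaction id the response answers
   */
  public static void writeChannel(Object channel, Object out, int xid) {
    // placeholder body; the real method writes 'out' to 'channel'
  }
}
```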



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14904) Fix javadocs issues in Hadoop HDFS

2017-09-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HADOOP-14904:
--

 Summary: Fix javadocs issues in Hadoop HDFS 
 Key: HADOOP-14904
 URL: https://issues.apache.org/jira/browse/HADOOP-14904
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
Priority: Minor


Fix the following javadoc issues in Hadoop HDFS:

{noformat}
[INFO] Building Apache Hadoop HDFS 3.1.0-SNAPSHOT
{noformat}

{code}
ExcludePrivateAnnotationsStandardDoclet
9 warnings
[WARNING] Javadoc Warnings
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37: warning - Tag @see cannot be used in inline documentation. It can only be used in the following types of documentation: overview, package, class/interface, constructor, field, method.
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37: warning - Tag @see cannot be used in inline documentation. It can only be used in the following types of documentation: overview, package, class/interface, constructor, field, method.
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37: warning - Tag @see : reference not found: FSNamesystem
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37: warning - Tag @see : reference not found: EditLog
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37: warning - Tag @see cannot be used in inline documentation. It can only be used in the following types of documentation: overview, package, class/interface, constructor, field, method.
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37: warning - Tag @see cannot be used in inline documentation. It can only be used in the following types of documentation: overview, package, class/interface, constructor, field, method.
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java:82: warning - Tag @link : reference not found: DfsClientShmManager
[WARNING] /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java:1274: warning - Tag @ link: reference not found: CallerRunsPolicy
{code}
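These warnings boil down to two Javadoc rules: @see may only appear as a block tag (use {@link} for inline references), and every reference must resolve (import it or fully qualify it). A hedged sketch of the pattern, not the actual BlockIdManager source; AtomicLong stands in for the unresolvable FSNamesystem/EditLog references:

```java
/**
 * Sketch only. Inline cross-references use a resolvable
 * {@link java.util.concurrent.atomic.AtomicLong}, never an inline @see;
 * block-level see-also references go in @see tags at the end.
 *
 * @see java.util.concurrent.atomic.AtomicLong
 */
public class BlockIdManagerDocSketch {
  private final java.util.concurrent.atomic.AtomicLong lastId =
      new java.util.concurrent.atomic.AtomicLong(0);

  /** @return the next id in sequence */
  public long nextId() {
    return lastId.incrementAndGet();
  }
}
```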






[Update] Apache Hadoop 2.8.2 Release Status

2017-09-22 Thread Junping Du
Hi folks,
 I would like to give you a quick update on 2.8.2 release status:

- The first release candidate (RC0) was published over the last weekend, but several 
docker container blockers (bugs, documents, etc.) were reported, so we decided to 
cancel the RC0 vote.

- Newly reported release blockers (for docker container support) are YARN-7034 
(just committed), YARN-6623, YARN-6930 and YARN-7230. 
Shane, Miklos and Varun are actively working on these. I appreciate the effort 
here!

- I will kick off a new release candidate (RC1) once these blockers are resolved.

To all committers: branch-2.8.2 is still open for landing blocker/critical issues, 
but for major/minor/trivial issues, please commit to branch-2.8 and 
mark the fix version as 2.8.3.

Thanks, all, for the heads-up. Have a good weekend!


Thanks,

Junping


From: Junping Du 
Sent: Tuesday, September 5, 2017 2:57 PM
To: larry mccay; Steve Loughran
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: Apache Hadoop 2.8.2 Release Plan

I assume the quiet over the holiday means we agreed to move forward without 
taking HADOOP-14439 into 2.8.2.
There is a new release-build (docker-based) issue that could be related to 
HADOOP-14474, where we removed the Oracle Java 7 installer due to a recent download 
address/contract change by Oracle. The build refuses to work and reports a 
JAVA_HOME issue, but hard-coding my local Java home in create-release or the 
Dockerfile doesn't help, so we may need to add the Java 7 installation back 
(whether Oracle JDK 7 or OpenJDK 7).
Filed HADOOP-14842 with more details to track this as a blocker for 2.8.2.

Thanks,

Junping

From: Junping Du 
Sent: Friday, September 1, 2017 12:37 PM
To: larry mccay; Steve Loughran
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: Apache Hadoop 2.8.2 Release Plan

This issue (HADOOP-14439) was off my radar given that it is marked as Minor 
priority. If my understanding is correct, there is a trade-off here between security 
and backward compatibility. IMO, the priority of security is generally higher than 
that of backward compatibility, especially since 2.8.0 is still a non-production release.
I think we should skip this for 2.8.2, provided it doesn't break compatibility 
from 2.7.x. Thoughts?

Thanks,

Junping

From: larry mccay 
Sent: Friday, September 1, 2017 10:55 AM
To: Steve Loughran
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: Apache Hadoop 2.8.2 Release Plan

If we do "fix" this in 2.8.2 we should seriously consider not doing so in
3.0.
This is a very poor practice.

I can see an argument for backward compatibility in 2.8.x line though.

On Fri, Sep 1, 2017 at 1:41 PM, Steve Loughran 
wrote:

> One thing we need to consider is
>
> HADOOP-14439: regression: secret stripping from S3x URIs breaks some
> downstream code
>
> Hadoop 2.8 has a best-effort attempt to strip out secrets from the
> toString() value of an s3a or s3n path where someone has embedded them in
> the URI; this has caused problems in some uses, specifically: when people
> use secrets this way (bad) and assume that you can round trip paths to
> string and back
>
> Should we fix this? If so, Hadoop 2.8.2 is the time to do it
>
>
> > On 1 Sep 2017, at 11:14, Junping Du  wrote:
> >
> > HADOOP-14814 got committed and HADOOP-9747 got pushed out to 2.8.3, so we
> are clean on blocker/critical issues now.
> > I finished going through the JACC report, and no more incompatible
> public API changes were found between 2.8.2 and 2.7.4. I also checked the commit
> history and fixed 10+ commits which were missing from branch-2.8.2 for some
> reason. So the current branch-2.8.2 should be good to go to the RC stage, and
> I will kick off our first RC tomorrow.
> > In the meanwhile, please don't land any commits on branch-2.8.2 from
> now on. If an issue really is a blocker, please ping me on the JIRA
> before committing anything. branch-2.8 is still open for landing. Thanks for
> your cooperation!
> >
> >
> > Thanks,
> >
> > Junping
> >
> > 
> > From: Junping Du 
> > Sent: Wednesday, August 30, 2017 12:35 AM
> > To: Brahma Reddy Battula; common-dev@hadoop.apache.org;
> hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> > Subject: Re: Apache Hadoop 2.8.2 Release Plan
> >
> > Thanks, Brahma, for commenting on this thread. To be clear, I always update
> the branch version just before kicking off an RC.
> >
> > For the 2.8.2 release, I don't plan to involve Bigtop or other
> third-party test tools. As always, we will rely on test/verify efforts from the
> community, especially from large deployed production clusters - as far as I
> know, there are already several co

[jira] [Reopened] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HADOOP-14901:
-

Patch for branch-2

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread-safe.






[jira] [Reopened] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-14655:
--

Reverted this JIRA from trunk and branch-3.0 per Marton's instructions.

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).






2017-09-22 Hadoop 3 release status update

2017-09-22 Thread Andrew Wang
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3+release+status+updates

2017-09-22

We've had some late-breaking blockers related to Docker support that are
delaying the release. We're on a day-by-day slip at this point.



Highlights:

   - I did a successful test create-release earlier this week.

Red flags:

   - The Docker work resulted in some last-minute blockers

Previously tracked beta1 blockers that have been resolved or dropped:

   - HADOOP-14771 (hadoop-client does not include hadoop-yarn-client): Dropped
   this from the blocker list as it's mainly for documentation purposes.
   - HDFS-12247 (Rename AddECPolicyResponse to AddErasureCodingPolicyResponse)
   was committed.

beta1 blockers:

   - YARN-6623 (Add support to turn off launching privileged containers in the
   container-executor): This is a newly escalated blocker related to the
   Docker work in YARN. A patch is up, but we're still waiting on a commit.
   - HADOOP-14897 (Loosen compatibility guidelines for native dependencies):
   Raised by Chris Douglas; Daniel will post a patch soon.

beta1 features:

   - Erasure coding
      - Resolved the last must-do for beta1!
      - People are looking more at the flaky tests and nice-to-haves.
      - Eddy continues to make improvements to the block reconstruction codepaths.
   - Addressing incompatible changes (YARN-6142 and HDFS-11096)
      - Ray has gone through almost all the YARN protos and thinks we're okay
      to move forward.
      - I think we'll move forward without this committed, given that Sean
      has run it successfully.
   - Classpath isolation (HADOOP-11656)
      - HADOOP-13917 (Ensure nightly builds run the integration tests for the
      shaded client): Sean wants to get this in before beta1 if there's time;
      it's already catching issues. Relies on YETUS-543, which I reviewed;
      waiting on Allen.
      - HADOOP-14771 might be squeezed in if there's time.
   - Compat guide (HADOOP-13714)
      - HADOOP-14897: the above-mentioned blocker filed by Chris Douglas.
   - TSv2 alpha 2
   - This was merged, no problems thus far.

GA features:

   - Resource profiles (Wangda Tan)
      - The merge vote was sent out. Since branch-3.0 has been cut, this can be
      merged to trunk (3.1.0) and then backported once we've completed testing.
   - HDFS router-based federation (Chris Douglas)
      - This is like YARN federation: very separate, it doesn't add new APIs,
      and it runs in production at MSFT.
      - If it passes Cloudera internal integration testing, I'm fine
      putting this in for GA.
   - API-based scheduler configuration (Jonathan Hung)
      - Jonathan mentioned that his main goal is to get this in for 2.9.0,
      which seems likely to go out after 3.0.0 GA since there hasn't been any
      serious release planning yet. Jonathan said that delaying this until
      3.1.0 is fine.
   - YARN native services
      - Still not 100% clear when this will land.

[jira] [Created] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-14903:
---

 Summary: Add json-smart explicitly to pom.xml
 Key: HADOOP-14903
 URL: https://issues.apache.org/jira/browse/HADOOP-14903
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Ray Chiang
Assignee: Ray Chiang


With the library update in HADOOP-14799, Maven knows how to pull in 
net.minidev:json-smart for tests, but not for packaging. It needs to be 
added to the main project pom in order to avoid this warning:

{noformat}
[WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no dependency information available
{noformat}

This is pulled in from a few places:

{noformat}
[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
[INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
[INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
[INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile

[INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
[INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
[INFO] |  |+- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
[INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
{noformat}
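A hedged sketch of the kind of pom change described. The version (2.3) is taken from the transitive version in the tree above, and the placement in dependencyManagement is an assumption; the actual HADOOP-14903 patch may differ.

```xml
<!-- In the root project pom (assumed location): pin json-smart so packaging
     resolves the concrete 2.3 release instead of the missing 2.3-SNAPSHOT. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>net.minidev</groupId>
      <artifactId>json-smart</artifactId>
      <version>2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```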






[jira] [Created] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2017-09-22 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14902:
---

 Summary: LoadGenerator#genFile write close timing is incorrectly 
calculated
 Key: HADOOP-14902
 URL: https://issues.apache.org/jira/browse/HADOOP-14902
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.4.0
Reporter: Jason Lowe


LoadGenerator#genFile's write close timing code looks like the following:
{code}
startTime = Time.now();
executionTime[WRITE_CLOSE] += (Time.now() - startTime);
{code}

That code will produce a zero (or near-zero) write-close timing, since it isn't 
actually closing the file between the two timestamp lookups.
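The fix is simply to perform the close between the two timestamp reads. A self-contained sketch of the corrected pattern (System.currentTimeMillis stands in for Hadoop's Time.now, and the array/index names mirror the snippet above):

```java
import java.io.Closeable;
import java.io.IOException;

public class WriteCloseTimingSketch {
  static final int WRITE_CLOSE = 0;
  static final long[] executionTime = new long[1];

  /** Times the close itself, unlike the original which timed nothing. */
  static void timeClose(Closeable out) throws IOException {
    long startTime = System.currentTimeMillis(); // Time.now() in Hadoop
    out.close();                                 // the operation being measured
    executionTime[WRITE_CLOSE] += (System.currentTimeMillis() - startTime);
  }
}
```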







[jira] [Resolved] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-14799.
--
Resolution: Fixed

Let's re-resolve and track the follow-on work in another JIRA. Thanks Ray, 
Steve.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, 
> HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)






[jira] [Created] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14901:
---

 Summary: ReuseObjectMapper in Hadoop Common
 Key: HADOOP-14901
 URL: https://issues.apache.org/jira/browse/HADOOP-14901
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
Priority: Minor


It is recommended to reuse ObjectMapper, if possible, for better performance. 
We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
some places: they are straightforward and thread-safe.
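A dependency-free sketch of the reuse pattern being described. Jackson itself is not assumed on the classpath here: CostlyCodec is a stand-in for ObjectMapper (expensive to construct, safe to share once configured), and the point is only the shape: build once, hold in a static final field, reuse everywhere.

```java
public final class CodecHolder {
  // Demo-only counter showing that construction happens exactly once.
  static int constructions = 0;

  /** Stand-in for ObjectMapper: costly to build, fine to share. */
  static final class CostlyCodec {
    CostlyCodec() { constructions++; }               // pretend this is expensive
    String read(String json) { return json.trim(); } // toy "parse"
  }

  // The pattern: construct once, reuse for every call instead of
  // building a new instance per request.
  private static final CostlyCodec SHARED = new CostlyCodec();

  public static String parse(String json) { return SHARED.read(json); }
}
```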







[jira] [Created] (HADOOP-14900) Errors in trunk with early versions of Java 8

2017-09-22 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-14900:
---

 Summary: Errors in trunk with early versions of Java 8
 Key: HADOOP-14900
 URL: https://issues.apache.org/jira/browse/HADOOP-14900
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-beta1
Reporter: Ray Chiang
Assignee: Ray Chiang


Just documenting this issue in case other developers run into it. Compiling 
trunk with JDK 1.8u05 gives the following errors:

{noformat}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile (default-testCompile) on project hadoop-aws: Compilation failure: Compilation failure:
[ERROR] /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[45,5] reference to intercept is ambiguous
[ERROR]   both method intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable) in org.apache.hadoop.test.LambdaTestUtils and method intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable) in org.apache.hadoop.test.LambdaTestUtils match
[ERROR] /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[69,5] reference to intercept is ambiguous
[ERROR]   both method intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable) in org.apache.hadoop.test.LambdaTestUtils and method intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable) in org.apache.hadoop.test.LambdaTestUtils match
[ERROR] /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[94,5] reference to intercept is ambiguous
[ERROR]   both method intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable) in org.apache.hadoop.test.LambdaTestUtils and method intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable) in org.apache.hadoop.test.LambdaTestUtils match
[ERROR] /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[120,5] reference to intercept is ambiguous
[ERROR]   both method intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable) in org.apache.hadoop.test.LambdaTestUtils and method intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable) in org.apache.hadoop.test.LambdaTestUtils match
{noformat}

Based on my testing, JDK 1.8u92 doesn't produce this error.

I don't think this issue needs to be fixed in the code, but I'm documenting it in 
JIRA.
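For reference, a self-contained illustration (not the actual Hadoop test code) of the overload shape involved, and the usual workaround when a compiler does report a lambda-overload ambiguity: give the lambda an explicit target type with a cast. Early 1.8u05 javac mishandled overload resolution for lambdas in cases like this; later JDK 8 updates fixed the inference.

```java
import java.util.concurrent.Callable;

public class InterceptAmbiguitySketch {
  /** Mirrors LambdaTestUtils.VoidCallable: a callable with no result. */
  interface VoidCallable { void call() throws Exception; }

  static String intercept(Callable<String> c) { return "callable"; }
  static String intercept(VoidCallable c) { return "void"; }

  static String pick() {
    // An explicit cast forces overload selection on any javac version,
    // which is the standard workaround for this class of error.
    return intercept((VoidCallable) () -> { });
  }
}
```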






Re: [DISCUSS] moving to Apache Yetus Audience Annotations

2017-09-22 Thread Andrew Wang
Yea, unfortunately I'd say backburner it. This would have been perfect
during alpha.

On Fri, Sep 22, 2017 at 11:14 AM, Sean Busbey  wrote:

> I'd refer to it as an incompatible change; we expressly label the
> annotations as IA.Public.
>
> If you think it's too late to get in for 3.0, I can make a jira and put it
> on the back burner for when trunk goes to 4.0?
>
> On Fri, Sep 22, 2017 at 12:49 PM, Andrew Wang 
> wrote:
>
>> Is this itself an incompatible change? I imagine the bytecode will be
>> different.
>>
>> I think we're too late to do this for beta1 given that I want to cut an
>> RC0 today.
>>
>> On Fri, Sep 22, 2017 at 7:03 AM, Sean Busbey  wrote:
>>
>>> When Apache Yetus formed, it started with several key pieces of Hadoop
>>> that
>>> looked reusable. In addition to our contribution testing infra, the
>>> project
>>> also stood up a version of our audience annotations for delineating the
>>> public facing API[1].
>>>
>>> I recently got the Apache HBase community onto the Yetus version of those
>>> annotations rather than their internal fork of the Hadoop ones[2]. It
>>> wasn't pretty, mostly a lot of blind sed followed by spot checking and
>>> reliance on automated tests.
>>>
>>> What do folks think about making the jump ourselves? I'd be happy to work
>>> through things, either as one unreviewable monster or per-module
>>> transitions (though a piece-meal approach might complicate our javadoc
>>> situation).
>>>
>>>
>>> [1]: http://yetus.apache.org/documentation/0.5.0/interface-classi
>>> fication/
>>> [2]: https://issues.apache.org/jira/browse/HBASE-17823
>>>
>>> --
>>> busbey
>>>
>>
>>
>
>
> --
> busbey
>


Re: [DISCUSS] moving to Apache Yetus Audience Annotations

2017-09-22 Thread Sean Busbey
I'd refer to it as an incompatible change; we expressly label the
annotations as IA.Public.

If you think it's too late to get in for 3.0, I can make a jira and put it
on the back burner for when trunk goes to 4.0?

On Fri, Sep 22, 2017 at 12:49 PM, Andrew Wang 
wrote:

> Is this itself an incompatible change? I imagine the bytecode will be
> different.
>
> I think we're too late to do this for beta1 given that I want to cut an
> RC0 today.
>
> On Fri, Sep 22, 2017 at 7:03 AM, Sean Busbey  wrote:
>
>> When Apache Yetus formed, it started with several key pieces of Hadoop
>> that
>> looked reusable. In addition to our contribution testing infra, the
>> project
>> also stood up a version of our audience annotations for delineating the
>> public facing API[1].
>>
>> I recently got the Apache HBase community onto the Yetus version of those
>> annotations rather than their internal fork of the Hadoop ones[2]. It
>> wasn't pretty, mostly a lot of blind sed followed by spot checking and
>> reliance on automated tests.
>>
>> What do folks think about making the jump ourselves? I'd be happy to work
>> through things, either as one unreviewable monster or per-module
>> transitions (though a piece-meal approach might complicate our javadoc
>> situation).
>>
>>
>> [1]: http://yetus.apache.org/documentation/0.5.0/interface-classi
>> fication/
>> [2]: https://issues.apache.org/jira/browse/HBASE-17823
>>
>> --
>> busbey
>>
>
>


-- 
busbey


Re: [DISCUSS] moving to Apache Yetus Audience Annotations

2017-09-22 Thread Andrew Wang
Is this itself an incompatible change? I imagine the bytecode will be
different.

I think we're too late to do this for beta1 given that I want to cut an RC0
today.

On Fri, Sep 22, 2017 at 7:03 AM, Sean Busbey  wrote:

> When Apache Yetus formed, it started with several key pieces of Hadoop that
> looked reusable. In addition to our contribution testing infra, the project
> also stood up a version of our audience annotations for delineating the
> public facing API[1].
>
> I recently got the Apache HBase community onto the Yetus version of those
> annotations rather than their internal fork of the Hadoop ones[2]. It
> wasn't pretty, mostly a lot of blind sed followed by spot checking and
> reliance on automated tests.
>
> What do folks think about making the jump ourselves? I'd be happy to work
> through things, either as one unreviewable monster or per-module
> transitions (though a piece-meal approach might complicate our javadoc
> situation).
>
>
> [1]: http://yetus.apache.org/documentation/0.5.0/interface-classification/
> [2]: https://issues.apache.org/jira/browse/HBASE-17823
>
> --
> busbey
>


[DISCUSS] moving to Apache Yetus Audience Annotations

2017-09-22 Thread Sean Busbey
When Apache Yetus formed, it started with several key pieces of Hadoop that
looked reusable. In addition to our contribution testing infra, the project
also stood up a version of our audience annotations for delineating the
public facing API[1].

I recently got the Apache HBase community onto the Yetus version of those
annotations rather than their internal fork of the Hadoop ones[2]. It
wasn't pretty, mostly a lot of blind sed followed by spot checking and
reliance on automated tests.

What do folks think about making the jump ourselves? I'd be happy to work
through things, either as one unreviewable monster or per-module
transitions (though a piece-meal approach might complicate our javadoc
situation).


[1]: http://yetus.apache.org/documentation/0.5.0/interface-classification/
[2]: https://issues.apache.org/jira/browse/HBASE-17823
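For a sense of what the "blind sed" pass looks like: a rough sketch below. The Yetus package names follow the interface-classification docs linked at [1]; treat the exact expressions as assumptions and spot-check the resulting diff, as described above.

```shell
# Rewrite the Hadoop annotation packages to the Apache Yetus ones across a
# source tree. Run from the repository root; review the diff afterwards.
grep -rl --include='*.java' 'org\.apache\.hadoop\.classification' . \
  | xargs -r sed -i \
      -e 's/org\.apache\.hadoop\.classification\.InterfaceAudience/org.apache.yetus.audience.InterfaceAudience/g' \
      -e 's/org\.apache\.hadoop\.classification\.InterfaceStability/org.apache.yetus.audience.InterfaceStability/g'
```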

-- 
busbey


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/535/

[Sep 21, 2017 9:01:16 PM] (junping_du) YARN-7034. DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime
[Sep 21, 2017 10:41:34 PM] (jlowe) YARN-4266. Allow users to enter containers as UID:GID pair instead of by
[Sep 22, 2017 2:37:04 AM] (szetszwo) HDFS-12507. StripedBlockUtil.java:694: warning - Tag @link: reference
[Sep 22, 2017 4:27:59 AM] (aajisaka) MAPREDUCE-6947. Moving logging APIs over to slf4j in
[Sep 22, 2017 6:07:59 AM] (aajisaka) MAPREDUCE-6966. DistSum should use Time.monotonicNow for measuring




-1 overall


The following subsystems voted -1:
docker


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
docker


Powered by Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (HADOOP-14899) Restrict Access to set stickybit operation when authorization is enabled in WASB

2017-09-22 Thread Kannapiran Srinivasan (JIRA)
Kannapiran Srinivasan created HADOOP-14899:
--

 Summary: Restrict Access to set stickybit operation when 
authorization is enabled in WASB
 Key: HADOOP-14899
 URL: https://issues.apache.org/jira/browse/HADOOP-14899
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/azure
Reporter: Kannapiran Srinivasan


For authorization-enabled WASB clusters, we need to restrict setting 
permissions on files or folders to the owner or a list of privileged users.






Re: [DISCUSS] official docker image(s) for hadoop

2017-09-22 Thread Marton, Elek

Thanks for all the feedback.

I created an issue:
https://issues.apache.org/jira/browse/HADOOP-14898

Let's continue the discussion there.

Thanks,
Marton

On 09/08/2017 02:45 PM, Marton, Elek wrote:


TL;DR: I propose to create official hadoop images and upload them to the 
dockerhub.


GOAL/SCOPE: I would like to improve the existing documentation with 
easy-to-use docker-based recipes to start hadoop clusters with various 
configurations.


The images could also be used to test experimental features. For example, 
ozone could be tested easily with this compose file and configuration:


https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6

Or even the configuration could be included in the compose file:

https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml 



I would like to create separate example compose files for federation, 
ha, metrics usage, etc. to make it easier to try out and understand the 
features.
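As a hedged illustration only, a minimal compose file in this spirit might look like the sketch below. The image name follows the flokkr-style images mentioned later in this mail, not any official Apache artifact (publishing one is exactly what this thread proposes), and the commands and port are assumptions to verify against the linked gist.

```yaml
# Sketch only: image, commands, and port mapping are assumptions.
version: "3"
services:
  namenode:
    image: flokkr/hadoop:2.8.0
    command: ["hdfs", "namenode"]
    ports:
      - "50070:50070"   # NameNode web UI port in Hadoop 2.x
  datanode:
    image: flokkr/hadoop:2.8.0
    command: ["hdfs", "datanode"]
```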


CONTEXT: There is an existing Jira: 
https://issues.apache.org/jira/browse/HADOOP-13397
But it's about a tool to generate production-quality docker images 
(multiple types, in a flexible way). If there are no objections, I will create a 
separate issue to create simplified docker images for rapid prototyping 
and investigating new features, and register the branch on dockerhub 
to create the images automatically.


MY BACKGROUND: I have been working with docker-based hadoop/spark clusters 
for quite a while and run them successfully in different environments 
(kubernetes, docker-swarm, nomad-based scheduling, etc.). My work is 
available here: https://github.com/flokkr but those images handle 
more complex use cases (eg. instrumenting java processes with btrace, or 
reading/reloading configuration from consul).
  And IMHO it's better for the official hadoop documentation to suggest 
official apache docker images rather than external ones (which could 
change).


Please let me know if you have any comments.

Marton




[jira] [Created] (HADOOP-14898) Create official Docker images for development and testing features

2017-09-22 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-14898:
-

 Summary: Create official Docker images for development and testing 
features 
 Key: HADOOP-14898
 URL: https://issues.apache.org/jira/browse/HADOOP-14898
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Elek, Marton
Assignee: Elek, Marton


This is the original mail from the mailing list:

{code}
TL;DR: I propose to create official hadoop images and upload them to the 
Docker Hub.

GOAL/SCOPE: I would like to improve the existing documentation with easy-to-use 
docker based recipes to start hadoop clusters with various configurations.

The images could also be used to test experimental features. For example, Ozone 
could be tested easily with this compose file and configuration:

https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6

Or the configuration could even be included in the compose file:

https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml

I would like to create separate example compose files for federation, HA, 
metrics usage, etc. to make it easier to try out and understand the features.

CONTEXT: There is an existing Jira: 
https://issues.apache.org/jira/browse/HADOOP-13397
But it's about a tool to generate production-quality docker images (multiple 
types, in a flexible way). If there are no objections, I will create a separate 
issue to create simplified docker images for rapid prototyping and 
investigating new features, and register the branch on the Docker Hub to create 
the images automatically.

MY BACKGROUND: I have been working with docker based hadoop/spark clusters for 
quite a while and have run them successfully in different environments 
(kubernetes, docker-swarm, nomad-based scheduling, etc.). My work is available 
here: https://github.com/flokkr but those images handle more complex use cases 
(eg. instrumenting java processes with btrace, or reading/reloading 
configuration from consul).
 And IMHO it's better for the official hadoop documentation to suggest official 
apache docker images rather than external ones (which could change).
{code}

The following list enumerates the key decision points regarding docker image 
creation:

A. automated dockerhub build / jenkins build

Docker images could be built on the Docker Hub (a branch pattern and the 
location of the Dockerfiles should be defined for a github repository), or they 
could be built on a CI server and pushed.

The second option is more flexible (it's easier to create a matrix build, for 
example).
The first option has the advantage that the image gets an additional flag on 
the Docker Hub indicating that the build is automated (and built from source by 
the Docker Hub).

The decision is easy, as the ASF supports the first approach (see 
https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096)

B. source: binary distribution or source build

The second question is how to create the docker image. One option is to build 
the software on the fly during the creation of the docker image; the other is 
to use the binary releases.

I suggest using the second approach, as:

1. In that case hadoop:2.7.3 could contain exactly the same hadoop 
distribution as the downloadable one.

2. We don't need to add development tools to the image, so the image can be 
smaller (which is important, as the goal of this image is getting started as 
fast as possible).

3. The Docker definition will be simpler (and easier to maintain).

This approach is also the usual one in other projects (I checked Apache 
Zeppelin and Apache Nutch).
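
As a sketch, a binary-release based Dockerfile could be as small as the 
following. The base image, the mirror URL and the paths are assumptions for 
illustration only, not a tested definition:

```dockerfile
FROM openjdk:8-jdk-slim
ENV HADOOP_VERSION=2.7.3
# Download the official binary release instead of building from source,
# so the image content matches the downloadable distribution exactly.
RUN apt-get update && apt-get install -y curl \
 && curl -fSL "https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz" \
    | tar -xz -C /opt \
 && ln -s /opt/hadoop-${HADOOP_VERSION} /opt/hadoop
ENV HADOOP_HOME=/opt/hadoop
ENV PATH=$PATH:/opt/hadoop/bin
```

No build toolchain (maven, protoc, etc.) ends up in the image, which keeps it 
small and the definition easy to maintain.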

C. branch usage

Another question is the location of the Dockerfile. It could live on the 
official source-code branches (branch-2, trunk, etc.), or we can create 
separate branches for the Docker Hub (eg. docker/2.7, docker/2.8, docker/3.0).

With the first approach it's easier to find the docker images, but it's less 
flexible. For example, if we had a Dockerfile on the source code branch, it 
would have to be used for every release (for example, the Dockerfile from the 
tag release-3.0.0 would have to be used for the 3.0 hadoop docker image). In 
that case the release process becomes much harder: in case of a Dockerfile 
error (which could be tested on the Docker Hub only after the tagging), a new 
release would have to be cut after fixing the Dockerfile.

Another problem is that with tags it's not possible to improve the 
Dockerfiles. I can imagine that we would like to improve, for example, the 
hadoop:2.7 images (say, by adding smarter startup scripts) while using 
exactly the same hadoop 2.7 distribution.

Finally, with the tag based approach we can't create images for the older 
releases (2.8.1, for example).

So I suggest creating separate branches for the Dockerfiles.

D. Versions

We can create a separate branch for every version (2.7.1/2.7.2/2.7.3) or just 
for the main ver