[jira] [Updated] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-1381:

Release Note: Made sync interval of sequencefiles configurable and raised 
default from 2000 bytes to 100 kilobytes, to optimize for large files.  (was: 
Made sync interval of sequencefiles configurable and raised default from 100 
bytes to 100 kilobytes, to optimize for large files.)

> The distance between sync blocks in SequenceFiles should be configurable 
> rather than hard coded to 2000 bytes
> -
>
> Key: HADOOP-1381
> URL: https://issues.apache.org/jira/browse/HADOOP-1381
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.0.0-alpha
>Reporter: Owen O'Malley
>Assignee: Harsh J
> Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
> HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
> HADOOP-1381.r5.diff
>
>
> Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much 
> better if it was configurable with a much higher default (1mb or so?).
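A minimal sketch of how a writer could raise the interval under this change; the property name {{io.seqfile.sync.interval}} is an assumption here, so check the committed patch for the exact key:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SyncIntervalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed property name from this patch; after this change the default
    // is 100 KB rather than the old hard-coded 2000 bytes.
    conf.setInt("io.seqfile.sync.interval", 1024 * 1024); // ~1 MB between sync markers

    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(new Path("/tmp/data.seq")),
        SequenceFile.Writer.keyClass(LongWritable.class),
        SequenceFile.Writer.valueClass(Text.class));
    try {
      writer.append(new LongWritable(1L), new Text("value"));
    } finally {
      writer.close();
    }
  }
}
{code}

Fewer sync markers mean less overhead per byte written, at the cost of coarser split and recovery points for readers.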






[jira] [Updated] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-1381:

Target Version/s:   (was: )
  Status: Patch Available  (was: Open)

> The distance between sync blocks in SequenceFiles should be configurable 
> rather than hard coded to 2000 bytes
> -
>
> Key: HADOOP-1381
> URL: https://issues.apache.org/jira/browse/HADOOP-1381
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.0.0-alpha
>Reporter: Owen O'Malley
>Assignee: Harsh J
> Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
> HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
> HADOOP-1381.r5.diff
>
>
> Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much 
> better if it was configurable with a much higher default (1mb or so?).






[jira] [Commented] (HADOOP-13747) Use LongAdder for more efficient metrics tracking

2016-10-26 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610649#comment-15610649
 ] 

Erik Krogen commented on HADOOP-13747:
--

[~zhz] good point that this implementation will lose metrics from Thread A that 
were created between the previous metric aggregation and the time Thread A 
dies. One question is whether we want to replace the current {{MutableRates}} 
completely with my new implementation, or keep the old one as well. Keeping 
both would give a solution for shorter-lived threads as well as one for 
long-running threads with high contention, and we could explain the differences 
in the Javadoc; at this time, though, the original {{MutableRates}} would go 
completely unused, so it would be a bit of code clutter.

[~andrew.wang], any thoughts on the patch or on this matter?

> Use LongAdder for more efficient metrics tracking
> -
>
> Key: HADOOP-13747
> URL: https://issues.apache.org/jira/browse/HADOOP-13747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
> Attachments: HADOOP-13747.patch, benchmark_results
>
>
> Currently many metrics, including {{RpcMetrics}} and {{RpcDetailedMetrics}}, 
> use a synchronized counter to be updated by all handler threads (multiple 
> hundreds in large production clusters). As [~andrew.wang] suggested, it'd be 
> more efficient to use the [LongAdder | 
> http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?view=co]
>  library, which dynamically creates intermediate-result variables.
> Assigning to [~xkrogen] who has already done some investigation on this.
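As a hedged illustration of the idea (not the attached patch), compare a lock-based counter with {{java.util.concurrent.atomic.LongAdder}}, which stripes updates across internal cells so that hundreds of handler threads rarely contend:

{code}
import java.util.concurrent.atomic.LongAdder;

public class CounterComparison {
  // Contended: every handler thread serializes on the same lock.
  private long lockedCount;
  public synchronized void incLocked() { lockedCount++; }

  // Striped: threads update separate internal cells and almost never contend.
  private final LongAdder addedCount = new LongAdder();
  public void incAdded() { addedCount.increment(); }

  // sum() folds the cells together; reads are rare relative to updates.
  public long snapshot() { return addedCount.sum(); }
}
{code}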






[jira] [Resolved] (HADOOP-12216) Parse 'LogLevel' commandline using cli Options.

2016-10-26 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt resolved HADOOP-12216.
---
Resolution: Not A Problem

Not required to be fixed. Will reopen if required.

> Parse 'LogLevel' commandline using cli Options.
> ---
>
> Key: HADOOP-12216
> URL: https://issues.apache.org/jira/browse/HADOOP-12216
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
>







[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610403#comment-15610403
 ] 

Hadoop QA commented on HADOOP-12718:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835448/HADOOP-12718.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fd4f4068fdf3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 22ff0ef |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10904/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10904/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch

[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9

2016-10-26 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610338#comment-15610338
 ] 

Robert Kanter commented on HADOOP-10075:


The test failures are unrelated.  [~raviprak], how did your testing go?

> Update jetty dependency to version 9
> 
>
> Key: HADOOP-10075
> URL: https://issues.apache.org/jira/browse/HADOOP-10075
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Robert Rati
>Assignee: Robert Kanter
>Priority: Critical
> Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, 
> HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, 
> HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, 
> HADOOP-10075.010.patch, HADOOP-10075.011.patch, HADOOP-10075.patch
>
>
> Jetty6 is no longer maintained.  Update the dependency to jetty9.






[jira] [Updated] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-10-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12718:

Attachment: HADOOP-12718.006.patch

Patch 006:
* Move the canRead() check (and the ACE it throws) from 
{{RawLocalFileSystem#listStatus}} to {{FileUtil#list}}, which already throws 
IOE; see the sketch below
* All current callers of {{FileUtil#list}} can handle the change
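A hedged sketch of the idea; the exact exception type and message follow the attached patch, not this snippet:

{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class FileUtilSketch {
  public static String[] list(File dir) throws IOException {
    if (!dir.canRead()) {
      // Surfaces "Permission denied" instead of the misleading
      // "No such file or directory" that fs -put reports today.
      throw new AccessDeniedException(dir.toString(), null, "Permission denied");
    }
    String[] fileNames = dir.list();
    if (fileNames == null) {
      throw new IOException("Invalid directory or I/O error occurred for dir: " + dir);
    }
    return fileNames;
  }
}
{code}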

> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, HADOOP-12718.005.patch, 
> HADOOP-12718.006.patch, TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}






[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-26 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610298#comment-15610298
 ] 

Akira Ajisaka commented on HADOOP-13514:


Thanks [~jojochuang] for the review & commit!

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-10-26 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15610185#comment-15610185
 ] 

Lei (Eddy) Xu commented on HADOOP-13651:


Hi, [~liuml07] and [~fabbri]

[~fabbri] and I had an offline discussion today; the following are my notes:

* We agree that atomicity of creation / deletion / rename (between the metadata 
store and S3) is difficult to achieve, as S3 is always visible to some of the 
clients. We need more thought on these issues. My preference is to ensure 
namespace consistency first, but there are other understandable concerns. 
* For {{rename}}, we should not expect that {{rename(recursive=true)}} can be 
atomic. 
* {{LocalMetadataStore}} should not use {{LRU/MRU}}, which cannot provide 
consistency because the metadata will be evicted. 
* The current test suites (unit tests and integration tests) cannot reliably 
test eventual consistency. New test cases are necessary before merging this 
feature branch. 
* This patch is large, and much of the follow-on work depends on it; for 
instance, my CLI patch (HADOOP-13650) and the DynamoDB metadata store 
(HADOOP-13449). 

As this is a feature branch, I'd give a pending +1 once these concerns are 
appropriately documented (i.e., by adding {{TODO}}s). Thanks.



> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.
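For readers new to the branch, a rough sketch of the shape of the integration; the names below ({{MetadataStore}}, {{PathMetadata}}, {{s3GetFileStatus}}) are simplifications for illustration, not the actual HADOOP-13345 API:

{code}
// All names here are hypothetical simplifications of the S3Guard interfaces.
interface MetadataStore {
  PathMetadata get(Path path) throws IOException;
}
interface PathMetadata {
  FileStatus getFileStatus();
}

// Inside S3AFileSystem (sketch): consult the store first, fall back to S3.
public FileStatus getFileStatus(Path path) throws IOException {
  if (metadataStore != null) {                    // null when S3Guard is off,
    PathMetadata meta = metadataStore.get(path);  // keeping overhead minimal
    if (meta != null) {
      return meta.getFileStatus();                // consistent answer from the store
    }
  }
  return s3GetFileStatus(path);                   // hypothetical direct-to-S3 lookup
}
{code}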






[jira] [Commented] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609760#comment-15609760
 ] 

Hudson commented on HADOOP-8299:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10692 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10692/])
HADOOP-8299. ViewFileSystem link slash mount point crashes with 
IndexOutOfBoundsException (wang: rev 22ff0eff4d58ac0beda7a5a3ae0e5d108da14f7f)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-8299.01.patch, HADOOP-8299.02.patch
>
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mountable config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.
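For context, a typical client mount table expressed programmatically; the {{fs.viewfs.mounttable.<name>.link.<path>}} pattern is standard viewfs configuration, while the slash-mount behavior itself is what this jira and HADOOP-8298 discuss:

{code}
import org.apache.hadoop.conf.Configuration;

public class ViewFsMountExample {
  public static Configuration clientConf() {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "viewfs://clusterX/");
    // Each top-level directory is an explicit, non-overlapping mount today;
    // adding a new one means updating every client's configuration.
    conf.set("fs.viewfs.mounttable.clusterX.link./user", "hdfs://nn1/user");
    conf.set("fs.viewfs.mounttable.clusterX.link./tmp", "hdfs://nn1/tmp");
    // A slash mount (or "default NN") would let nn1 serve any path that is
    // not explicitly linked, avoiding the config churn described above.
    return conf;
  }
}
{code}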






[jira] [Commented] (HADOOP-13764) WASB test runs leak storage containers.

2016-10-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609741#comment-15609741
 ] 

Chris Nauroth commented on HADOOP-13764:


Thank you to [~ste...@apache.org] for spotting the problem.  The allocation of 
the containers is done in {{AzureBlobStorageTestAccount}}, but they are never 
cleaned up.  Let's explore cleaning them up automatically, or possibly just 
reusing the same container each time if that's feasible.
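A minimal sketch of the automatic-cleanup option using the Azure storage SDK; how {{AzureBlobStorageTestAccount}} would expose its {{CloudBlobContainer}} is an assumption here:

{code}
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import org.junit.After;

public abstract class WasbTestCleanupSketch {
  // Assumed accessor; the test account would need to expose the container
  // it allocated for the run.
  protected CloudBlobContainer container;

  @After
  public void cleanupTestContainer() throws Exception {
    if (container != null) {
      container.deleteIfExists();  // drop the per-run "wasbtests-..." container
    }
  }
}
{code}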

> WASB test runs leak storage containers.
> ---
>
> Key: HADOOP-13764
> URL: https://issues.apache.org/jira/browse/HADOOP-13764
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Steve Loughran
>
> It appears that WASB test runs dynamically allocate a container within the 
> storage account, using a naming convention of "wasbtests--". 
>  These containers are not cleaned up automatically, so they remain in the 
> storage account indefinitely.






[jira] [Created] (HADOOP-13764) WASB test runs leak storage containers.

2016-10-26 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13764:
--

 Summary: WASB test runs leak storage containers.
 Key: HADOOP-13764
 URL: https://issues.apache.org/jira/browse/HADOOP-13764
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Steve Loughran


It appears that WASB test runs dynamically allocate a container within the 
storage account, using a naming convention of "wasbtests--".  
These containers are not cleaned up automatically, so they remain in the 
storage account indefinitely.






[jira] [Updated] (HADOOP-13764) WASB test runs leak storage containers.

2016-10-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13764:
---
Target Version/s: 2.8.0  (was: 2.8.0, 3.0.0-alpha2)

> WASB test runs leak storage containers.
> ---
>
> Key: HADOOP-13764
> URL: https://issues.apache.org/jira/browse/HADOOP-13764
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Steve Loughran
>
> It appears that WASB test runs dynamically allocate a container within the 
> storage account, using a naming convention of "wasbtests--". 
>  These containers are not cleaned up automatically, so they remain in the 
> storage account indefinitely.






[jira] [Updated] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-26 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-8299:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks Manoj for the patch!

> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-8299.01.patch, HADOOP-8299.02.patch
>
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mountable config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609711#comment-15609711
 ] 

Hudson commented on HADOOP-13514:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10691 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10691/])
Addendum patch for HADOOP-13514 Upgrade maven surefire plugin to 2.19.1. 
(weichiu: rev e48b592f8ba1d8a89587f2c4403d861f2d015a9a)
* (edit) hadoop-project/pom.xml


> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2016-10-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609700#comment-15609700
 ] 

Andrew Wang commented on HADOOP-13759:
--

I'd prefer hadoop-tools; I'm not sure why a separate cloud-storage module would 
be preferable.

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.






[jira] [Commented] (HADOOP-8299) ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException

2016-10-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609703#comment-15609703
 ] 

Andrew Wang commented on HADOOP-8299:
-

LGTM, will commit shortly.

> ViewFileSystem link slash mount point crashes with IndexOutOfBoundsException
> 
>
> Key: HADOOP-8299
> URL: https://issues.apache.org/jira/browse/HADOOP-8299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Eli Collins
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-8299.01.patch, HADOOP-8299.02.patch
>
>
> We currently assume [a typical viewfs client 
> configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
>  is a set of non-overlapping mounts. This means every time you want to add a 
> new top-level directory you need to update the client-side mountable config. 
> If users could specify a slash mount, and then add additional mounts as 
> necessary they could add a new top-level directory without updating all 
> client configs (as long as the new top-level directory was being created on 
> the NN the slash mount points to). This could be achieved by HADOOP-8298 
> (merge mounts, since we're effectively merging all new mount points with 
> slash) or having the notion of a "default NN" for a mount table.






[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13514:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed Akira's patch to trunk, branch-2, and branch-2.8. Thank you for 
root-causing the issue so quickly!

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Commented] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609480#comment-15609480
 ] 

Hadoop QA commented on HADOOP-13763:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13763 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835403/HADOOP-13763.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 6fcef2ceedce 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f511cc8 |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10903/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Updated] (HADOOP-13763) KMS REST API Documentation

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey E  Rodriguez updated HADOOP-13763:
--
Status: Patch Available  (was: Open)

> KMS REST API Documentation
> --
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha1, 2.6.5, 2.6.4, 2.7.3, 2.6.3, 2.7.2, 2.7.1, 
> 2.7.0
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Updated] (HADOOP-13763) KMS REST API Documentation

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey E  Rodriguez updated HADOOP-13763:
--
Attachment: HADOOP-13763.patch

This patch fixes the documentation typo.

> KMS REST API Documentation
> --
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Updated] (HADOOP-13763) KMS REST API Documentation Decrypt URL typo

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey E  Rodriguez updated HADOOP-13763:
--
Summary: KMS REST API Documentation Decrypt URL typo  (was: KMS REST API 
Documentation)

> KMS REST API Documentation Decrypt URL typo
> ---
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13763.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Updated] (HADOOP-13763) KMS REST API Documentation

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey E  Rodriguez updated HADOOP-13763:
--
Status: Open  (was: Patch Available)

> KMS REST API Documentation
> --
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha1, 2.6.5, 2.6.4, 2.7.3, 2.6.3, 2.7.2, 2.7.1, 
> 2.7.0
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Updated] (HADOOP-13763) KMS REST API Documentation

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey E  Rodriguez updated HADOOP-13763:
--
Status: Patch Available  (was: Open)

> KMS REST API Documentation
> --
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha1, 2.6.5, 2.6.4, 2.7.3, 2.6.3, 2.7.2, 2.7.1, 
> 2.7.0
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Commented] (HADOOP-13763) KMS REST API Documentation

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609401#comment-15609401
 ] 

Jeffrey E  Rodriguez commented on HADOOP-13763:
---

I volunteer to fix this Jira.
Thanks
Jeff Rodriguez

> KMS REST API Documentation
> --
>
> Key: HADOOP-13763
> URL: https://issues.apache.org/jira/browse/HADOOP-13763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.0, 2.7.1, 2.7.2, 2.6.3, 2.7.3, 2.6.4, 2.6.5, 
> 3.0.0-alpha1
> Environment: All- This is a KMS REST API documentation typo
>Reporter: Jeffrey E  Rodriguez
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Decrypt Encrypted Key URL REST definition has a typo:
> reads as:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
> should be:
> POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Created] (HADOOP-13763) KMS REST API Documentation

2016-10-26 Thread Jeffrey E Rodriguez (JIRA)
Jeffrey E  Rodriguez created HADOOP-13763:
-

 Summary: KMS REST API Documentation
 Key: HADOOP-13763
 URL: https://issues.apache.org/jira/browse/HADOOP-13763
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.0.0-alpha1, 2.6.5, 2.6.4, 2.7.3, 2.6.3, 2.7.2, 2.7.1, 
2.7.0
 Environment: All- This is a KMS REST API documentation typo
Reporter: Jeffrey E  Rodriguez
Priority: Minor
 Fix For: 3.0.0-alpha1


Decrypt Encrypted Key URL REST definition has a typo:
reads as:
POST http://HOST:PORT/kms/v1/keyversion//_eek?ee_op=decrypt
should be:
POST http://HOST:PORT/kms/v1/keyversion//_eek?eek_op=decrypt






[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609369#comment-15609369
 ] 

Wei-Chiu Chuang commented on HADOOP-13514:
--

+1. My local test run passed all but 5 tests. The 5 failed tests appear flaky 
and unrelated to the addendum patch.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}






[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609291#comment-15609291
 ] 

Hadoop QA commented on HADOOP-13037:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  7m 
17s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 17s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-tools_hadoop-azure-datalake generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 34s{color} 
| {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.adl.TestAdlRead |
|   | hadoop.fs.adl.live.TestAdlSupportedCharsetInPath |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835383/HADOOP-13037-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 3287813d38c9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1f8490a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10902/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 

[jira] [Commented] (HADOOP-13747) Use LongAdder for more efficient metrics tracking

2016-10-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609257#comment-15609257
 ] 

Zhe Zhang commented on HADOOP-13747:


Thanks Erik! It seems once again the discussion is leading to another JIRA 
(converting {{MutableRates}} to aggregate on read) :)

Benchmark results look good. I imagine the benefits of this optimization will 
be more significant as the number of threads increases -- e.g. 256, as used in 
some production clusters.

{{MutableRatesWithAggregation}} LGTM overall. The only structural concern I 
have is the assumption of long-lived threads. Right now {{MutableRates}} is 
only used by detailed RPC metrics, so the assumption still holds. But it might 
limit its applicability as a general-purpose metrics class. I'm happy to hear 
other people's opinions on this as well (whether we foresee any short-lived 
threads using {{MutableRates}}).

If we do want to support short-lived threads, an alternative is to use an idea 
similar to {{LongAdder}}'s: keep a set of variables holding {{}} tuples and, on 
snapshotting, apply these "log entries" one by one.
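A hedged sketch of that alternative, with hypothetical names: writers append small log entries and the snapshot drains and applies them one by one, so even short-lived threads never drop updates:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LoggedRatesSketch {
  private static final class Entry {
    final String name; final long elapsedMillis;
    Entry(String name, long elapsedMillis) {
      this.name = name;
      this.elapsedMillis = elapsedMillis;
    }
  }

  // Lock-free queue of pending updates, loosely analogous to LongAdder's cells.
  private final ConcurrentLinkedQueue<Entry> log = new ConcurrentLinkedQueue<>();

  public void add(String name, long elapsedMillis) {
    log.offer(new Entry(name, elapsedMillis));
  }

  // On snapshotting, drain and apply the log entries one by one.
  public void snapshotTo(Map<String, Long> totals) {
    for (Entry e; (e = log.poll()) != null; ) {
      totals.merge(e.name, e.elapsedMillis, Long::sum);
    }
  }
}
{code}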


> Use LongAdder for more efficient metrics tracking
> -
>
> Key: HADOOP-13747
> URL: https://issues.apache.org/jira/browse/HADOOP-13747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
> Attachments: HADOOP-13747.patch, benchmark_results
>
>
> Currently many metrics, including {{RpcMetrics}} and {{RpcDetailedMetrics}}, 
> use a synchronized counter to be updated by all handler threads (multiple 
> hundreds in large production clusters). As [~andrew.wang] suggested, it'd be 
> more efficient to use the [LongAdder | 
> http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?view=co]
>  library, which dynamically creates intermediate-result variables.
> Assigning to [~xkrogen] who has already done some investigation on this.






[jira] [Commented] (HADOOP-13762) S3A: Set thread names with more specific information about the call.

2016-10-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15609219#comment-15609219
 ] 

Chris Nauroth commented on HADOOP-13762:


See HDFS-11063 for a similar proposal I've created on the HDFS NameNode side.

For example, upon entering a public S3A method, we could append useful 
information to the thread name, such as the user, the start time of the 
operation, and the file path argument.  This information would be visible in 
the thread names when running {{jstack}}.  That way, we could see not only that 
a thread is spending a long time in a {{globStatus}} call, but also which user 
is making the call, which path is referenced, and when the operation started.

This proposal would get trickier in combination with some of the plans around 
asynchronous and parallel execution.  We'd need to pass along that contextual 
information to all of the threads that make up the high-level operation.

It's important that after the S3A operation completes, we restore the prior 
value of the thread name.  S3A is called from user applications that own the 
lifecycle of the thread.  If applications have set meaningful information into 
the thread name already, then we don't want that to remain changed after the 
thread exits the S3A code.
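A minimal sketch of the save-and-restore pattern described above; the names ({{user}}, {{innerGetFileStatus}}) are illustrative, not the proposed S3A change itself:

{code}
public FileStatus getFileStatus(Path path) throws IOException {
  Thread t = Thread.currentThread();
  String priorName = t.getName();
  // Append context so jstack shows who/what/when, not just where.
  t.setName(priorName + " [S3A getFileStatus user=" + user
      + " path=" + path + " start=" + System.currentTimeMillis() + "]");
  try {
    return innerGetFileStatus(path);   // hypothetical inner implementation
  } finally {
    t.setName(priorName);              // the caller owns the thread's name
  }
}
{code}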

> S3A: Set thread names with more specific information about the call.
> 
>
> Key: HADOOP-13762
> URL: https://issues.apache.org/jira/browse/HADOOP-13762
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>
> Running {{jstack}} on a hung process and reading the stack traces is a 
> helpful way to determine exactly what code in the process is stuck.  This 
> would be even more helpful if we included more descriptive information about 
> the specific file system method call.






[jira] [Created] (HADOOP-13762) S3A: Set thread names with more specific information about the call.

2016-10-26 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13762:
--

 Summary: S3A: Set thread names with more specific information 
about the call.
 Key: HADOOP-13762
 URL: https://issues.apache.org/jira/browse/HADOOP-13762
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Chris Nauroth


Running {{jstack}} on a hung process and reading the stack traces is a helpful 
way to determine exactly what code in the process is stuck.  This would be even 
more helpful if we included more descriptive information about the specific 
file system method call.






[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: HADOOP-13037-002.patch

Packaged AdlPermission.java within the patch.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: (was: HADOOP-13037-002.patch)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608982#comment-15608982
 ] 

Hadoop QA commented on HADOOP-13037:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 51 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
47s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 47s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-tools_hadoop-azure-datalake generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m  
0s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13037 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12835364/HADOOP-13037-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 40a63bc10527 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e90af4a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10901/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-azure-datalake.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10901/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 

[jira] [Commented] (HADOOP-13502) Split fs.contract.is-blobstore flag into more descriptive flags for use by contract tests.

2016-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608964#comment-15608964
 ] 

Hudson commented on HADOOP-13502:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10688 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10688/])
HADOOP-13502. Split fs.contract.is-blobstore flag into more descriptive 
(cnauroth: rev 1f8490a5bacd98d0d371447ada3b31f93ca40a4e)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractCreateTest.java
* (edit) hadoop-tools/hadoop-aws/src/test/resources/contract/s3a.xml
* (edit) hadoop-common-project/hadoop-common/src/test/resources/contract/ftp.xml
* (edit) hadoop-tools/hadoop-aws/src/test/resources/contract/s3n.xml
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractOptions.java
* (edit) .gitignore
* (edit) hadoop-tools/hadoop-openstack/src/test/resources/contract/swift.xml


> Split fs.contract.is-blobstore flag into more descriptive flags for use by 
> contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13502) Split fs.contract.is-blobstore flag into more descriptive flags for use by contract tests.

2016-10-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608939#comment-15608939
 ] 

Chris Nauroth commented on HADOOP-13502:


Xiaoyu, thank you for your code review too.

> Split fs.contract.is-blobstore flag into more descriptive flags for use by 
> contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13502) Split fs.contract.is-blobstore flag into more descriptive flags for use by contract tests.

2016-10-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13502:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.0-alpha2
  2.8.0
Target Version/s: 2.8.0  (was: 2.9.0)
  Status: Resolved  (was: Patch Available)

Steve, thank you for your review.  I committed this to trunk, branch-2 and 
branch-2.8 after completing another test run.

> Split fs.contract.is-blobstore flag into more descriptive flags for use by 
> contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13597) Switch KMS to use Jetty

2016-10-26 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608924#comment-15608924
 ] 

John Zhuge commented on HADOOP-13597:
-

Starting work based on Robert's patch 011 for HADOOP-10075.

Will use embedded Jetty. Migrate all configs to a single kms-site.xml, 
including Tomcat port and SSL settings currently in server.xml and 
ssl-server.xml.

Will use HttpServer2.
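
For reference, a bare-bones embedded Jetty 9 startup looks roughly like the sketch below. This is illustrative only, since the actual work will go through {{HttpServer2}} and read its settings from the single kms-site.xml; the port value here is a placeholder.

{code}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

// Minimal embedded Jetty 9 sketch (illustrative; not the KMS patch itself).
public class EmbeddedJettySketch {
  public static void main(String[] args) throws Exception {
    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(9600); // in practice, read from the single kms-site.xml
    server.addConnector(connector);
    server.start(); // accepts connections until stop() is called
    server.join();
  }
}
{code}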

> Switch KMS to use Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13597) Switch KMS to use Jetty

2016-10-26 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13597 started by John Zhuge.
---
> Switch KMS to use Jetty
> ---
>
> Key: HADOOP-13597
> URL: https://issues.apache.org/jira/browse/HADOOP-13597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would require 
> more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608857#comment-15608857
 ] 

Hudson commented on HADOOP-13614:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10687 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10687/])
HADOOP-13614. Purge some superfluous/obsolete S3 FS tests that are (cnauroth: 
rev 9cad3e235026dbe4658705ca85d263d0edf14521)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/S3AScaleTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmPropagation.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryption.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/yarn/ITestS3AMiniYarnCluster.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AInputStreamPerformance.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/AbstractSTestS3AHugeFiles.java
* (edit) hadoop-tools/hadoop-aws/pom.xml
* (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockingThreadPool.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADirectoryPerformance.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteManyFiles.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFailureHandling.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesClassicOutput.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/yarn/ITestS3A.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/S3AContract.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlocksize.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3ADeleteFilesOneByOne.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestFSMainOperationsLocalFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java


> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, 
> HADOOP-13614-branch-2-008.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13502) Split fs.contract.is-blobstore flag into more descriptive flags for use by contract tests.

2016-10-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13502:
---
Summary: Split fs.contract.is-blobstore flag into more descriptive flags 
for use by contract tests.  (was: Rename/split fs.contract.is-blobstore flag 
used by contract tests.)

> Split fs.contract.is-blobstore flag into more descriptive flags for use by 
> contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13758) HADOOP-13614 pre-commit testing JIRA

2016-10-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13758:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

I have +1'd the patch on HADOOP-13614 and committed it.  I'm resolving this as 
a duplicate.

> HADOOP-13614 pre-commit testing JIRA
> 
>
> Key: HADOOP-13758
> URL: https://issues.apache.org/jira/browse/HADOOP-13758
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Steve Loughran
> Attachments: HADOOP-13758-branch-2.001.patch, 
> HADOOP-13758-branch-2.002.patch
>
>
> We've hit a situation where pre-commit testing on HADOOP-13614 is blocked due 
> to some confusion in the interactions between patch file attachments and 
> GitHub pull requests.  This is a fresh JIRA issue intended to help facilitate 
> pre-commit testing for HADOOP-13614.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-13614.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0

I have committed this to trunk, branch-2 and branch-2.8 after completing 
pre-commit on HADOOP-13758.  Steve, thank you for the contribution.

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, 
> HADOOP-13614-branch-2-008.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-10-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608807#comment-15608807
 ] 

Chris Nauroth edited comment on HADOOP-13614 at 10/26/16 3:40 PM:
--

+1

I have committed this to trunk, branch-2 and branch-2.8 after completing 
pre-commit on HADOOP-13758.  Steve, thank you for the contribution.


was (Author: cnauroth):
I have committed this to trunk, branch-2 and branch-2.8 after completing 
pre-commit on HADOOP-13758.  Steve, thank you for the contribution.

> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, 
> HADOOP-13614-branch-2-008.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Patch Available  (was: Open)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: HADOOP-13037-002.patch

Resubmitting the Hadoop patch to trigger a Jenkins build.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: (was: HADOOP-13037-002.patch)

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop

2016-10-26 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Open  (was: Patch Available)

Reverting the patch since the Jenkins Hadoop QA build did not trigger on it.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> --
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/azure, tools
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
>
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch, 
> HADOOP-13037-002.patch
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest 
> interface. The client will  access the ADLS store using WebHDFS Rest APIs 
> provided by the ADLS store. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13742) Expose "NumOpenConnectionsPerUser" as a metric

2016-10-26 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608723#comment-15608723
 ] 

Brahma Reddy Battula commented on HADOOP-13742:
---

Can somebody review this?

> Expose "NumOpenConnectionsPerUser" as a metric
> --
>
> Key: HADOOP-13742
> URL: https://issues.apache.org/jira/browse/HADOOP-13742
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-13742-002.patch, HADOOP-13742.patch
>
>
> To track user-level connections (how many connections each user holds) in a 
> busy cluster with many open connections to the server.
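
For illustration only, per-user tracking could be as simple as the sketch below; the class and method names are hypothetical, not taken from the attached patches:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of per-user open-connection counting. Eviction of
// idle users is deliberately left out to keep the sketch race-free.
public class UserConnectionTracker {
  private final ConcurrentHashMap<String, AtomicInteger> openPerUser =
      new ConcurrentHashMap<>();

  public void connectionOpened(String user) {
    openPerUser.computeIfAbsent(user, u -> new AtomicInteger()).incrementAndGet();
  }

  public void connectionClosed(String user) {
    AtomicInteger count = openPerUser.get(user);
    if (count != null) {
      count.decrementAndGet();
    }
  }

  // Snapshot consumed when emitting a "NumOpenConnectionsPerUser" style metric.
  public Map<String, Integer> snapshot() {
    Map<String, Integer> out = new HashMap<>();
    openPerUser.forEach((user, count) -> out.put(user, count.get()));
    return out;
  }
}
{code}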



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-8134) DNS claims to return a hostname but returns a PTR record in some cases

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8134.
-
Resolution: Not A Problem
  Assignee: (was: Harsh J)

This hasn't proven to be a problem of late. Closing as stale.

> DNS claims to return a hostname but returns a PTR record in some cases
> --
>
> Key: HADOOP-8134
> URL: https://issues.apache.org/jira/browse/HADOOP-8134
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Priority: Minor
>
> Per Shrijeet on HBASE-4109:
> {quote}
> If you are using an interface other than 'default' (literally that 
> keyword), DNS.java's getDefaultHost will return a string which will have a 
> trailing period at the end. It seems the javadoc of reverseDns in DNS.java 
> (see below) conflicts with what that function is actually doing: it returns 
> a PTR record while claiming to return a hostname. The PTR record always has 
> a period at the end; RFC: 
> http://irbs.net/bog-4.9.5/bog47.html
> We call DNS.getDefaultHost in more than one place and treat the result as 
> the actual hostname.
> Quoting HRegionServer for example:
> String machineName = DNS.getDefaultHost(conf.get(
> "hbase.regionserver.dns.interface", "default"), conf.get(
> "hbase.regionserver.dns.nameserver", "default"));
> We may want to sanitize the string returned from the DNS class. Or, better, 
> we could overhaul the way we do DNS name matching all over.
> {quote}
> While HBase has worked around the issue, we should fix the methods that 
> aren't doing what was intended.
> 1. We fix the method. This may be an 'incompatible change', but I do not know 
> who outside of us uses the DNS classes.
> 2. We fix HDFS's DN at the calling end, because that is affected by the 
> trailing period in its reporting back to the NN as well (just affects NN->DN 
> weblinks, non-critical).
> For 2, we can close this and open an HDFS JIRA.
> Thoughts?
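
For what it's worth, the sanitization mentioned in option 1 could be as small as the sketch below (a sketch of the idea, not a committed fix):

{code}
// Sketch: strip the trailing period that a PTR lookup leaves on the name.
public static String stripTrailingDot(String host) {
  return (host != null && host.endsWith("."))
      ? host.substring(0, host.length() - 1)
      : host;
}
{code}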



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2016-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608637#comment-15608637
 ] 

ASF GitHub Bot commented on HADOOP-1381:


GitHub user QwertyManiac opened a pull request:

https://github.com/apache/hadoop/pull/147

HADOOP-1381. The distance between sync blocks in SequenceFiles should…

… be configurable rather than hard coded to 2000 bytes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QwertyManiac/hadoop HADOOP-1381

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/147.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #147


commit dbfd4090c2d97f6dfd984c3d77ed9b78b7ea1a93
Author: Harsh J 
Date:   2016-10-26T14:34:33Z

HADOOP-1381. The distance between sync blocks in SequenceFiles should be 
configurable rather than hard coded to 2000 bytes.




> The distance between sync blocks in SequenceFiles should be configurable 
> rather than hard coded to 2000 bytes
> -
>
> Key: HADOOP-1381
> URL: https://issues.apache.org/jira/browse/HADOOP-1381
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.0.0-alpha
>Reporter: Owen O'Malley
>Assignee: Harsh J
> Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
> HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff, 
> HADOOP-1381.r5.diff
>
>
> Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much 
> better if it was configurable with a much higher default (1mb or so?).
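
For illustration, usage under the proposal might look like the sketch below; the configuration key name is hypothetical here, used only to show the shape of the change:

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical usage sketch; "io.seqfile.sync.interval" is an illustrative
// key name, not necessarily the one the patch introduces.
public class SyncIntervalExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("io.seqfile.sync.interval", 100 * 1024); // ~100 KB between syncs
    // A SequenceFile.Writer created with this conf would then emit sync
    // marks roughly every 100 KB instead of every 2000 bytes.
  }
}
{code}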



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9424) The "hadoop jar" invocation should include the passed jar on the classpath as a whole

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9424:

  Resolution: Invalid
Target Version/s:   (was: )
  Status: Resolved  (was: Patch Available)

This proposed approach does not appear to be entirely desirable.

> The "hadoop jar" invocation should include the passed jar on the classpath as 
> a whole
> -
>
> Key: HADOOP-9424
> URL: https://issues.apache.org/jira/browse/HADOOP-9424
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.0.3-alpha
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9424.patch
>
>
> When you have a case such as this:
> {{X.jar -> Classes = Main, Foo}}
> {{Y.jar -> Classes = Bar}}
> With implementation details such as:
> * Main references Bar and invokes a public, static method on it.
> * Bar does a class lookup to find Foo (Class.forName("Foo")).
> Then when you do a {{HADOOP_CLASSPATH=Y.jar hadoop jar X.jar Main}}, Bar's 
> method fails with a ClassNotFoundException because of the way RunJar runs.
> RunJar extracts the passed jar and includes its contents on the ClassLoader 
> of its current thread, but the {{Class.forName(…)}} call from another class 
> does not check that class loader and hence cannot find the class, as it's 
> not on any classpath it is aware of.
> The "hadoop jar" script should ideally add the passed jar argument to the 
> CLASSPATH before RunJar is invoked, for the above case to pass.
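
For reference, a lookup that consults the thread context class loader sidesteps the failure described above; this is a sketch of the general pattern, not the attached patch:

{code}
// RunJar places the extracted jar's classes on the thread context class
// loader, so a Class.forName that takes an explicit loader succeeds where
// the one-argument form (which only consults the caller's defining loader)
// throws ClassNotFoundException.
static Class<?> lookup(String name) throws ClassNotFoundException {
  return Class.forName(name, true,
      Thread.currentThread().getContextClassLoader());
}
{code}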



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-26 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608556#comment-15608556
 ] 

Akira Ajisaka commented on HADOOP-13514:


bq. this property is also used in hadoop-yarn-registry/pom.xml so we probably 
want to update that as well I guess?
Actually we don't need to update this because hadoop-yarn-registry is a 
submodule of hadoop-project and the property is inherited.

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2016-10-26 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608364#comment-15608364
 ] 

Wei-Chiu Chuang commented on HADOOP-13514:
--

Hi [~ajisakaa] thanks for finding the root cause. I am running your patch 
against all tests locally.
However, a quick search of "<maven-surefire-plugin.version>" suggests this 
property is also used in hadoop-yarn-registry/pom.xml, so we probably want to 
update that as well, I guess?

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13514-addendum.01.patch, HADOOP-13514.002.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13659) Upgrade jaxb-api version

2016-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15608244#comment-15608244
 ] 

Hudson commented on HADOOP-13659:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10684 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10684/])
HADOOP-13659. Upgrade jaxb-api version. Contributed by Sean Mackrory. (weichiu: 
rev 24a83febea4bef4d52f1ab952138d2fff0fa2445)
* (edit) hadoop-project/pom.xml


> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch, HADOOP-13659.002.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13659) Upgrade jaxb-api version

2016-10-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13659:
-
Release Note: Bump the version of third party dependency jaxb-api to 2.2.11.

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch, HADOOP-13659.002.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13659) Upgrade jaxb-api version

2016-10-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13659:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed the patch 002 to trunk. Thanks to [~mackrorysd] for the patch and 
[~steve_l] for the comment!

> Upgrade jaxb-api version
> 
>
> Key: HADOOP-13659
> URL: https://issues.apache.org/jira/browse/HADOOP-13659
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13659.001.patch, HADOOP-13659.002.patch
>
>
> We're currently pulling in version 2.2.2 - I think we should upgrade to the 
> latest 2.2.12.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2016-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607863#comment-15607863
 ] 

Hadoop QA commented on HADOOP-6801:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 4 new + 432 unchanged - 0 fixed = 436 total (was 432) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
35s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-6801 |
| GITHUB PR | https://github.com/apache/hadoop/pull/146 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 5a8fa2cd5995 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44fdf00 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10900/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10900/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10900/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
> still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
> 

[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-26 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-11798:
---
Release Note: This provides a native implementation of the XOR codec by 
leveraging the Intel ISA-L library to achieve better performance. 

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285, 3.0.0-alpha2
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch, HADOOP-11798-v5.patch
>
>
> The raw XOR coder is utilized by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. A native implementation of it 
> would therefore be worthwhile for the performance gain.
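
For context, single-erasure recovery with XOR is just an XOR across the survivors; the sketch below shows the pure-Java loop, roughly the work the native coder delegates to ISA-L:

{code}
// Pure-Java sketch of single-erasure XOR recovery: the lost block equals
// the XOR of all surviving blocks (data plus parity) in the stripe.
public static byte[] recoverErased(byte[][] survivors, int blockLen) {
  byte[] recovered = new byte[blockLen];
  for (byte[] block : survivors) {
    for (int i = 0; i < blockLen; i++) {
      recovered[i] ^= block[i];
    }
  }
  return recovered;
}
{code}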



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes

2016-10-26 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607835#comment-15607835
 ] 

SammiChen commented on HADOOP-11798:


Thanks [~jojochuang] so much for reviewing and committing the patch! Sure, I 
will add the release note. 

> Native raw erasure coder in XOR codes
> -
>
> Key: HADOOP-11798
> URL: https://issues.apache.org/jira/browse/HADOOP-11798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-must-do
> Fix For: HDFS-7285, 3.0.0-alpha2
>
> Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, 
> HADOOP-11798-v3.patch, HADOOP-11798-v4.patch, HADOOP-11798-v5.patch
>
>
> The raw XOR coder is utilized by the Reed-Solomon erasure coder in an 
> optimization to recover a single erased block, which is the most common case. 
> It can also be used in the HitchHiker coder. A native implementation of it 
> would therefore be worthwhile for the performance gain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2016-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607801#comment-15607801
 ] 

Hadoop QA commented on HADOOP-12549:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestSaslRPC |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-12549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770469/HADOOP-12549.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c8e12d481d9c 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44fdf00 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10899/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10899/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10899/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
>  

[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2016-10-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607784#comment-15607784
 ] 

Harsh J commented on HADOOP-6801:
-

[~cnauroth] - Thanks for reviewing! I added the ordering test as well. The 
updated patch is in the PR.

> io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
> still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
> ---
>
> Key: HADOOP-6801
> URL: https://issues.apache.org/jira/browse/HADOOP-6801
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Erik Steffl
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-6801.05.patch, HADOOP-6801.r1.diff, 
> HADOOP-6801.r2.diff, HADOOP-6801.r3.diff, HADOOP-6801.r4.diff, 
> HADOOP-6801.r5.diff
>
>
> The following configuration keys in CommonConfigurationKeysPublic.java 
> (formerly CommonConfigurationKeys.java):
> public static final String  IO_SORT_MB_KEY = "io.sort.mb";
> public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
> were only partially moved:
>   - they were renamed to mapreduce.task.io.sort.mb and 
> mapreduce.task.io.sort.factor respectively
>   - they were moved to the mapreduce project and documented in 
> mapred-default.xml
> However:
>   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
> above
>   - the strings "io.sort.mb" and "io.sort.factor" are still used in 
> SequenceFile.java in the Hadoop Common project
> The constants should probably be removed from 
> CommonConfigurationKeysPublic.java, but the best solution for 
> SequenceFile.java is unclear.
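
For context, the usual bridge for such renamed keys is Configuration's
deprecation mechanism. A minimal sketch, assuming the
Configuration.addDeprecation(String, String) overload; this is illustrative
only, not the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;

public class SortKeyDeprecationDemo {
  public static void main(String[] args) {
    // Register the old names as deprecated aliases of the new mapreduce.task.*
    // keys so that reads and writes through either name stay consistent.
    Configuration.addDeprecation("io.sort.mb", "mapreduce.task.io.sort.mb");
    Configuration.addDeprecation("io.sort.factor", "mapreduce.task.io.sort.factor");

    Configuration conf = new Configuration(false);
    conf.setInt("io.sort.mb", 200);  // write via the old key...
    // ...and read it back through the new key (prints 200).
    System.out.println(conf.getInt("mapreduce.task.io.sort.mb", 100));
  }
}
{code}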






[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2016-10-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607782#comment-15607782
 ] 

ASF GitHub Bot commented on HADOOP-6801:


GitHub user QwertyManiac opened a pull request:

https://github.com/apache/hadoop/pull/146

HADOOP-6801. io.sort.mb and io.sort.factor were renamed and moved to …

…mapreduce but are still in CommonConfigurationKeysPublic.java and used in 
SequenceFile.java

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QwertyManiac/hadoop HADOOP-6801

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #146


commit d776f3a3eca7fa821fb7373ad91703b1b04cc9a7
Author: Harsh J 
Date:   2016-10-26T07:51:51Z

HADOOP-6801. io.sort.mb and io.sort.factor were renamed and moved to 
mapreduce but are still in CommonConfigurationKeysPublic.java and used in 
SequenceFile.java




> io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
> still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
> ---
>
> Key: HADOOP-6801
> URL: https://issues.apache.org/jira/browse/HADOOP-6801
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Erik Steffl
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-6801.05.patch, HADOOP-6801.r1.diff, 
> HADOOP-6801.r2.diff, HADOOP-6801.r3.diff, HADOOP-6801.r4.diff, 
> HADOOP-6801.r5.diff
>
>
> The following configuration keys in CommonConfigurationKeysPublic.java 
> (formerly CommonConfigurationKeys.java):
> public static final String  IO_SORT_MB_KEY = "io.sort.mb";
> public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
> were only partially moved:
>   - they were renamed to mapreduce.task.io.sort.mb and 
> mapreduce.task.io.sort.factor respectively
>   - they were moved to the mapreduce project and documented in 
> mapred-default.xml
> However:
>   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
> above
>   - the strings "io.sort.mb" and "io.sort.factor" are still used in 
> SequenceFile.java in the Hadoop Common project
> The constants should probably be removed from 
> CommonConfigurationKeysPublic.java, but the best solution for 
> SequenceFile.java is unclear.






[jira] [Resolved] (HADOOP-7505) EOFException in RPC stack should have a nicer error message

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-7505.
-
Resolution: Duplicate
  Assignee: (was: Harsh J)

This seems to have been taken care of (in part) via HADOOP-7346.

> EOFException in RPC stack should have a nicer error message
> ---
>
> Key: HADOOP-7505
> URL: https://issues.apache.org/jira/browse/HADOOP-7505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>Priority: Minor
>
> Lots of user logs involve a user running mismatched versions, and for one 
> reason or another they get an EOFException instead of a proper version-mismatch 
> exception. We should be able to catch this at the appropriate points and raise 
> a nicer exception message explaining that it is a possible version mismatch, or 
> that they are trying to connect to the incorrect port.
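
The gist of the request, as a hedged sketch (the method and its parameters
below are invented for illustration; the real fix would live in the RPC client
code): catch the bare EOFException at the connection layer and rethrow it with
a hint about the likely causes.

{code}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class FriendlyEofDemo {
  // Wrap the bare EOFException with a message naming the likely causes.
  static int readResponse(DataInputStream in, String server) throws IOException {
    try {
      return in.readInt();
    } catch (EOFException e) {
      throw new IOException("Unexpected end of stream while reading from "
          + server + "; this often indicates a client/server version mismatch"
          + " or a connection to the wrong port.", e);
    }
  }
}
{code}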






[jira] [Resolved] (HADOOP-8579) Websites for HDFS and MapReduce both send users to video training resource which is non-public

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8579.
-
Resolution: Not A Problem
  Assignee: (was: Harsh J)

This does not appear to be a problem after the project re-merge.

> Websites for HDFS and MapReduce both send users to video training resource 
> which is non-public
> --
>
> Key: HADOOP-8579
> URL: https://issues.apache.org/jira/browse/HADOOP-8579
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: website
>Reporter: David L. Willson
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The main pages for HDFS and MapReduce send new users to an unavailable 
> training resource.
> These two pages:
> http://hadoop.apache.org/mapreduce/
> http://hadoop.apache.org/hdfs/
> link to this page:
> http://vimeo.com/3584536
> That page is not public, is not shared with all registered Vimeo users, and I 
> see nothing indicating how to ask for access to the resource.
> Please make the videos public, or remove the link of disappointment.






[jira] [Resolved] (HADOOP-8863) Eclipse plugin may not be working on Juno due to changes in it

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8863.
-
Resolution: Won't Fix
  Assignee: (was: Harsh J)

The Eclipse plugin has formally been removed from the project.

> Eclipse plugin may not be working on Juno due to changes in it
> --
>
> Key: HADOOP-8863
> URL: https://issues.apache.org/jira/browse/HADOOP-8863
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: contrib/eclipse-plugin
>Affects Versions: 1.2.0
>Reporter: Harsh J
>
> We need to debug/investigate why it is so.






[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2016-10-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607688#comment-15607688
 ] 

Harsh J commented on HADOOP-12549:
--

[~yzhangal] or [~aw] - Could you help review this one? The change should help 
the limited class of clients that deal with multiple RM/KMS/etc. services (HDFS 
is already covered via an hdfs-default.xml change, for the most classic use case 
of DistCp).

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.patch
>
>
> In HDFS-7546 we added an hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users, and only those that use the 
> default-loading mechanism of the Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code as 
> well, so that the default affects all forms of clients equally.
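
To illustrate the kind of check involved, a minimal Java sketch assuming a
simple glob pattern where "*" matches anything (the real matching lives in the
SASL RPC client; the class and method names here are invented for the example):

{code}
import java.util.regex.Pattern;

public class PrincipalPatternDemo {
  // Accept a server principal when it matches a glob pattern; a "*" default
  // therefore trusts all principals, restoring the pre-HADOOP-9789 behaviour.
  static boolean matchesPattern(String principal, String glob) {
    String regex = Pattern.quote(glob).replace("*", "\\E.*\\Q");
    return Pattern.matches(regex, principal);
  }

  public static void main(String[] args) {
    System.out.println(matchesPattern("hdfs/host.internal@REALM", "*"));            // true
    System.out.println(matchesPattern("hdfs/host.internal@REALM", "hdfs/*@REALM")); // true
    System.out.println(matchesPattern("hdfs/host.internal@REALM", "yarn/*@REALM")); // false
  }
}
{code}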






[jira] [Updated] (HADOOP-13056) Print expected values when rejecting a server's determined principal

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13056:
-
Target Version/s:   (was: 2.9.0)

> Print expected values when rejecting a server's determined principal
> 
>
> Key: HADOOP-13056
> URL: https://issues.apache.org/jira/browse/HADOOP-13056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Harsh J
>Priority: Trivial
> Attachments: HADOOP-13056.000.patch
>
>
> When a service principal constructed by the client from the server address 
> does not match the provided pattern or the configured principal property, the 
> error is very uninformative about the specific cause. Currently the only error 
> printed, in both cases, is:
> {code}
>  java.lang.IllegalArgumentException: Server has invalid Kerberos principal: 
> hdfs/host.internal@REALM
> {code}
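
What the improved message could look like, as a hedged sketch (the method and
its simplified matching are invented for illustration; the point is only that
the expected values appear in the exception text):

{code}
public class PrincipalErrorDemo {
  // Reject as before, but name both the principal derived from the server
  // address and the values it was checked against.
  static void checkPrincipal(String derived, String configured, String pattern) {
    boolean ok = derived.equals(configured)
        || (pattern != null && derived.matches(pattern)); // simplified match
    if (!ok) {
      throw new IllegalArgumentException("Server has invalid Kerberos principal: "
          + derived + "; expected '" + configured
          + "' or a match of pattern '" + pattern + "'");
    }
  }
}
{code}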






[jira] [Updated] (HADOOP-13056) Print expected values when rejecting a server's determined principal

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13056:
-
Resolution: Duplicate
  Assignee: (was: Harsh J)
Status: Resolved  (was: Patch Available)

Thanks [~steve_l], and sorry for the delay. This seems to have been done (in 
the messages at least) by HADOOP-13503. Closing out as a dupe.

> Print expected values when rejecting a server's determined principal
> 
>
> Key: HADOOP-13056
> URL: https://issues.apache.org/jira/browse/HADOOP-13056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Harsh J
>Priority: Trivial
> Attachments: HADOOP-13056.000.patch
>
>
> When a service principal constructed by the client from the server address 
> does not match the provided pattern or the configured principal property, the 
> error is very uninformative about the specific cause. Currently the only error 
> printed, in both cases, is:
> {code}
>  java.lang.IllegalArgumentException: Server has invalid Kerberos principal: 
> hdfs/host.internal@REALM
> {code}






[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2016-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607638#comment-15607638
 ] 

Steve Loughran commented on HADOOP-13759:
-

either under hadoop-tools/, or the proposed new cloud-storage/ section. It 
doesn't quite fit in there, but it's closer, isn't it?

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.






[jira] [Assigned] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2016-10-26 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-13759:
---

Assignee: Yuanbo Liu

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.






[jira] [Commented] (HADOOP-13694) Data transfer encryption with AES 192: Invalid key length.

2016-10-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15607544#comment-15607544
 ] 

Harsh J commented on HADOOP-13694:
--

[~hitliuyi] or [~jojochuang], could you help review this one? The patch extends 
the OpensslCipher implementation to cover AES-192 and adds tests for all 
available AES variants (the existing tests only cover 128-bit).
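
For reference, plain JCE already accepts all three AES key lengths in CTR mode
(subject to the unlimited-strength policy on older JREs), and OpenSSL likewise
supports AES-192, so the gap is in the native wrapper accepting 24-byte keys.
A small standalone check (names invented for the example):

{code}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class AesKeyLengthDemo {
  public static void main(String[] args) throws Exception {
    for (int bits : new int[] {128, 192, 256}) {
      KeyGenerator kg = KeyGenerator.getInstance("AES");
      kg.init(bits);
      SecretKey key = kg.generateKey();
      Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
      c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(new byte[16]));
      // Each key length initializes and encrypts without complaint.
      System.out.println(bits + "-bit AES key accepted: "
          + c.doFinal("hello".getBytes("UTF-8")).length + " bytes out");
    }
  }
}
{code}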

> Data transfer encryption with AES 192: Invalid key length.
> --
>
> Key: HADOOP-13694
> URL: https://issues.apache.org/jira/browse/HADOOP-13694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.2
> Environment: OS: Ubuntu 14.04
> /hadoop-2.7.2/bin$ uname -a
> Linux wkstn-kpalaniappan 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 
> 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> /hadoop-2.7.2/bin$ java -version
> java version "1.7.0_95"
> OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)
> OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
> Hadoop version: 2.7.2
>Reporter: Karthik Palaniappan
>Assignee: Harsh J
>
> Configuring AES-128 or AES-256 encryption 
> (dfs.encrypt.data.transfer.cipher.key.bitlength = [128, 256]) works perfectly 
> fine. Trying to use AES-192 generates this exception on the datanode:
> 16/02/29 17:34:10 ERROR datanode.DataNode: 
> wkstn-kpalaniappan:50010:DataXceiver error processing unknown operation  src: 
> /127.0.0.1:57237 dst: /127.0.0.1:50010
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:396)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getEncryptedStreams(SaslDataTransferServer.java:178)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:110)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:193)
>   at java.lang.Thread.run(Thread.java:745)
> And this exception on the client:
> /hadoop-2.7.2/bin$ ./hdfs dfs -copyFromLocal ~/.vimrc /vimrc
> 16/02/29 17:34:10 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
>   at 
> 

[jira] [Updated] (HADOOP-13694) Data transfer encryption with AES 192: Invalid key length.

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13694:
-
Status: Patch Available  (was: Open)

> Data transfer encryption with AES 192: Invalid key length.
> --
>
> Key: HADOOP-13694
> URL: https://issues.apache.org/jira/browse/HADOOP-13694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.2
> Environment: OS: Ubuntu 14.04
> /hadoop-2.7.2/bin$ uname -a
> Linux wkstn-kpalaniappan 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 
> 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> /hadoop-2.7.2/bin$ java -version
> java version "1.7.0_95"
> OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)
> OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
> Hadoop version: 2.7.2
>Reporter: Karthik Palaniappan
>Assignee: Harsh J
>
> Configuring AES-128 or AES-256 encryption 
> (dfs.encrypt.data.transfer.cipher.key.bitlength = [128, 256]) works perfectly 
> fine. Trying to use AES-192 generates this exception on the datanode:
> 16/02/29 17:34:10 ERROR datanode.DataNode: 
> wkstn-kpalaniappan:50010:DataXceiver error processing unknown operation  src: 
> /127.0.0.1:57237 dst: /127.0.0.1:50010
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:396)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getEncryptedStreams(SaslDataTransferServer.java:178)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:110)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:193)
>   at java.lang.Thread.run(Thread.java:745)
> And this exception on the client:
> /hadoop-2.7.2/bin$ ./hdfs dfs -copyFromLocal ~/.vimrc /vimrc
> 16/02/29 17:34:10 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1266)
>   at 
> 

[jira] [Updated] (HADOOP-13694) Data transfer encryption with AES 192: Invalid key length.

2016-10-26 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-13694:
-
Status: Open  (was: Patch Available)

> Data transfer encryption with AES 192: Invalid key length.
> --
>
> Key: HADOOP-13694
> URL: https://issues.apache.org/jira/browse/HADOOP-13694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.2
> Environment: OS: Ubuntu 14.04
> /hadoop-2.7.2/bin$ uname -a
> Linux wkstn-kpalaniappan 3.13.0-79-generic #123-Ubuntu SMP Fri Feb 19 
> 14:27:58 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> /hadoop-2.7.2/bin$ java -version
> java version "1.7.0_95"
> OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.1)
> OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
> Hadoop version: 2.7.2
>Reporter: Karthik Palaniappan
>Assignee: Harsh J
>
> Configuring AES-128 or AES-256 encryption 
> (dfs.encrypt.data.transfer.cipher.key.bitlength = [128, 256]) works perfectly 
> fine. Trying to use AES-192 generates this exception on the datanode:
> 16/02/29 17:34:10 ERROR datanode.DataNode: 
> wkstn-kpalaniappan:50010:DataXceiver error processing unknown operation  src: 
> /127.0.0.1:57237 dst: /127.0.0.1:50010
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:396)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getEncryptedStreams(SaslDataTransferServer.java:178)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:110)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:193)
>   at java.lang.Thread.run(Thread.java:745)
> And this exception on the client:
> /hadoop-2.7.2/bin$ ./hdfs dfs -copyFromLocal ~/.vimrc /vimrc
> 16/02/29 17:34:10 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Invalid key length.
>   at org.apache.hadoop.crypto.OpensslCipher.init(Native Method)
>   at org.apache.hadoop.crypto.OpensslCipher.init(OpensslCipher.java:176)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec$OpensslAesCtrCipher.init(OpensslAesCtrCryptoCodec.java:116)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.updateDecryptor(CryptoInputStream.java:290)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.resetStreamOffset(CryptoInputStream.java:303)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:128)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:109)
>   at 
> org.apache.hadoop.crypto.CryptoInputStream.<init>(CryptoInputStream.java:133)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:345)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1266)
>   at 
>