[jira] [Comment Edited] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-09 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121086#comment-16121086
 ] 

Andras Bokor edited comment on HADOOP-14729 at 8/10/17 5:40 AM:


[~ajayydv],

It will not compile because {{TestMapFileOutputFormat}} (and some other 
classes) miss the import of the {{@After}} annotation.
In addition, in your 2nd patch please use two spaces for indentation instead 
of tabs.
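For context, a minimal sketch of the JUnit 3 to JUnit 4 conversion this issue covers. {{TestExampleUpgrade}} is a hypothetical class, not one of the Hadoop tests; the {{org.junit.After}} import shown is the kind whose absence breaks compilation:

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical illustration of the JUnit 3 -> JUnit 4 upgrade pattern.
public class TestExampleUpgrade {            // no longer "extends TestCase"

    private StringBuilder buffer;

    @Before                                  // replaces overriding setUp()
    public void setUp() {
        buffer = new StringBuilder();
    }

    @Test                                    // replaces the "testXxx" naming convention
    public void testAppend() {
        buffer.append("abc");
        assertEquals("abc", buffer.toString());
    }

    @After                                   // replaces tearDown(); this annotation
    public void tearDown() {                 // needs the org.junit.After import
        buffer = null;
    }
}
```

Without the {{org.junit.After}} import, the {{@After}} annotation is an unresolved symbol and the class fails to compile, which matches the failure described above.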


was (Author: boky01):
[~ajayydv],

It will not compile because {{TestMapFileOutputFormat}} misses an import.
In your 2nd patch please use two spaces for indentation instead of tab.

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-09 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121086#comment-16121086
 ] 

Andras Bokor commented on HADOOP-14729:
---

[~ajayydv],

It will not compile because {{TestMapFileOutputFormat}} misses an import.
In your 2nd patch please use two spaces for indentation instead of tabs.

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.






[jira] [Updated] (HADOOP-14752) TestCopyFromLocal#testCopyFromLocalWithThreads is flaky

2017-08-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14752:
--
Status: Patch Available  (was: Open)

> TestCopyFromLocal#testCopyFromLocalWithThreads is flaky
> 
>
> Key: HADOOP-14752
> URL: https://issues.apache.org/jira/browse/HADOOP-14752
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14752.01.patch
>
>
> In the test, the number of threads is chosen with a random generator. When 
> the random number is 0 or 1, copyFromLocal is called with a single thread, 
> which means the executor is not used, so the number of completed tasks is 
> zero rather than the number of generated files.
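The failure mode described above can be modeled without Hadoop at all; the class and method names below are hypothetical stand-ins for the copyFromLocal threading logic, not the real implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FlakySketch {
    // Hypothetical model: with a single thread the executor is never used,
    // so its completed-task count stays 0 instead of matching the file count.
    static long completedTasks(int numThreads, int numFiles) throws Exception {
        if (numThreads < 2) {
            // single-threaded path: files copied inline, executor unused
            return 0;
        }
        ThreadPoolExecutor pool =
            (ThreadPoolExecutor) Executors.newFixedThreadPool(numThreads);
        for (int i = 0; i < numFiles; i++) {
            pool.submit(() -> { /* copy one file */ });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // accurate once the pool has terminated
        return pool.getCompletedTaskCount();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(completedTasks(1, 5)); // 0: an assertion against 5 would fail
        System.out.println(completedTasks(4, 5)); // 5
    }
}
```

A test that asserts {{completedTasks == numFiles}} therefore fails whenever the random thread count lands on 0 or 1, which is exactly the flakiness reported.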






[jira] [Updated] (HADOOP-14752) TestCopyFromLocal#testCopyFromLocalWithThreads is flaky

2017-08-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14752:
--
Attachment: HADOOP-14752.01.patch

Added some improvements in addition to the test fix:
* assertEquals was called with its parameters in the wrong order
* the NAME field is needed for displaying warnings
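The assertEquals ordering problem noted above can be illustrated with a small stand-in; JUnit itself is not required here, {{AssertOrderSketch}} merely mimics JUnit 4's failure-message format:

```java
public class AssertOrderSketch {
    // Simplified stand-in for JUnit's assertEquals(expected, actual):
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                "expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        int filesCopied = 3;
        try {
            // Wrong order: the message claims the test "expected 3" even
            // though the code actually expected 4, which misleads debugging.
            assertEquals(filesCopied, 4);
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // expected:<3> but was:<4>
        }
        assertEquals(4, filesCopied + 1); // correct order: expected first
    }
}
```

Swapped arguments never change whether a test passes, only how readable the failure message is, which is why this is cleanup rather than a functional fix.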

> TestCopyFromLocal#testCopyFromLocalWithThreads is flaky
> 
>
> Key: HADOOP-14752
> URL: https://issues.apache.org/jira/browse/HADOOP-14752
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14752.01.patch
>
>
> In the test, the number of threads is chosen with a random generator. When 
> the random number is 0 or 1, copyFromLocal is called with a single thread, 
> which means the executor is not used, so the number of completed tasks is 
> zero rather than the number of generated files.






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16121069#comment-16121069
 ] 

Mingliang Liu commented on HADOOP-14749:


+1. Nice work. Thanks [~ste...@apache.org].

Nits:
# I saw a few TODOs that do not have associated JIRA numbers. Should we file 
JIRAs and point to them?
# According to my experience at Amazon, {{DynamoDB}} and {{Dynamo}} are two 
different systems, though they share many core principles and design ideas. 
Should we replace all occurrences of {{dynamo}} in docs/comments with 
{{DynamoDB}}?
# In {{S3GuardTool}} L1130, {{code System.exit() on all exeuction paths.}} 
should be {{@code System.exit() on all exeuction paths.}} This raises a 
broader question: we no longer use javadoc to generate HTML docs (do we?), so 
perhaps we don't need HTML tags in javadoc that mostly serves as comments. I 
saw some usage of {{}}, for example.
# In the doc, should we also mention that sharing a DDB table amortizes the 
provisioning burden, in addition to being cost-effective?
# In the doc, there is a duplicated "uses" in the sentence {{+service uses uses 
the same authentication mechanisms as S3. S3Guard}}
# {{+### Delete a table: `s3guard  destroy`}} has a double space before destroy
# In the testing doc,
{quote}
... launch the server if it is not yet started; creating the table if it does 
not exist. 
{quote}
{{DynamoDBLocalClientFactory}} starts a new in-memory local server whose 
instance and data are not shared among tests. So it always starts a new 
server and creates a new table. This needs confirming.
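On the {{@code}} point above: in javadoc, plain text is rendered literally, so the inline tag form is needed for code references. A hedged sketch on a hypothetical class (illustrative only, not the actual {{S3GuardTool}} code):

```java
public class JavadocCodeTagSketch {
    /**
     * Hypothetical method illustrating the fix. Written as
     * "code System.exit()", javadoc renders the word "code" as plain text;
     * the inline tag {@code System.exit()} renders the call in code font
     * and protects it from HTML interpretation.
     *
     * @return an exit status (0 here, so the sketch is safe to run)
     */
    public static int run() {
        return 0; // the real method would call System.exit() on all paths
    }
}
```

Inline tags such as {{@code}} and {{@literal}} work whether or not HTML output is ever generated, so they cost nothing even if javadoc is only read as comments.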

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch, HADOOP-14749-HADOOP-13345-003.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Created] (HADOOP-14752) TestCopyFromLocal#testCopyFromLocalWithThreads is flaky

2017-08-09 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-14752:
-

 Summary: TestCopyFromLocal#testCopyFromLocalWithThreads is flaky
 Key: HADOOP-14752
 URL: https://issues.apache.org/jira/browse/HADOOP-14752
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Andras Bokor
Assignee: Andras Bokor


In the test, the number of threads is chosen with a random generator. When 
the random number is 0 or 1, copyFromLocal is called with a single thread, 
which means the executor is not used, so the number of completed tasks is 
zero rather than the number of generated files.






[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-09 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav updated HADOOP-14729:

Status: Patch Available  (was: Open)

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.






[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-09 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav updated HADOOP-14729:

Attachment: HADOOP-14729.001.patch

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Yadav
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.






[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120892#comment-16120892
 ] 

Hadoop QA commented on HADOOP-13835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
51s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m  
8s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
59s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
55s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-mapreduce-client-nativetask in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 57s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}265m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.8.0_131 Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
|   | hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-14183) Remove service loader config file for wasb fs

2017-08-09 Thread Esfandiar Manii (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120638#comment-16120638
 ] 

Esfandiar Manii commented on HADOOP-14183:
--

---
 T E S T S
---

Running org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.768 sec - in 
org.apache.hadoop.fs.azure.TestAzureConcurrentOutOfBandIo
Running 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorizationWithOwner
Tests run: 27, Failures: 0, Errors: 0, Skipped: 27, Time elapsed: 3.028 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAuthorizationWithOwner
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAtomicRenameDirList
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.996 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemAtomicRenameDirList
Running org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 0.107 sec - in 
org.apache.hadoop.fs.azure.TestShellDecryptionKeyProvider
Running org.apache.hadoop.fs.azure.TestWasbFsck
Tests run: 2, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.608 sec - in 
org.apache.hadoop.fs.azure.TestWasbFsck
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
Tests run: 43, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 1.282 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractMocked
Running org.apache.hadoop.fs.azure.TestContainerChecks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.104 sec - in 
org.apache.hadoop.fs.azure.TestContainerChecks
Running org.apache.hadoop.fs.azure.TestNativeAzureFSPageBlobLive
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 195.661 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFSPageBlobLive
Running org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.018 sec - in 
org.apache.hadoop.fs.azure.TestBlobOperationDescriptor
Running org.apache.hadoop.fs.azure.TestBlockBlobInputStream
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.495 sec - 
in org.apache.hadoop.fs.azure.TestBlockBlobInputStream
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.735 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemMocked
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemLive
Tests run: 51, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 210.089 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemLive
Running org.apache.hadoop.fs.azure.TestWasbUriAndConfiguration
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 7.997 sec - in 
org.apache.hadoop.fs.azure.TestWasbUriAndConfiguration
Running org.apache.hadoop.fs.azure.TestBlobDataValidation
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.15 sec - in 
org.apache.hadoop.fs.azure.TestBlobDataValidation
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.901 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemConcurrency
Running org.apache.hadoop.fs.azure.TestBlobMetadata
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.725 sec - in 
org.apache.hadoop.fs.azure.TestBlobMetadata
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.471 sec - in 
org.apache.hadoop.fs.azure.TestNativeAzureFileSystemClientLogging
Running 
org.apache.hadoop.fs.azure.TestFileSystemOperationsExceptionHandlingMultiThreaded
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.597 sec - 
in 
org.apache.hadoop.fs.azure.TestFileSystemOperationsExceptionHandlingMultiThreaded
Running org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 1.499 sec - 
in org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
Running org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractLive
Tests run: 43, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 35.392 sec - 
in org.apache.hadoop.fs.azure.TestNativeAzureFileSystemContractLive
Running org.apache.hadoop.fs.azure.contract.TestAzureNativeContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.083 sec - 
in org.apache.hadoop.fs.azure.contract.TestAzureNativeContractGetFileStatus
Running org.apache.hadoop.fs.azure.contract.TestAzureNativeContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time 

[jira] [Updated] (HADOOP-14183) Remove service loader config file for wasb fs

2017-08-09 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-14183:
-
Attachment: HADOOP-14183.001.patch

> Remove service loader config file for wasb fs
> -
>
> Key: HADOOP-14183
> URL: https://issues.apache.org/jira/browse/HADOOP-14183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Assignee: Esfandiar Manii
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-14183.001.patch
>
>
> Per discussion in HADOOP-14132. Remove the service loader config file 
> hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.wasb.impl}} to {{core-default.xml}}. 
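As a sketch, the {{core-default.xml}} entry could look like the following; the implementation class shown ({{NativeAzureFileSystem}}) is an assumption based on the usual wasb filesystem implementation, not a confirmed value from this patch:

```xml
<!-- Hypothetical core-default.xml entry replacing the service loader file;
     the value below is an assumption about the wasb implementation class. -->
<property>
  <name>fs.wasb.impl</name>
  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
  <description>The implementation class of the wasb:// filesystem.</description>
</property>
```

Declaring the class in configuration avoids scanning every filesystem's service loader file on startup, which is the motivation discussed in HADOOP-14132.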






[jira] [Assigned] (HADOOP-14183) Remove service loader config file for wasb fs

2017-08-09 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii reassigned HADOOP-14183:


Assignee: Esfandiar Manii

> Remove service loader config file for wasb fs
> -
>
> Key: HADOOP-14183
> URL: https://issues.apache.org/jira/browse/HADOOP-14183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.7.3
>Reporter: John Zhuge
>Assignee: Esfandiar Manii
>Priority: Minor
>  Labels: newbie
>
> Per discussion in HADOOP-14132. Remove the service loader config file 
> hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
>  and add property {{fs.wasb.impl}} to {{core-default.xml}}. 






[jira] [Assigned] (HADOOP-14748) Wasb input streams to implement CanUnbuffer

2017-08-09 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii reassigned HADOOP-14748:


Assignee: Esfandiar Manii

> Wasb input streams to implement CanUnbuffer
> ---
>
> Key: HADOOP-14748
> URL: https://issues.apache.org/jira/browse/HADOOP-14748
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Esfandiar Manii
>Priority: Minor
>
> HBase relies on FileSystems implementing CanUnbuffer.unbuffer() to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> WASB {{BlockBlobInputStream}} can implement this by closing the stream 
> in {{closeBlobInputStream}}, so it will be re-opened elsewhere.
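A minimal sketch of that idea, with {{CanUnbuffer}} modeled locally so the example is self-contained; the real interface lives in {{org.apache.hadoop.fs}}, and all names below are simplified stand-ins rather than the actual WASB code:

```java
// Local model of Hadoop's org.apache.hadoop.fs.CanUnbuffer, which declares
// a single unbuffer() method.
interface CanUnbuffer {
    void unbuffer();
}

public class BlockBlobInputStreamSketch implements CanUnbuffer {
    // Stand-in for the wrapped blob input stream and its remote connection.
    private Object blobInputStream = new Object();

    // Hypothetical sketch: unbuffer() closes the underlying blob stream so
    // the remote connection is freed; a later read() would reopen it.
    @Override
    public void unbuffer() {
        if (blobInputStream != null) {
            // the real class would call closeBlobInputStream() here
            blobInputStream = null;
        }
    }

    public boolean hasOpenConnection() {
        return blobInputStream != null;
    }

    public static void main(String[] args) {
        BlockBlobInputStreamSketch in = new BlockBlobInputStreamSketch();
        in.unbuffer();
        System.out.println(in.hasOpenConnection()); // false
    }
}
```

This is the pattern HBase depends on: long-lived region servers keep many files open but call unbuffer() between bursts of reads so idle streams do not pin remote connections.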






[jira] [Assigned] (HADOOP-14748) Wasb input streams to implement CanUnbuffer

2017-08-09 Thread Esfandiar Manii (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii reassigned HADOOP-14748:


Assignee: (was: Esfandiar Manii)

> Wasb input streams to implement CanUnbuffer
> ---
>
> Key: HADOOP-14748
> URL: https://issues.apache.org/jira/browse/HADOOP-14748
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> HBase relies on FileSystems implementing CanUnbuffer.unbuffer() to force 
> input streams to free up remote connections (HBASE-9393). This works for 
> HDFS, but not elsewhere.
> WASB {{BlockBlobInputStream}} can implement this by closing the stream 
> in {{closeBlobInputStream}}, so it will be re-opened elsewhere.






[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-09 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120633#comment-16120633
 ] 

Subru Krishnan commented on HADOOP-14741:
-

Thanks [~goiri] for contributing this. The latest patch (v5) LGTM.

[~zhz]/[~leftnoteasy]/[~drankye], do you guys want to take a quick look before 
I go ahead and commit this? 

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch, 
> HADOOP-14741-005.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.






[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-09 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120617#comment-16120617
 ] 

Íñigo Goiri commented on HADOOP-14741:
--

I ran the failed unit tests locally and they worked.
They didn't fail previously either,
so I guess the failures are spurious and this is good to go.
Comments?

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch, 
> HADOOP-14741-005.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.






[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13835:

Status: Patch Available  (was: Reopened)

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-13835:

Attachment: HADOOP-13835.branch-2.007.patch

Attached branch-2 patch (007).

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Reopened] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened HADOOP-13835:
-

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch, 
> HADOOP-13835.branch-2.007.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common

2017-08-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120579#comment-16120579
 ] 

Wangda Tan commented on HADOOP-13835:
-

[~ajisakaa], [~vvasudev], I think this patch should be backported to branch-2 
as well, since we have other patches that need gtest (for example YARN-6852 
and YARN-6033). Are there any concerns about doing this?

> Move Google Test Framework code from mapreduce to hadoop-common
> ---
>
> Key: HADOOP-13835
> URL: https://issues.apache.org/jira/browse/HADOOP-13835
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, 
> HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, 
> HADOOP-13835.006.patch, HADOOP-13835.007.patch
>
>
> The mapreduce project has Google Test Framework code to allow testing of 
> native libraries. This should be moved to hadoop-common so that other 
> projects can use it as well.






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120562#comment-16120562
 ] 

Aaron Fabbri commented on HADOOP-14749:
---

+1 on v3 patch.

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch, HADOOP-14749-HADOOP-13345-003.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-09 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120504#comment-16120504
 ] 

Andras Bokor commented on HADOOP-14698:
---

I'll address the JUnit failure; please hold on until I fix it.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason not to add the new feature to put as well.
> Being non-identical makes the command more complicated to understand and use 
> from the user's point of view.






[jira] [Commented] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120465#comment-16120465
 ] 

Hadoop QA commented on HADOOP-14741:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
35s{color} | {color:green} root: The patch generated 0 new + 340 unchanged - 3 
fixed = 340 total (was 343) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 348 unchanged - 4 fixed = 348 total (was 352) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 28s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | 

[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120335#comment-16120335
 ] 

Steve Loughran commented on HADOOP-14693:
-

Option 2, incremental.

It's not just the smaller changes: we know that some tests (and I'm thinking 
of all the FS contract tests) are used downstream to verify that filesystems 
are consistent with what Hadoop expects. We shouldn't break things unless we 
need to.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2017-08-09 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120328#comment-16120328
 ] 

Ajay Yadav commented on HADOOP-14693:
-

[~ajisakaa], [~andrew.wang], as I understand it, we can do this in two ways.
1. Update the JUnit dependency in hadoop-main to JUnit 5 with 
junit-jupiter-engine. This requires changing most of the test cases to move 
them to the new JUnit 5 API and platform (riskier).
2. Update the JUnit dependency in hadoop-main to JUnit 5 while keeping test 
cases built on JUnit 4 working via junit-vintage-engine. As a next step we can 
write new test cases against the JUnit 5 API and migrate the old ones to 
JUnit 5 in steps. This is an incremental change with less risk of breaking old 
test cases.
Any ideas or suggestions on this?
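For what it's worth, the incremental option can be sketched in Maven terms. The artifact coordinates below are the real JUnit 5 ones; the version numbers are illustrative placeholders, not a combination tested against Hadoop:

```xml
<!-- Sketch: run legacy JUnit 4 tests and new Jupiter tests side by side
     on the JUnit Platform. Versions are illustrative placeholders. -->
<dependencies>
  <!-- new tests are written against the JUnit 5 (Jupiter) API -->
  <dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.0.0</version>
    <scope>test</scope>
  </dependency>
  <!-- the vintage engine keeps existing JUnit 4 tests running unchanged
       during the incremental migration -->
  <dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <version>4.12.0</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```

With both engines on the test classpath, old org.junit.Test classes keep running while new classes adopt org.junit.jupiter.api.Test, so the suite can migrate package by package.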

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Commented] (HADOOP-14467) S3Guard: Improve FNFE message when opening a stream

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120327#comment-16120327
 ] 

Steve Loughran commented on HADOOP-14467:
-

Let's not worry about it for now.

> S3Guard: Improve FNFE message when opening a stream
> ---
>
> Key: HADOOP-14467
> URL: https://issues.apache.org/jira/browse/HADOOP-14467
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14467-HADOOP-13345.001.patch
>
>
> Following up on the [discussion on 
> HADOOP-13345|https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=16030050=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030050],
>  because S3Guard can serve getFileStatus() from the MetadataStore without 
> doing a HEAD on S3, a FileNotFound error on a file due to S3 GET 
> inconsistency does not happen on open(), but on the first read of the stream. 
>  We may add retries to the S3 client in the future, but for now we should 
> have an exception message that indicates this may be due to inconsistency 
> (assuming it isn't a more straightforward case like someone deleting the 
> object out from under you).
> This is expected to be a rare case, since the S3 service is now mostly 
> consistent for GET.
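The failure mode described above can be simulated with plain Java. This is a stdlib-only model under stated assumptions: GuardedStream and FnfeAtReadDemo are invented names, not the real S3AInputStream; the point is only that when open() is answered from local metadata, the FileNotFoundException surfaces on the first read().

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;

// Illustrative model: getFileStatus()/open() are answered from the
// metadata store, so a missing S3 object only surfaces when the first
// read() performs the lazy GET.
class GuardedStream extends InputStream {
    private final boolean objectExistsInS3;

    GuardedStream(boolean objectExistsInS3) {
        // "open()": no S3 HEAD/GET happens here, so it always succeeds
        this.objectExistsInS3 = objectExistsInS3;
    }

    @Override
    public int read() throws IOException {
        // the first read issues the GET; this is where the FNFE appears
        if (!objectExistsInS3) {
            throw new FileNotFoundException(
                "Object missing in S3 although listed in the metadata store"
                + " (possible S3 GET inconsistency, or deleted underneath us)");
        }
        return -1; // model an empty object: end of stream
    }
}

public class FnfeAtReadDemo {
    public static void main(String[] args) {
        GuardedStream in = new GuardedStream(false); // "open()" succeeds
        try {
            in.read();                               // FNFE raised here
        } catch (IOException e) {
            System.out.println("read failed: " + e.getMessage());
        }
    }
}
```

An improved exception message like the one above is the cheap fix; retries in the S3 client would be the deeper one.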






[jira] [Assigned] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-09 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav reassigned HADOOP-14729:
---

Assignee: Ajay Yadav

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Yadav
>  Labels: newbie
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.
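Since this issue is about the JUnit 3 to 4 upgrade mechanics, here is a sketch of what changes per test class. Class and method names are invented for illustration, and it assumes junit:junit 4.x on the test classpath, so it is illustrative rather than standalone-runnable: drop the junit.framework.TestCase superclass, annotate the lifecycle and test methods, and import each annotation used (a missing import such as org.junit.After is exactly the kind of thing that breaks compilation).

```java
// Before (JUnit 3): the runner finds methods by naming convention.
//
//   public class TestExample extends junit.framework.TestCase {
//     protected void setUp() { /* ... */ }
//     protected void tearDown() { /* ... */ }
//     public void testSomething() { assertTrue(true); }
//   }

// After (JUnit 4): plain class + annotations; every annotation used
// must also be imported, or the class will not compile.
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TestExample {
  @Before
  public void setUp() { /* former setUp() override */ }

  @After
  public void tearDown() { /* former tearDown() override */ }

  @Test
  public void testSomething() {
    assertTrue(true);
  }
}
```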






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120298#comment-16120298
 ] 

Hadoop QA commented on HADOOP-14749:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-13345 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 22 
new + 55 unchanged - 4 fixed = 77 total (was 59) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881045/HADOOP-14749-HADOOP-13345-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux dd0430918c9b 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / b4c2ab2 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12995/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12995/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12995/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12995/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> review s3guard docs & code prior to merge
> 

[jira] [Comment Edited] (HADOOP-14467) S3Guard: Improve FNFE message when opening a stream

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120168#comment-16120168
 ] 

Aaron Fabbri edited comment on HADOOP-14467 at 8/9/17 5:13 PM:
---

I didn't find a nice clean way to add a new exception message that I liked 
here.  At least now folks can google it.  I feel like we could make more 
improvements as part of HADOOP-14468:  we could report existence in S3 (if we 
checked), Metadata Store, etc.


was (Author: fabbri):
I didn't find a nice clean way to add a new exception message that I liked 
here.  At least now folks can google it.  I feel like we could make more 
improvements as part of HADOOP-14735:  we could report existence in S3 (if we 
checked), Metadata Store, etc.

> S3Guard: Improve FNFE message when opening a stream
> ---
>
> Key: HADOOP-14467
> URL: https://issues.apache.org/jira/browse/HADOOP-14467
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14467-HADOOP-13345.001.patch
>
>
> Following up on the [discussion on 
> HADOOP-13345|https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=16030050=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030050],
>  because S3Guard can serve getFileStatus() from the MetadataStore without 
> doing a HEAD on S3, a FileNotFound error on a file due to S3 GET 
> inconsistency does not happen on open(), but on the first read of the stream. 
>  We may add retries to the S3 client in the future, but for now we should 
> have an exception message that indicates this may be due to inconsistency 
> (assuming it isn't a more straightforward case like someone deleting the 
> object out from under you).
> This is expected to be a rare case, since the S3 service is now mostly 
> consistent for GET.






[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120273#comment-16120273
 ] 

Hadoop QA commented on HADOOP-14698:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 54 unchanged - 12 fixed = 54 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyFromLocal |
|   | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14698 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881039/HADOOP-14698.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 93d71fb88248 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 63cfcb9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12994/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12994/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12994/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12994/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-13998) Merge initial s3guard release into trunk

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120254#comment-16120254
 ] 

Steve Loughran commented on HADOOP-13998:
-

We are pretty much done here, down to those review-of-spelling nits. I'm about 
to do a merge of trunk into the s3guard branch again, as I can see things have 
diverged (if I mix builds, I get errors about commons-lang3 missing).

> Merge initial s3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk






[jira] [Updated] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Status: Patch Available  (was: Open)

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch, HADOOP-14749-HADOOP-13345-003.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Updated] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Attachment: HADOOP-14749-HADOOP-13345-003.patch

patch 003

Ewan's and Aaron's suggestions, plus using "S3Guard" in all comments and 
strings where appropriate.

Aaron, regarding "FileStatus' path" vs "FileStatus's path", I'm now worried 
that either I've got it wrong *or* it's one of those US/UK rule variants. 
I'll go with yours.

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch, HADOOP-14749-HADOOP-13345-003.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120251#comment-16120251
 ] 

Hadoop QA commented on HADOOP-14553:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 101 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m  4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m  0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 24s{color} | {color:green} root generated 0 new + 1372 unchanged - 2 fixed = 1372 total (was 1374) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  2m  5s{color} | {color:orange} root: The patch generated 154 new + 212 unchanged - 178 fixed = 366 total (was 390) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  1s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  8s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 47s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 45s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14553 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12881033/HADOOP-14553-010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  xml  |
| uname | Linux b9d3e1161573 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 63cfcb9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12992/artifact/patchprocess/diff-checkstyle-root.txt |
| whitespace | 

[jira] [Updated] (HADOOP-14735) ITestS3AEncryptionSSEC failing in parallel s3guard runs

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14735:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

Committed to the HADOOP-13345 branch. It should go into trunk too, where the 
problem exists (but doesn't surface); the forthcoming merge will do that.

> ITestS3AEncryptionSSEC failing in parallel s3guard runs
> ---
>
> Key: HADOOP-14735
> URL: https://issues.apache.org/jira/browse/HADOOP-14735
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14735-HADOOP-13345-001.patch
>
>
> in parallel test runs, {{ITestS3AEncryptionSSEC}} is failing (repeatedly) by 
> not throwing an exception when attempting to rename one file to another using 
> a different client key



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120177#comment-16120177
 ] 

Steve Loughran commented on HADOOP-14749:
-

+ feedback from [~ehiggs]
{code}
+  // with a metadata store, the object entries need tup be updated,
Grammar/spelling.
 
+   * This will always be non-null, but may be bound to the
If something will not be null, maybe use @NotNull. I don't see any uses of it 
yet in the Hadoop codebase, so maybe someone decided against using it.
 
+  if (status == DirectoryStatus.DOES_NOT_EXIST
+  || status == DirectoryStatus.EXISTS_AND_IS_DIRECTORY_ON_S3_ONLY) {
I think this indents the || one too many. checkstyle should pick it up.
 
+  // TODO s3guard: retry on file not found exception
Other places you are normalizing spelling to use capital S and capital G (even 
in comments) and the nature of this patch is nit fixes... :)
 
+   * Generally,  callers should use {@link #initialize(FileSystem)}
+   * with an initialized S3 file system.
 
A wise man once said “Object Stores are not File Systems”. So do we want “with 
an initialized {@link S3AFileSystem}”? Or “initialized S3 FileSystem”, so it 
includes S3 and S3N (which will be removed soon)?
 
+   * Without a filesystem to act as a reference point, the configuration itself
file system or filesystem. cf previous comment.
 
+ Errpr `"DynamoDB table TABLE does not exist in region REGION; 
auto-creation is turned off"`
Error (spelling).
 
+
+### Warning About Concurrent Tests
+
+You must not run S3A and S3N tests in parallel on the same bucket.  This is
+especially true when S3Guard is enabled.  S3Guard requires that all clients
+that are modifying the bucket have S3Guard enabled, so having S3N
+integration tests running in parallel with S3A tests will cause strange
+failures.
 
So if someone adds to the bucket using s3cmd in production, what will happen? 
This seems like a severe limitation that can affect ephemeral mounts for 
Provided Storage, where a purpose is to asynchronously replicate between S3 and HDFS.
 
+The two S3Guard scale testse are `ITestDynamoDBMetadataStoreScale` and
tests (spelling)
{code}
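On the indentation nit above: with the continuation line aligned under the condition rather than over-indented, the quoted check would look roughly like this (the enum and constant names come from the quoted diff; the surrounding class and method are invented for illustration):

```java
/** Illustrative only: mirrors the condition quoted in the review above. */
public class IndentExample {

  enum DirectoryStatus { DOES_NOT_EXIST, EXISTS_AND_IS_DIRECTORY_ON_S3_ONLY, OTHER }

  static boolean needsCreation(DirectoryStatus status) {
    // Continuation line indented once, with the boolean operator leading.
    if (status == DirectoryStatus.DOES_NOT_EXIST
        || status == DirectoryStatus.EXISTS_AND_IS_DIRECTORY_ON_S3_ONLY) {
      return true;
    }
    return false;
  }

  public static void main(String[] args) {
    System.out.println(needsCreation(DirectoryStatus.DOES_NOT_EXIST)); // prints true
  }
}
```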

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120172#comment-16120172
 ] 

Steve Loughran commented on HADOOP-14749:
-

Aaron: just seen your comments. Yes, the patch was out of date. And I have 
moved all s3guard testing into the "testing" doc, as everyone testing s3a needs 
to know about it, while general s3guard users don't.

I'll do a revised patch.

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Updated] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Attachment: HADOOP-14749-HADOOP-13345-002.patch

Patch 002; sync with s3guard after the various patch-pending patches went in. 
Essentially: less to review.

While looking at the diff, I'm now worried about the high-ascii chars in the 
illustration in {{TestDynamoDBMetadataStore.verifyRootDirectory()}}. It's a 
lovely diagram, and I had to look at it to see how it was done, which is with 
chars > 0x80. I don't know how well this works in different locales; I do know 
we can't use other high-ascii symbols, e.g. "—", without encoding to  (I 
say that, but a quick scan for "—" shows lots of uses in hadoop-aws, and I'm 
probably the guilty party). We should perhaps fix that.
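One way to audit text for such characters is a simple scan for code points above 0x7F. This is a hypothetical sketch, not part of any patch here; the class and method names are mine:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical helper: finds characters outside the ASCII range in text. */
public class NonAsciiScanner {

  /** Returns the 0-based indices of characters above 0x7F. */
  static List<Integer> nonAsciiPositions(String source) {
    List<Integer> positions = new ArrayList<>();
    for (int i = 0; i < source.length(); i++) {
      if (source.charAt(i) > 0x7F) {
        positions.add(i);
      }
    }
    return positions;
  }

  public static void main(String[] args) {
    // The em dash (U+2014) at index 8 is flagged; plain ASCII is not.
    System.out.println(nonAsciiPositions("cleanup \u2014 done")); // prints [8]
  }
}
```

Run over each source file in turn, such a check could catch high-ascii symbols before they reach a locale where they render badly.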

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch, 
> HADOOP-14749-HADOOP-13345-002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14467) S3Guard: Improve FNFE message when opening a stream

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120168#comment-16120168
 ] 

Aaron Fabbri commented on HADOOP-14467:
---

I didn't find a nice clean way to add a new exception message that I liked 
here.  At least now folks can google it.  I feel like we could make more 
improvements as part of HADOOP-14735:  we could report existence in S3 (if we 
checked), Metadata Store, etc.
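One shape such a richer message could take, reporting what was checked where, is sketched below. This is a hypothetical illustration only, not the committed code; the class name, method name, and wording are all mine:

```java
import java.io.FileNotFoundException;

/** Hypothetical sketch of a more searchable "file not found" message for S3Guard. */
public class S3GuardFnfe {

  /**
   * Builds an exception whose message hints that the miss may stem from
   * S3 read inconsistency rather than the object having been deleted.
   */
  static FileNotFoundException notFound(String path, boolean inMetadataStore) {
    StringBuilder msg = new StringBuilder("File not found: ").append(path);
    if (inMetadataStore) {
      msg.append(" (listed in the metadata store but missing from S3;")
         .append(" this may be S3 read inconsistency)");
    }
    return new FileNotFoundException(msg.toString());
  }

  public static void main(String[] args) {
    System.out.println(notFound("s3a://bucket/data.csv", true).getMessage());
  }
}
```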

> S3Guard: Improve FNFE message when opening a stream
> ---
>
> Key: HADOOP-14467
> URL: https://issues.apache.org/jira/browse/HADOOP-14467
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14467-HADOOP-13345.001.patch
>
>
> Following up on the [discussion on 
> HADOOP-13345|https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=16030050=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030050],
>  because S3Guard can serve getFileStatus() from the MetadataStore without 
> doing a HEAD on S3, a FileNotFound error on a file due to S3 GET 
> inconsistency does not happen on open(), but on the first read of the stream. 
>  We may add retries to the S3 client in the future, but for now we should 
> have an exception message that indicates this may be due to inconsistency 
> (assuming it isn't a more straightforward case like someone deleting the 
> object out from under you).
> This is expected to be a rare case, since the S3 service is now mostly 
> consistent for GET.






[jira] [Updated] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Status: Open  (was: Patch Available)

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-14467) S3Guard: Improve FNFE message when opening a stream

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120153#comment-16120153
 ] 

Steve Loughran commented on HADOOP-14467:
-

I should add: do we actually want a follow-up task here?

> S3Guard: Improve FNFE message when opening a stream
> ---
>
> Key: HADOOP-14467
> URL: https://issues.apache.org/jira/browse/HADOOP-14467
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14467-HADOOP-13345.001.patch
>
>
> Following up on the [discussion on 
> HADOOP-13345|https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=16030050=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030050],
>  because S3Guard can serve getFileStatus() from the MetadataStore without 
> doing a HEAD on S3, a FileNotFound error on a file due to S3 GET 
> inconsistency does not happen on open(), but on the first read of the stream. 
>  We may add retries to the S3 client in the future, but for now we should 
> have an exception message that indicates this may be due to inconsistency 
> (assuming it isn't a more straightforward case like someone deleting the 
> object out from under you).
> This is expected to be a rare case, since the S3 service is now mostly 
> consistent for GET.






[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14698:
--
Attachment: HADOOP-14698.06.patch

Thanks a lot [~msingh], it was a nice catch.
It pointed out that we have no test cases for copyFromLocal when the source 
comes from stdin, so I filed HADOOP-14751.

bq. should we modify the usage to eliminate "-t" option?
The usage does not mention the options at all, which is why I did not touch the 
usage of MoveFromLocal; I mentioned it in the description instead.
If you prefer to rewrite the usage, I can upload another patch.
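What a stdin-source test ultimately exercises is copying every byte from the input stream to the destination. A minimal, self-contained sketch of that mechanism (the helper and class names are mine, not the FsShell implementation):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Hypothetical sketch of what a stdin-source copy test would exercise. */
public class StdinCopy {

  /** Copies all bytes from in to out, returning the byte count. */
  static long copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[4096];
    long total = 0;
    int read;
    while ((read = in.read(buffer)) != -1) {
      out.write(buffer, 0, read);
      total += read;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    // Stand-in for System.in so the sketch is self-contained.
    InputStream stdin = new ByteArrayInputStream("hello\n".getBytes());
    ByteArrayOutputStream dest = new ByteArrayOutputStream();
    System.out.println(copy(stdin, dest)); // prints 6
  }
}
```

A real test would pipe known bytes into the shell command and assert on the destination file's contents and length.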

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Resolved] (HADOOP-14467) S3Guard: Improve FNFE message when opening a stream

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14467.
-
   Resolution: Fixed
Fix Version/s: HADOOP-13345

+1

committed. Thanks

> S3Guard: Improve FNFE message when opening a stream
> ---
>
> Key: HADOOP-14467
> URL: https://issues.apache.org/jira/browse/HADOOP-14467
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14467-HADOOP-13345.001.patch
>
>
> Following up on the [discussion on 
> HADOOP-13345|https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=16030050=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030050],
>  because S3Guard can serve getFileStatus() from the MetadataStore without 
> doing a HEAD on S3, a FileNotFound error on a file due to S3 GET 
> inconsistency does not happen on open(), but on the first read of the stream. 
>  We may add retries to the S3 client in the future, but for now we should 
> have an exception message that indicates this may be due to inconsistency 
> (assuming it isn't a more straightforward case like someone deleting the 
> object out from under you).
> This is expected to be a rare case, since the S3 service is now mostly 
> consistent for GET.






[jira] [Commented] (HADOOP-14735) ITestS3AEncryptionSSEC failing in parallel s3guard runs

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120140#comment-16120140
 ] 

Aaron Fabbri commented on HADOOP-14735:
---

+1 looks good to me.  Thanks for doing this [~ste...@apache.org]

> ITestS3AEncryptionSSEC failing in parallel s3guard runs
> ---
>
> Key: HADOOP-14735
> URL: https://issues.apache.org/jira/browse/HADOOP-14735
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14735-HADOOP-13345-001.patch
>
>
> in parallel test runs, {{ITestS3AEncryptionSSEC}} is failing (repeatedly) by 
> not throwing an exception when attempting to rename one file to another using 
> a different client key






[jira] [Comment Edited] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120065#comment-16120065
 ] 

Steve Loughran edited comment on HADOOP-14553 at 8/9/17 3:58 PM:
-

patch 010

* rebase onto trunk
* new test for explicit invocation, CleanupTestContainers, addressing the failure to 
clean up containers
* patched a couple of contract tests to overwrite the test files they 
created... otherwise, if interrupted, they fail the next time round


was (Author: ste...@apache.org):
patch 010

* rebase onto trunk
* new test for explicit invocation, CleanupTestContainers, address failure to 
cleanup containers

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch, HADOOP-14553-010.patch
>
>
> The Azure tests are slow to run as they are serialized, as they are all 
> called Test* there's no clear differentiation from unit tests which Jenkins 
> can run, and integration tests which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}}, parallelize 
> (which includes having separate paths for every test suite). The code in 
> hadoop-aws's POM  show what to do.
> *UPDATE August 4, 2017*:  Adding a list of requirements to clarify the 
> acceptance criteria for this JIRA:
> # Parallelize test execution
> # Define test groups: i) UnitTests - self-contained, executed by Jenkins, ii) 
> IntegrationTests - requires Azure Storage account, executed by engineers 
> prior to check-in, and if needed, iii) ScaleTests – long running performance 
> and scalability tests.
> # Define configuration profiles to run tests with different settings.  Allows 
> an engineer to run “IntegrationTests” with fs.azure.secure.mode = true and 
> false.  Need to review settings to see what else would benefit.
> # Maven commands to run b) and c).  Turns out it is not easy to do with 
> Maven, so we might have to run it multiple times to run with different 
> configuration settings.
> # Document how to add and run tests and the process for contributing to 
> Apache Hadoop.  Steve shared an example at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
>  
> # UnitTests should run in under 2 minutes and IntegrationTests should run in 
> under 15 minutes, even on slower network connections.  (These are rough goals)
> # Ensure test data (containers/blobs/etc) is deleted.  Exceptions for large 
> persistent content used repeatedly to expedite test execution. 






[jira] [Updated] (HADOOP-14741) Refactor curator based ZooKeeper communication into common library

2017-08-09 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HADOOP-14741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-14741:
-
Attachment: HADOOP-14741-005.patch

Fixed unit test and javadoc.

> Refactor curator based ZooKeeper communication into common library
> --
>
> Key: HADOOP-14741
> URL: https://issues.apache.org/jira/browse/HADOOP-14741
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Subru Krishnan
>Assignee: Íñigo Goiri
> Attachments: HADOOP-14741-000.patch, HADOOP-14741-001.patch, 
> HADOOP-14741-002.patch, HADOOP-14741-003.patch, HADOOP-14741-004.patch, 
> HADOOP-14741-005.patch
>
>
> Currently we have ZooKeeper based store implementations for multiple state 
> stores like RM, YARN Federation, HDFS router-based federation, RM queue 
> configs etc. This jira proposes to unify the curator based ZK communication 
> to eliminate redundancies.






[jira] [Comment Edited] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120131#comment-16120131
 ] 

Aaron Fabbri edited comment on HADOOP-14154 at 8/9/17 3:56 PM:
---

Hi [~ste...@apache.org].  I have a prototype patch sitting around somewhere 
which implements authoritative listings for DynamoDB.  The solution is more 
complex than the original description here implies, so we should either rename 
this jira or create a new one.

Whichever JIRA we use, it should go in a Phase II umbrella for post-merge work.


was (Author: fabbri):
Hi [~ste...@apache.org].  I have a prototype patch sitting around somewhere 
which implements authoritative listings for DynamoDB.  The solution is more 
complex than the original description here implies, so we should either rename 
this jira or create a new one.

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.






[jira] [Commented] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120131#comment-16120131
 ] 

Aaron Fabbri commented on HADOOP-14154:
---

Hi [~ste...@apache.org].  I have a prototype patch sitting around somewhere 
which implements authoritative listings for DynamoDB.  The solution is more 
complex than the original description here implies, so we should either rename 
this jira or create a new one.

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.






[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Status: Patch Available  (was: Open)

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch, HADOOP-14553-010.patch
>
>
> The Azure tests are slow to run as they are serialized, as they are all 
> called Test* there's no clear differentiation from unit tests which Jenkins 
> can run, and integration tests which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}}, parallelize 
> (which includes having separate paths for every test suite). The code in 
> hadoop-aws's POM  show what to do.
> *UPDATE August 4, 2017*:  Adding a list of requirements to clarify the 
> acceptance criteria for this JIRA:
> # Parallelize test execution
> # Define test groups: i) UnitTests - self-contained, executed by Jenkins, ii) 
> IntegrationTests - requires Azure Storage account, executed by engineers 
> prior to check-in, and if needed, iii) ScaleTests – long running performance 
> and scalability tests.
> # Define configuration profiles to run tests with different settings.  Allows 
> an engineer to run “IntegrationTests” with fs.azure.secure.mode = true and 
> false.  Need to review settings to see what else would benefit.
> # Maven commands to run b) and c).  Turns out it is not easy to do with 
> Maven, so we might have to run it multiple times to run with different 
> configuration settings.
> # Document how to add and run tests and the process for contributing to 
> Apache Hadoop.  Steve shared an example at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
>  
> # UnitTests should run in under 2 minutes and IntegrationTests should run in 
> under 15 minutes, even on slower network connections.  (These are rough goals)
> # Ensure test data (containers/blobs/etc) is deleted.  Exceptions for large 
> persistent content used repeatedly to expedite test execution. 






[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Status: Open  (was: Patch Available)

> Add (parallelized) integration tests to hadoop-azure
> 
>
> Key: HADOOP-14553
> URL: https://issues.apache.org/jira/browse/HADOOP-14553
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14553-001.patch, HADOOP-14553-002.patch, 
> HADOOP-14553-003.patch, HADOOP-14553-004.patch, HADOOP-14553-005.patch, 
> HADOOP-14553-006.patch, HADOOP-14553-007.patch, HADOOP-14553-008.patch, 
> HADOOP-14553-009.patch, HADOOP-14553-010.patch
>
>
> The Azure tests are slow to run as they are serialized, as they are all 
> called Test* there's no clear differentiation from unit tests which Jenkins 
> can run, and integration tests which it can't.
> Move the azure tests {{Test*}} to integration tests {{ITest*}}, parallelize 
> (which includes having separate paths for every test suite). The code in 
> hadoop-aws's POM  show what to do.
> *UPDATE August 4, 2017*:  Adding a list of requirements to clarify the 
> acceptance criteria for this JIRA:
> # Parallelize test execution
> # Define test groups: i) UnitTests - self-contained, executed by Jenkins, ii) 
> IntegrationTests - requires Azure Storage account, executed by engineers 
> prior to check-in, and if needed, iii) ScaleTests – long running performance 
> and scalability tests.
> # Define configuration profiles to run tests with different settings.  Allows 
> an engineer to run “IntegrationTests” with fs.azure.secure.mode = true and 
> false.  Need to review settings to see what else would benefit.
> # Maven commands to run b) and c).  Turns out it is not easy to do with 
> Maven, so we might have to run it multiple times to run with different 
> configuration settings.
> # Document how to add and run tests and the process for contributing to 
> Apache Hadoop.  Steve shared an example at 
> https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
>  
> # UnitTests should run in under 2 minutes and IntegrationTests should run in 
> under 15 minutes, even on slower network connections.  (These are rough goals)
> # Ensure test data (containers/blobs/etc) is deleted.  Exceptions for large 
> persistent content used repeatedly to expedite test execution. 
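The Test* / ITest* split above is purely name-based; a minimal plain-Java sketch of that classification (illustration only — the real split is driven by the surefire/failsafe includes in the POM, not by code like this) could look like:

```java
// Sketch of the Test*/ITest* naming convention described above.
// Illustration only: the actual grouping is done by Maven plugin
// include patterns, not at runtime.
public class TestNameClassifier {

    public enum Group { UNIT, INTEGRATION, OTHER }

    /** Classify a simple (unqualified) test class name by its prefix. */
    public static Group classify(String simpleClassName) {
        if (simpleClassName.startsWith("ITest")) {
            return Group.INTEGRATION; // needs an Azure Storage account
        }
        if (simpleClassName.startsWith("Test")) {
            return Group.UNIT;        // self-contained; Jenkins can run it
        }
        return Group.OTHER;           // e.g. abstract bases, helpers
    }
}
```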



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14751) Add tests for copyFromLocal when source is stdin

2017-08-09 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-14751:
-

 Summary: Add tests for copyFromLocal when source is stdin
 Key: HADOOP-14751
 URL: https://issues.apache.org/jira/browse/HADOOP-14751
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Andras Bokor
Assignee: Andras Bokor
Priority: Minor


Currently we do not test copyFromLocal when the source is given by stdin.






[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16120068#comment-16120068
 ] 

Steve Loughran commented on HADOOP-14553:
-

Test results, without scale tests: 13 minutes. Still slow, but better than before.
{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractRename
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractMkdir
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractCreate
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractSeek
Running 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractGetFileStatus
Running org.apache.hadoop.fs.azure.integration.ITestAzureHugeFiles
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractDistCp
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractAppend
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractOpen
Running org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractDelete
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 6.017 sec - in 
org.apache.hadoop.fs.azure.integration.ITestAzureHugeFiles
Running org.apache.hadoop.fs.azure.ITestAzureConcurrentOutOfBandIo
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 12.752 sec - in 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractDistCp
Running org.apache.hadoop.fs.azure.ITestAzureConcurrentOutOfBandIoWithSecureMode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.579 sec - in 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractOpen
Running org.apache.hadoop.fs.azure.ITestAzureFileSystemErrorConditions
Tests run: 7, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 23.806 sec - in 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractAppend
Running org.apache.hadoop.fs.azure.ITestBlobTypeSpeedDifference
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.035 sec - in 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractDelete
Running org.apache.hadoop.fs.azure.ITestContainerChecks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.99 sec - in 
org.apache.hadoop.fs.azure.ITestAzureConcurrentOutOfBandIo
Running org.apache.hadoop.fs.azure.ITestFileSystemOperationExceptionHandling
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.834 sec - in 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractRename
Running org.apache.hadoop.fs.azure.ITestFileSystemOperationExceptionMessage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.655 sec - in 
org.apache.hadoop.fs.azure.ITestFileSystemOperationExceptionMessage
Running org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemAppend
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.711 sec - in 
org.apache.hadoop.fs.azure.ITestContainerChecks
Running org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemAtomicRenameDirList
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.189 sec - in 
org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemAtomicRenameDirList
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.323 sec - in 
org.apache.hadoop.fs.azure.ITestBlobTypeSpeedDifference
Running org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemClientLogging
Running org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemContractEmulator
Tests run: 43, Failures: 0, Errors: 0, Skipped: 43, Time elapsed: 0.75 sec - in 
org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemContractEmulator
Running org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemContractLive
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.137 sec - in 
org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemClientLogging
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.386 sec - in 
org.apache.hadoop.fs.azure.ITestAzureConcurrentOutOfBandIoWithSecureMode
Running 
org.apache.hadoop.fs.azure.ITestNativeAzureFileSystemContractPageBlobLive
Running org.apache.hadoop.fs.azure.ITestNativeAzureFSAuthorizationCaching
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.955 sec - 
in org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractCreate
Running org.apache.hadoop.fs.azure.ITestReadAndSeekPageBlobAfterWrite
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 2.299 sec - in 
org.apache.hadoop.fs.azure.ITestReadAndSeekPageBlobAfterWrite
Running org.apache.hadoop.fs.azure.ITestWasbUriAndConfiguration
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.526 sec - in 
org.apache.hadoop.fs.azure.contract.ITestAzureNativeContractMkdir
Running org.apache.hadoop.fs.azure.metrics.ITestAzureFileSystemInstrumentation
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.83 sec - in 
{code}
[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Attachment: HADOOP-14553-010.patch

patch 010

* rebase onto trunk
* new test for explicit invocation, CleanupTestContainers, to address the 
failure to clean up containers







[jira] [Updated] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14553:

Status: Open  (was: Patch Available)







[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-09 Thread Lantao Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119992#comment-16119992
 ] 

Lantao Jin commented on HADOOP-14708:
-

Thanks [~jojochuang]. Maybe 
[HDFS-3745|https://issues.apache.org/jira/browse/HDFS-3745] could fix my issue 
as well, with this code:
{code}
-  /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
+  /** Same as getUGI(context, request, conf, KERBEROS, true). */
   public static UserGroupInformation getUGI(ServletContext context,
   HttpServletRequest request, Configuration conf) throws IOException {
-return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
true);
+return getUGI(context, request, conf, AuthenticationMethod.KERBEROS, true);
   }
{code}
So should we wait for HDFS-3745 to be resolved?

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK use FSCKServlet to submit RPC to NameNode, it use {{KERBEROS_SSL}} as 
> its {{AuthenticationMethod}} in {{JspHelper.java}}
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setup SaslConnection with server, KERBEROS_SSL will failed to create 
> SaslClient instance. See {{SaslRpcClient.java}}
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}
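The failure mode above can be reduced to a tiny enum sketch (assumption: a simplified mirror of Hadoop's {{UserGroupInformation.AuthenticationMethod}} mapping, where KERBEROS_SSL carries no underlying SASL {{AuthMethod}}; everything else is stripped away):

```java
// Simplified mirror of why KERBEROS_SSL cannot produce a Kerberos SASL
// client: its AuthenticationMethod maps to no underlying AuthMethod, so
// the equality check in createSaslClient() fails and null is returned.
public class SaslAuthSketch {

    enum AuthMethod { SIMPLE, KERBEROS, TOKEN }

    enum AuthenticationMethod {
        SIMPLE(AuthMethod.SIMPLE),
        KERBEROS(AuthMethod.KERBEROS),
        KERBEROS_SSL(null); // no SASL mechanism behind it

        private final AuthMethod authMethod;

        AuthenticationMethod(AuthMethod authMethod) {
            this.authMethod = authMethod;
        }

        AuthMethod getAuthMethod() {
            return authMethod;
        }
    }

    /** Mirrors the KERBEROS branch of SaslRpcClient.createSaslClient(). */
    static boolean canCreateKerberosSaslClient(AuthenticationMethod real) {
        return real.getAuthMethod() == AuthMethod.KERBEROS;
    }
}
```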






[jira] [Commented] (HADOOP-14553) Add (parallelized) integration tests to hadoop-azure

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119960#comment-16119960
 ] 

Steve Loughran commented on HADOOP-14553:
-

The next iteration of this will include a test class which must be explicitly 
invoked on the command line via a -Dtest=, which will list and delete all 
{{wasbtests-}} containers of an account. This can be used to clean up container 
leakage:
{code}
2017-08-09 15:08:54,335 INFO  [JUnit-testDeleteContainers]: 
azure.AbstractWasbTestBase 
(CleanupTestContainers.java:testDeleteContainers(79)) - Container 
wasbtests-stevel-1501782115769 URI 
http://contender.blob.core.windows.net/wasbtests-stevel-1501782115769
2017-08-09 15:08:54,390 INFO  [JUnit-testDeleteContainers]: 
azure.AbstractWasbTestBase 
(CleanupTestContainers.java:testDeleteContainers(79)) - Container 
wasbtests-stevel-1501782117324 URI 
http://contender.blob.core.windows.net/wasbtests-stevel-1501782117324
2017-08-09 15:08:54,444 INFO  [JUnit-testDeleteContainers]: 
azure.AbstractWasbTestBase 
(CleanupTestContainers.java:testDeleteContainers(79)) - Container 
wasbtests-stevel-1501782149411 URI 
http://contender.blob.core.windows.net/wasbtests-stevel-1501782149411
2017-08-09 15:08:54,497 INFO  [JUnit-testDeleteContainers]: 
azure.AbstractWasbTestBase 
(CleanupTestContainers.java:testDeleteContainers(86)) - Deleted 2436 test 
containers
{code}
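The selection step of that cleanup — pick out every container whose name starts with {{wasbtests-}} — is easy to sketch (hypothetical helper operating on plain strings; the actual CleanupTestContainers test iterates live Azure containers via the storage SDK):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper showing the selection step of the cleanup described
// above: keep only container names carrying the wasbtests- test prefix.
public class ContainerCleanupSketch {

    static final String TEST_CONTAINER_PREFIX = "wasbtests-";

    /** Return the subset of container names that are test containers. */
    static List<String> selectTestContainers(List<String> containerNames) {
        List<String> doomed = new ArrayList<>();
        for (String name : containerNames) {
            if (name.startsWith(TEST_CONTAINER_PREFIX)) {
                doomed.add(name);
            }
        }
        return doomed;
    }
}
```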







[jira] [Commented] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119954#comment-16119954
 ] 

Steve Loughran commented on HADOOP-14154:
-

Where are we with this? It sounds like the patch as it stands isn't something 
to consider, at least not yet / not in its present form.

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119955#comment-16119955
 ] 

Aaron Fabbri commented on HADOOP-14749:
---

Thanks for the patch [~ste...@apache.org].  This is good stuff.

{noformat}
   /**
-   * Should not be called by clients.  Only used so {@link org.apache.hadoop
-   * .fs.s3a.s3guard.MetadataStore} can maintain this flag when caching
-   * FileStatuses on behalf of s3a.
+   * Should not be called by clients.  Only used so {@code MetadataStore}
+   * can maintain this flag when caching FileStatuses on behalf of s3a.
* @param value for directories: TRUE / FALSE if known empty/not-empty,
*  UNKNOWN otherwise
*/
{noformat}

Actually, can we remove {{setIsEmptyDirectory()}} now?  IIRC this is not used 
since I reworked the empty directory handling logic.

{noformat}
+  // with a metadata store, the object entries need tup be updated,
+  // including, potentially, the ancestors
{noformat}

/tup/to/

{noformat}
+  /**
+   * Determine the directory status of a path, going via any
+   * MetadataStore before checking S3.
+   * @param path path to check
+   * @return the determined status
+   * @throws IOException IO failure other than FileNotFoundException
+   */
   private DirectoryStatus checkPathForDirectory(Path path) throws
   IOException {
{noformat}

I thought HADOOP-14505 eliminated checkPathForDirectory()?  I had suggested 
that just using getFileStatus() would be more efficient and less code.

{noformat}
+// metadata listing is authoritative, so return it directory
{noformat}

/directory/directly/ ?

{noformat}
-// If FileStatus' path is missing host, but should have one, add it.
+// If FileStatus's path is missing host, but should have one, add it.
{noformat}
Either is correct, BTW.

{noformat}
-assertQualified(srcRoot);
-assertQualified(srcPath);
-assertQualified(dstPath);
+assertQualified(srcRoot, srcPath, dstPath);
{noformat}
Nice.

{noformat}
+ Errpr `"DynamoDB table TABLE does not exist in region REGION; 
auto-creation is turned off"`
{noformat}
/Errpr/Error/

The docs changes look good, but the diff became a bit hard to follow.  Looks 
like you moved some stuff to the testing doc, which is fine.


> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Comment Edited] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces

2017-08-09 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119883#comment-16119883
 ] 

Hongyuan Li edited comment on HADOOP-13743 at 8/9/17 1:49 PM:
--

Digging into the log4j source code: the message format is implemented with a 
{{StringBuilder}} to format error messages internally.


was (Author: hongyuan li):
dig into the log4j source code, the message format uses the stringbuilder to 
format error messages internally.

> error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials 
> has too many spaces
> 
>
> Key: HADOOP-13743
> URL: https://issues.apache.org/jira/browse/HADOOP-13743
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13743-branch-2-001.patch, 
> HADOOP-14373-branch-2-002.patch
>
>
> The error message on a failed hadoop fs -ls command against an unauthed azure 
> container has an extra space in {{" them  in"}}
> {code}
> ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container 
> demo in account example.blob.core.windows.net using anonymous credentials, 
> and no credentials found for them  in the configuration.
> {code}






[jira] [Updated] (HADOOP-14505) simplify mkdirs() after S3Guard delete tracking change

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14505:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

+1

Reran all tests; apart from the ongoing SSE-C test failure, all is well. 
Committed & pushed up.

Thanks

> simplify mkdirs() after S3Guard delete tracking change
> --
>
> Key: HADOOP-14505
> URL: https://issues.apache.org/jira/browse/HADOOP-14505
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14505-HADOOP-13345.001.patch
>
>
> I noticed after reviewing the S3Guard delete tracking changes for 
> HADOOP-13760, that mkdirs() can probably be simplified, replacing the use of 
> checkPathForDirectory() with a simple getFileStatus().
> Creating a separate JIRA so these changes can be reviewed / tested in 
> isolation.






[jira] [Updated] (HADOOP-14633) S3Guard: optimize create codepath

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14633:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> S3Guard: optimize create codepath
> -
>
> Key: HADOOP-14633
> URL: https://issues.apache.org/jira/browse/HADOOP-14633
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
> Environment: 
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14633-HADOOP-13345.001.patch, 
> HADOOP-14633-HADOOP-13345.002.patch, HADOOP-14633-HADOOP-13345.003.patch
>
>
> Following up on HADOOP-14457, a couple of things to do that will improve 
> create performance as I mentioned in the comment 
> [here|https://issues.apache.org/jira/browse/HADOOP-14457?focusedCommentId=16078465=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16078465]






[jira] [Commented] (HADOOP-14633) S3Guard: optimize create codepath

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119887#comment-16119887
 ] 

Steve Loughran commented on HADOOP-14633:
-

+1 pending completion of a local full scale test run (in progress)







[jira] [Commented] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces

2017-08-09 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119883#comment-16119883
 ] 

Hongyuan Li commented on HADOOP-13743:
--

Digging into the log4j source code: the message format uses a 
{{StringBuilder}} internally to format error messages.







[jira] [Commented] (HADOOP-14748) Wasb input streams to implement CanUnbuffer

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119870#comment-16119870
 ] 

Steve Loughran commented on HADOOP-14748:
-

Note that HBase running on WASB complains a lot about streams not having this 
feature... it now expects them to support it.

> Wasb input streams to implement CanUnbuffer
> ---
>
> Key: HADOOP-14748
> URL: https://issues.apache.org/jira/browse/HADOOP-14748
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Priority: Minor
>
> HBase relies on FileSystems implementing CanUnbuffer.unbuffer() to force 
> input streams to free up remote connections (HBASE-9393Link). This works for 
> HDFS, but not elsewhere.
> WASB {{BlockBlobInputStream}} can implement this by closing the stream 
> in {{closeBlobInputStream}}, so it will be re-opened elsewhere.
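The pattern described above, closing the wrapped stream on unbuffer() so the connection is freed and reopening lazily on the next read, can be sketched as below. This is an illustration only, not the WASB code: the class name, the {{reopen}} supplier, and the reseek logic are invented for the example (real Hadoop streams would implement {{org.apache.hadoop.fs.CanUnbuffer}}).

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.function.Supplier;

// Illustrative sketch of the unbuffer pattern: closing the inner stream
// frees the underlying connection; the next read reopens and reseeks.
// All names here are invented for the example, not WASB code.
class UnbufferableStream {
  private final Supplier<InputStream> reopen; // how to re-acquire the source
  private InputStream inner;                  // null while unbuffered
  private long pos;                           // bytes consumed so far

  UnbufferableStream(Supplier<InputStream> reopen) {
    this.reopen = reopen;
  }

  /** Close the wrapped stream (freeing its connection); keep our position. */
  void unbuffer() {
    if (inner != null) {
      try {
        inner.close();
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
      inner = null;
    }
  }

  /** Read one byte, transparently reopening after an unbuffer(). */
  int read() {
    try {
      if (inner == null) {
        inner = reopen.get();
        long toSkip = pos;                  // restore the old position
        while (toSkip > 0) {
          long skipped = inner.skip(toSkip);
          if (skipped <= 0) {
            break;                          // skip() may under-deliver
          }
          toSkip -= skipped;
        }
      }
      int b = inner.read();
      if (b >= 0) {
        pos++;
      }
      return b;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

A reader can call unbuffer() between reads and still observe a continuous byte sequence, which is exactly what HBase expects from the FileSystem streams it holds open.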






[jira] [Commented] (HADOOP-14691) Shell command "hadoop fs -put" multiple close problem

2017-08-09 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119833#comment-16119833
 ] 

Andras Bokor commented on HADOOP-14691:
---

[~jzhuge],

I think we should eliminate the multiple close calls instead of handling them.
{code:title=CommandWithDestination#writeStreamToFile}
try {
out = create(target, lazyPersist, direct);
IOUtils.copyBytes(in, out, getConf(), true); // 1st close call
  } finally {
IOUtils.closeStream(out); // second close call
  }
{code}
I believe the original author assumed that after calling {{IOUtils.copyBytes}} 
with {{close=true}}, {{out}} would be null, so the second close call would have no effect.
One possible solution is to call {{IOUtils.copyBytes}} with false and close the 
stream with try-with-resources:
{code}
void writeStreamToFile(InputStream in, PathData target,
boolean lazyPersist, boolean direct)
throws IOException {
  try (FSDataOutputStream out = create(target, lazyPersist, direct)) {
IOUtils.copyBytes(in, out, getConf(), false);
  }
}
{code}
But I agree with the reporter that {{IOUtils.copyBytes}} is misleading and 
should be changed.
If we want to fix this specific double call I suggest going with the code a few 
lines above.

But I still suggest fixing HADOOP-5943, since there are more double calls in the 
code, which shows the current API is misleading.
Please check {{FileContext.Util#copy}} for example.
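The double close and the proposed fix can be demonstrated with a stand-in stream that counts its close() calls. {{CountingStream}} and the two methods below are invented for this sketch; their bodies only mimic the control flow of {{writeStreamToFile}}, they are not Hadoop code.

```java
import java.io.OutputStream;

// Demonstrates why the current pattern closes the stream twice while the
// proposed try-with-resources pattern closes it exactly once. The class
// and method names are invented; the bodies only mimic the control flow.
public class DoubleCloseDemo {

  /** Stand-in for FSDataOutputStream that counts close() calls. */
  static class CountingStream extends OutputStream {
    int closes = 0;
    @Override public void write(int b) { /* discard */ }
    @Override public void close() { closes++; }
  }

  /** Current flow: copyBytes(..., true) closes, then closeStream closes again. */
  static int currentPattern() {
    CountingStream out = new CountingStream();
    try {
      out.close();   // stands in for IOUtils.copyBytes(in, out, conf, true)
    } finally {
      out.close();   // stands in for IOUtils.closeStream(out)
    }
    return out.closes;
  }

  /** Proposed flow: copyBytes(..., false) inside try-with-resources. */
  static int fixedPattern() {
    CountingStream out = new CountingStream();
    try (CountingStream o = out) {
      // IOUtils.copyBytes(in, o, conf, false) would run here without closing
    }
    return out.closes;
  }
}
```

With a real FSDataOutputStream the second close is what risks hitting an already shut-down socket; the counting stand-in just makes the call count visible.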

> Shell command "hadoop fs -put" multiple close problem
> -
>
> Key: HADOOP-14691
> URL: https://issues.apache.org/jira/browse/HADOOP-14691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
> Environment: CentOS7.0
> JDK1.8.0_121
> hadoop2.7.3
>Reporter: Eric Lei
>Assignee: Eric Lei
>  Labels: close, filesystem, hadoop, multi
> Attachments: CommandWithDestination.patch, 
> hadoop_common_unit_test_result_after_modification.docx, 
> hadoop_common_unit_test_result_before_modification.docx, IOUtils.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> 1. Bug description
> Shell command “Hadoop fs -put” is a write operation. In this process, an 
> FSDataOutputStream is newly created and closed at the end. Finally, 
> FSDataOutputStream.close() calls the close method in HDFS to end the 
> communication of this write process between the server and client.
> With the command “Hadoop fs -put”, for each created FSDataOutputStream 
> object, FSDataOutputStream.close() is called twice, which means the close 
> method, in the underlying distributed file system, is called twice. This is 
> an error, because the communication process, for example a socket, might be 
> repeatedly shut down. Unfortunately, if there is no error protection for the 
> socket, the second close might fail. 
> Further, we think a correct upper file system design should keep a one-time 
> close principle: each creation of an underlying distributed file system 
> object should correspond with exactly one close. 
> For the command “Hadoop fs -put”, there are double closes as follows:
> a.The first close process:
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
> at 
> org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at 

[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-09 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119776#comment-16119776
 ] 

Mukul Kumar Singh commented on HADOOP-14698:


Hi [~boky01], Thanks for the latest patch.

1) CopyCommands.java:292, we are missing the following lines in 
{{processArguments}} in the new implementation.
{code}
  // NOTE: this logic should be better, mimics previous implementation
  if (args.size() == 1 && args.get(0).toString().equals("-")) {
copyStreamToTarget(System.in, getTargetPath(args.get(0)));
return;
  }
{code}

2) MoveCommands.java, should we modify the usage to eliminate the "-t" option?
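The special case in the quoted snippet boils down to one check: a single "-" argument means "copy stdin to the target" and short-circuits normal path processing. The helper below is invented for illustration and is not the CopyCommands implementation.

```java
import java.util.List;

// Illustrative helper mirroring the quoted check: a single "-" argument
// short-circuits normal path processing and copies stdin to the target.
// The class and method names are invented for this sketch.
public class DashArgCheck {
  /** True only for exactly one argument that is the literal "-". */
  static boolean isStdinCopy(List<String> args) {
    return args.size() == 1 && "-".equals(args.get(0));
  }
}
```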

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of command more complicated 
> from user point of view.






[jira] [Commented] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119772#comment-16119772
 ] 

Hadoop QA commented on HADOOP-14749:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-14749 does not apply to HADOOP-13345. Rebase required? 
Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880921/HADOOP-14749-HADOOP-13345-001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12990/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Commented] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces

2017-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119769#comment-16119769
 ] 

Hadoop QA commented on HADOOP-13743:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13743 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13743 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864488/HADOOP-14373-branch-2-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12991/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials 
> has too many spaces
> 
>
> Key: HADOOP-13743
> URL: https://issues.apache.org/jira/browse/HADOOP-13743
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13743-branch-2-001.patch, 
> HADOOP-14373-branch-2-002.patch
>
>
> The error message on a failed hadoop fs -ls command against an unauthed azure 
> container has an extra space in {{" them  in"}}
> {code}
> ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container 
> demo in account example.blob.core.windows.net using anonymous credentials, 
> and no credentials found for them  in the configuration.
> {code}






[jira] [Updated] (HADOOP-14733) ITestS3GuardConcurrentOps failing with -Ddynamodblocal -Ds3guard

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14733:

   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

Thanks, committed

could you also take a quick look @ HADOOP-14735; that's the other test failing 
for me right now, and again, something we need to fix before the merge

> ITestS3GuardConcurrentOps failing with -Ddynamodblocal -Ds3guard
> 
>
> Key: HADOOP-14733
> URL: https://issues.apache.org/jira/browse/HADOOP-14733
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14733-HADOOP-13345-001.patch
>
>
> Test failure with local ddb server for s3guard
> {code}
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 128.876 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps
> testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps)
>   Time elapsed: 128.785 sec  <<< ERROR!
> com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Cannot do 
> operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 
> 400; Error Code: ResourceNotFoundException; Request ID: 
> 82dbf479-3ec1-40fa-bd5c-ca0f206685e7)
> {code}






[jira] [Commented] (HADOOP-13743) error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials has too many spaces

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119724#comment-16119724
 ] 

Steve Loughran commented on HADOOP-13743:
-

StringBuilder is way too much overkill; String.format() would work. But since 
all I'm doing here is removing one space, I don't think the effort is justified.

In other projects I've played with exceptions taking an Object... args varargs 
and doing the format internally, e.g. {{new MyException("Could not connect to 
host %s", hostname)}}; the need to handle a nested exception complicates this a 
bit. Whatever SLF4J does should really be picked up. But again: overkill for 
this minor text cleanup.
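The varargs-formatting exception described above could look roughly like this. {{FormattedException}} is an invented name for illustration, not an API from Hadoop or SLF4J; the second constructor carries the nested cause mentioned in the comment.

```java
// Sketch of an exception that formats its message from varargs, with an
// optional nested cause. The class name is invented for this example.
public class FormattedException extends RuntimeException {
  public FormattedException(String fmt, Object... args) {
    super(String.format(fmt, args));
  }

  public FormattedException(Throwable cause, String fmt, Object... args) {
    super(String.format(fmt, args), cause);
  }
}
```

Usage would be e.g. {{throw new FormattedException("Could not connect to host %s", hostname);}}, keeping the formatting cost out of every call site.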

> error message in AzureNativeFileSystemStore.connectUsingAnonymousCredentials 
> has too many spaces
> 
>
> Key: HADOOP-13743
> URL: https://issues.apache.org/jira/browse/HADOOP-13743
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Trivial
> Attachments: HADOOP-13743-branch-2-001.patch, 
> HADOOP-14373-branch-2-002.patch
>
>
> The error message on a failed hadoop fs -ls command against an unauthed azure 
> container has an extra space in {{" them  in"}}
> {code}
> ls: org.apache.hadoop.fs.azure.AzureException: Unable to access container 
> demo in account example.blob.core.windows.net using anonymous credentials, 
> and no credentials found for them  in the configuration.
> {code}






[jira] [Commented] (HADOOP-14598) Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory

2017-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119641#comment-16119641
 ] 

Steve Loughran commented on HADOOP-14598:
-

thanks for committing it. It's a funny one. FWIW the class cast wasn't 
downstream; it was in an external library. We will have to assume this is 
commonplace across the Java codebase.

> Blacklist Http/HttpsFileSystem in FsUrlStreamHandlerFactory
> ---
>
> Key: HADOOP-14598
> URL: https://issues.apache.org/jira/browse/HADOOP-14598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14598-002.patch, HADOOP-14598-003.patch, 
> HADOOP-14598-004.patch, HADOOP-14598-005.patch
>
>
> my downstream-of-spark cloud integration tests (where I haven't been running 
> the azure ones for a while) now have a few of the tests failing
> {code}
>  org.apache.hadoop.fs.azure.AzureException: 
> com.microsoft.azure.storage.StorageException: 
> org.apache.hadoop.fs.FsUrlConnection cannot be cast to 
> java.net.HttpURLConnection
> {code}
> No obvious cause, and it's only apparently happening in some of the 
> (scalatest) tests.






[jira] [Updated] (HADOOP-14606) S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward seek

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14606:

Summary: S3AInputStream: Handle http stream skip(n) skipping < n bytes in a 
forward seek  (was: S3AInputStream: Handle skip(n) skipping < n bytes in a 
forward seek)

> S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward 
> seek
> ---
>
> Key: HADOOP-14606
> URL: https://issues.apache.org/jira/browse/HADOOP-14606
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>
> There's some hints in the InputStream docs that {{skip(n)}} may skip < n 
> bytes. Codepaths only seem to do this if read() returns -1, meaning end of 
> stream is reached.
> If that happens when doing a forward seek via skip, then we have got our 
> numbers wrong and are in trouble. Look for a negative response, log @ ERROR 
> and revert to a close/reopen seek to an absolute position.
> *I have no evidence of this actually occurring*
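A defensive forward seek along the lines described could loop on skip(n), since skip may legitimately advance fewer than n bytes, and signal the caller to fall back when progress stops. This is a sketch of the idea only, with invented names, not the S3AInputStream code.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Sketch: skip(n) may legitimately advance fewer than n bytes, so loop
// until done, and break when no progress is made (likely end of stream).
// A caller seeing a short result would log at ERROR and fall back to a
// close/reopen seek to the absolute position. Names are invented.
public class ForwardSeek {

  /** Returns bytes actually skipped; a result < n signals the fallback. */
  static long forwardSeek(InputStream in, long n) {
    long remaining = n;
    try {
      while (remaining > 0) {
        long skipped = in.skip(remaining);
        if (skipped <= 0) {
          break;  // no progress: caller should revert to close/reopen
        }
        remaining -= skipped;
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return n - remaining;
  }
}
```

Real code would propagate the IOException instead of wrapping it; the unchecked wrapper just keeps this sketch self-contained.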






[jira] [Updated] (HADOOP-14606) S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward seek

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14606:

Description: 
There's some hints in the InputStream docs that {{skip(n)}} may skip < n 
bytes. Codepaths only seem to do this if read() returns -1, meaning end of 
stream is reached.
If that happens when doing a forward seek via skip, then we have got our 
numbers wrong and are in trouble. Look for a negative response, log @ ERROR 
and revert to a close/reopen seek to an absolute position.
*I have no evidence of this actually occurring*

> S3AInputStream: Handle http stream skip(n) skipping < n bytes in a forward 
> seek
> ---
>
> Key: HADOOP-14606
> URL: https://issues.apache.org/jira/browse/HADOOP-14606
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>
> There's some hints in the InputStream docs that {{skip(n)}} may skip < n 
> bytes. Codepaths only seem to do this if read() returns -1, meaning end of 
> stream is reached.
> If that happens when doing a forward seek via skip, then we have got our 
> numbers wrong and are in trouble. Look for a negative response, log @ ERROR 
> and revert to a close/reopen seek to an absolute position.
> *I have no evidence of this actually occurring*






[jira] [Updated] (HADOOP-14749) review s3guard docs & code prior to merge

2017-08-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14749:

Status: Patch Available  (was: In Progress)

> review s3guard docs & code prior to merge
> -
>
> Key: HADOOP-14749
> URL: https://issues.apache.org/jira/browse/HADOOP-14749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14749-HADOOP-13345-001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Pre-merge cleanup while it's still easy to do
> * Read through all the docs, tune
> * Diff the trunk/branch files to see if we can reduce the delta (and hence 
> the changes)
> * Review the new tests






[jira] [Comment Edited] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119477#comment-16119477
 ] 

Wei-Chiu Chuang edited comment on HADOOP-14708 at 8/9/17 6:33 AM:
--

[~cltlfcjin]
bq. But KERBEROS_SSL is also kerberos, right?
Based on jira HDFS-3745 (unresolved), KERBEROS_SSL was meant to be SPNEGO. 
Probably some leftover relic.


was (Author: jojochuang):
[~cltlfcjin]
bq. But KERBEROS_SSL is also kerberos, right?
Based on jira HDFS-3745 (unresolved), KERBEROS_SSL is meant to be SPNEGO.

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode; it uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up a SASL connection with the server, KERBEROS_SSL will 
> fail to create a SaslClient instance. See {{SaslRpcClient.java}}
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-09 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119477#comment-16119477
 ] 

Wei-Chiu Chuang commented on HADOOP-14708:
--

[~cltlfcjin]
bq. But KERBEROS_SSL is also kerberos, right?
Based on jira HDFS-3745 (unresolved), KERBEROS_SSL is meant to be SPNEGO.

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode; it uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up a SASL connection with the server, KERBEROS_SSL will 
> fail to create a SaslClient instance. See {{SaslRpcClient.java}}
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Commented] (HADOOP-14708) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-09 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16119473#comment-16119473
 ] 

John Zhuge commented on HADOOP-14708:
-

Sure, if it describes your fix well. The JIRA summary usually starts with a 
problem description, then is changed to describe the fix.

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HADOOP-14708
> URL: https://issues.apache.org/jira/browse/HADOOP-14708
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Attachments: FSCK-2.log, FSCK.log, HADOOP-14708.001.patch
>
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode; it uses 
> {{KERBEROS_SSL}} as its {{AuthenticationMethod}} in {{JspHelper.java}}
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up a SASL connection with the server, KERBEROS_SSL will 
> fail to create a SaslClient instance. See {{SaslRpcClient.java}}
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}


