[jira] [Commented] (HADOOP-14153) ADL module has messed doc structure

2017-03-09 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904638#comment-15904638
 ] 

John Zhuge commented on HADOOP-14153:
-

+1. Verified the patched doc. I like it; much cleaner.

> ADL module has messed doc structure
> ---
>
> Key: HADOOP-14153
> URL: https://issues.apache.org/jira/browse/HADOOP-14153
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>  Labels: documentation
> Attachments: HADOOP-14153.000.patch, HADOOP-14153.001.patch, Screen 
> Shot 2017-03-09 at 11.28.27 AM.png
>
>
> RT



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14038) Rename ADLS credential properties

2017-03-09 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14038:

Status: Patch Available  (was: In Progress)

> Rename ADLS credential properties
> -
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch, HADOOP-14038.004.patch, HADOOP-14038.005.patch
>
>
> Add ADLS credential configuration properties to {{core-default.xml}}. 
> Set/document the default value for 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> do not set {{dfs.adls.oauth2.access.token.provider.type}}.






[jira] [Updated] (HADOOP-14038) Rename ADLS credential properties

2017-03-09 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14038:

Attachment: HADOOP-14038.005.patch

Patch 005
* Rename properties with prefix {{dfs.adls.}} to {{fs.adl.}}
* Rename {{adl.dfs.enable.client.latency.tracker}} to 
{{adl.enable.client.latency.tracker}}
* Rename {{dfs.adl.test.contract.enable}} to {{fs.adl.test.contract.enable}}
* Update doc index.md
* Remove the useless test class {{TestValidateConfiguration}}

Testing done
* Passed live unit tests with mixed old and new properties in auth-keys.xml
* Verified doc

Follow-up JIRA
* Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} 
to {{ClientCredential}}
* Add properties with default values to core-default.xml
* Remove unused {{TOKEN_PROVIDER_TYPE_CLIENT_CRED}} and 
{{ADL_EVENTS_TRACKING_SOURCE}}

[~vishwajeet.dusane], [~eddyxu], please take a look at this patch. It should be 
much cleaner than 004.
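For readers tracking the renames, the mapping listed above can be sketched as a 
small migration helper (a hypothetical class and method, not part of the patch; 
shown only to make the old-to-new key mapping concrete):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch of the property renames in patch 005.
 * Hypothetical helper; the real patch renames the keys in the ADL module.
 */
public class AdlPropertyMigration {

    /** Returns the renamed key, or the input unchanged if no rename applies. */
    public static String migrate(String key) {
        // General prefix rename: dfs.adls.* -> fs.adl.*
        if (key.startsWith("dfs.adls.")) {
            return "fs.adl." + key.substring("dfs.adls.".length());
        }
        // Specific renames called out in the patch notes.
        Map<String, String> specific = new LinkedHashMap<>();
        specific.put("adl.dfs.enable.client.latency.tracker",
                     "adl.enable.client.latency.tracker");
        specific.put("dfs.adl.test.contract.enable",
                     "fs.adl.test.contract.enable");
        return specific.getOrDefault(key, key);
    }
}
```

For example, {{migrate("dfs.adls.oauth2.access.token.provider.type")}} yields 
{{fs.adl.oauth2.access.token.provider.type}}.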

> Rename ADLS credential properties
> -
>
> Key: HADOOP-14038
> URL: https://issues.apache.org/jira/browse/HADOOP-14038
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14038.001.patch, HADOOP-14038.002.patch, 
> HADOOP-14038.003.patch, HADOOP-14038.004.patch, HADOOP-14038.005.patch
>
>
> Add ADLS credential configuration properties to {{core-default.xml}}. 
> Set/document the default value for 
> {{dfs.adls.oauth2.access.token.provider.type}} to {{ClientCredential}}.
> Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type 
> is {{Custom}}.
> Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but 
> do not set {{dfs.adls.oauth2.access.token.provider.type}}.






[jira] [Updated] (HADOOP-13914) s3guard: improve S3AFileStatus#isEmptyDirectory handling

2017-03-09 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13914:
--
Attachment: HADOOP-13914-HADOOP-13345.008.patch

Attaching v8 patch. Same as v7, except for the addition of the findbugs exclude 
snippet from [~mackrorysd].


> s3guard: improve S3AFileStatus#isEmptyDirectory handling
> 
>
> Key: HADOOP-13914
> URL: https://issues.apache.org/jira/browse/HADOOP-13914
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13914-HADOOP-13345.000.patch, 
> HADOOP-13914-HADOOP-13345.002.patch, HADOOP-13914-HADOOP-13345.003.patch, 
> HADOOP-13914-HADOOP-13345.004.patch, HADOOP-13914-HADOOP-13345.005.patch, 
> HADOOP-13914-HADOOP-13345.006.patch, HADOOP-13914-HADOOP-13345.007.patch, 
> HADOOP-13914-HADOOP-13345.008.patch, s3guard-empty-dirs.md, 
> test-only-HADOOP-13914.patch
>
>
> As discussed in HADOOP-13449, proper support for the isEmptyDirectory() flag 
> stored in S3AFileStatus is missing from DynamoDBMetadataStore.
> The approach taken by LocalMetadataStore is not suitable for the DynamoDB 
> implementation, and also sacrifices good code separation to minimize 
> S3AFileSystem changes pre-merge to trunk.
> I will attach a design doc that attempts to clearly explain the problem and 
> preferred solution.  I suggest we do this work after merging the HADOOP-13345 
> branch to trunk, but am open to suggestions.
> I can also attach a patch with an integration test that exercises the missing 
> case and demonstrates a failure with DynamoDBMetadataStore.






[jira] [Resolved] (HADOOP-14147) Offline Image Viewer bug

2017-03-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HADOOP-14147.

Resolution: Duplicate

> Offline Image Viewer  bug
> -
>
> Key: HADOOP-14147
> URL: https://issues.apache.org/jira/browse/HADOOP-14147
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: gehaijiang
>
> $ hdfs oiv -p Delimited  -i fsimage_13752447421 -o fsimage.xml
> 17/03/04 08:40:22 INFO offlineImageViewer.FSImageHandler: Loading 757 strings
> 17/03/04 08:40:22 INFO offlineImageViewer.PBImageTextWriter: Loading 
> directories
> 17/03/04 08:40:22 INFO offlineImageViewer.PBImageTextWriter: Loading 
> directories in INode section.
> 17/03/04 08:41:59 INFO offlineImageViewer.PBImageTextWriter: Found 4374109 
> directories in INode section.
> 17/03/04 08:41:59 INFO offlineImageViewer.PBImageTextWriter: Finished loading 
> directories in 96798ms
> 17/03/04 08:41:59 INFO offlineImageViewer.PBImageTextWriter: Loading INode 
> directory section.
> Exception in thread "main" java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:129)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.PBImageTextWriter.buildNamespace(PBImageTextWriter.java:570)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.PBImageTextWriter.loadINodeDirSection(PBImageTextWriter.java:522)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.PBImageTextWriter.visit(PBImageTextWriter.java:460)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.PBImageDelimitedTextWriter.visit(PBImageDelimitedTextWriter.java:46)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:182)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:124)






[jira] [Reopened] (HADOOP-14147) Offline Image Viewer bug

2017-03-09 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reopened HADOOP-14147:


> Offline Image Viewer  bug
> -
>
> Key: HADOOP-14147
> URL: https://issues.apache.org/jira/browse/HADOOP-14147
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: gehaijiang
>






[jira] [Resolved] (HADOOP-14147) Offline Image Viewer bug

2017-03-09 Thread gehaijiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gehaijiang resolved HADOOP-14147.
-
Resolution: Fixed

> Offline Image Viewer  bug
> -
>
> Key: HADOOP-14147
> URL: https://issues.apache.org/jira/browse/HADOOP-14147
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: gehaijiang
>






[jira] [Commented] (HADOOP-14147) Offline Image Viewer bug

2017-03-09 Thread gehaijiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904449#comment-15904449
 ] 

gehaijiang commented on HADOOP-14147:
-

thanks 

> Offline Image Viewer  bug
> -
>
> Key: HADOOP-14147
> URL: https://issues.apache.org/jira/browse/HADOOP-14147
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: gehaijiang
>






[jira] [Updated] (HADOOP-14123) Remove misplaced ADL service provider config file for FileSystem

2017-03-09 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14123:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, branch-2.8, and branch-2.8.0, same as 
HADOOP-13037.

Thanks [~ste...@apache.org], [~vishwajeet.dusane], and [~eddyxu] for the review.

> Remove misplaced ADL service provider config file for FileSystem
> 
>
> Key: HADOOP-14123
> URL: https://issues.apache.org/jira/browse/HADOOP-14123
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14123.001.patch, HADOOP-14123.002.patch
>
>
> Per discussion in HADOOP-14132, do not attempt to move the service provider 
> config file to the right path. Remove it to speed up the load time for Hadoop 
> client code.
> Leave the property {{fs.adl.impl}} in core-default.xml.






[jira] [Updated] (HADOOP-13037) Refactor Azure Data Lake Store as an independent FileSystem

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13037:
---
Release Note: Hadoop now supports integration with Azure Data Lake as an 
alternative Hadoop-compatible file system. Please refer to the Hadoop site 
documentation of Azure Data Lake for details on usage and configuration.

> Refactor Azure Data Lake Store as an independent FileSystem
> ---
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13037-001.patch, HADOOP-13037-002.patch, 
> HADOOP-13037-003.patch, HADOOP-13037-004.patch, HADOOP-13037.005.patch, 
> HADOOP-13037.006.patch, HADOOP-13037 Proposal.pdf
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a WebHDFS REST 
> interface. The client will access the ADLS store using the WebHDFS REST APIs 
> provided by the ADLS store. 






[jira] [Commented] (HADOOP-14123) Remove misplaced ADL service provider config file for FileSystem

2017-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904307#comment-15904307
 ] 

Hudson commented on HADOOP-14123:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11384 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11384/])
HADOOP-14123. Remove misplaced ADL service provider config file for (jzhuge: 
rev c5ee7fded46dcb1ac1ea4c1ada4949c50bc89afb)
* (delete) 
hadoop-tools/hadoop-azure-datalake/src/main/resources/META-INF/org.apache.hadoop.fs.FileSystem


> Remove misplaced ADL service provider config file for FileSystem
> 
>
> Key: HADOOP-14123
> URL: https://issues.apache.org/jira/browse/HADOOP-14123
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14123.001.patch, HADOOP-14123.002.patch
>
>
> Per discussion in HADOOP-14132, do not attempt to move the service provider 
> config file to the right path. Remove it to speed up the load time for Hadoop 
> client code.
> Leave the property {{fs.adl.impl}} in core-default.xml.






[jira] [Commented] (HADOOP-14170) FileSystemContractBaseTest is not cleaning up test directory clearly

2017-03-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904301#comment-15904301
 ] 

Mingliang Liu commented on HADOOP-14170:


One simple solution is to also clean up this directory in the {{tearDown()}} 
method, though this is not ideal.

Another approach is to use a method-specific sub-directory for each test case, 
so that the tests don't interfere with each other. But we still need to clean up 
both {{/test}} and {{/user/bob/test}} after the whole test.

I think a good solution is to always use {{path("test")}} for normal tests, 
except for tests against the root (e.g. {{testRootDirAlwaysExists()}}), in which 
case we use {{path("/test")}} instead. This makes it clear whether each test 
case targets the root or not. I'll upload a patch for this solution.

> FileSystemContractBaseTest is not cleaning up test directory clearly
> 
>
> Key: HADOOP-14170
> URL: https://issues.apache.org/jira/browse/HADOOP-14170
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> In {{FileSystemContractBaseTest::tearDown()}} method, it cleans up the 
> {{path("/test")}} directory, which will be qualified as {{/test}} (against 
> root instead of working directory because it's absolute):
> {code}
>   @Override
>   protected void tearDown() throws Exception {
> try {
>   if (fs != null) {
> fs.delete(path("/test"), true);
>   }
> } catch (IOException e) {
>   LOG.error("Error deleting /test: " + e, e);
> }
>   }
> {code}
> But in the test, it uses {{path("test")}} sometimes, which will be made 
> qualified against the working directory (e.g. {{/user/bob/test}}).
> This makes some tests fail intermittently, e.g. 
> {{ITestS3AFileSystemContract}}. Also see the discussion in [HADOOP-13934].






[jira] [Created] (HADOOP-14170) FileSystemContractBaseTest is not cleaning up test directory clearly

2017-03-09 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14170:
--

 Summary: FileSystemContractBaseTest is not cleaning up test 
directory clearly
 Key: HADOOP-14170
 URL: https://issues.apache.org/jira/browse/HADOOP-14170
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Mingliang Liu
Assignee: Mingliang Liu


In {{FileSystemContractBaseTest::tearDown()}} method, it cleans up the 
{{path("/test")}} directory, which will be qualified as {{/test}} (against root 
instead of working directory because it's absolute):
{code}
  @Override
  protected void tearDown() throws Exception {
try {
  if (fs != null) {
fs.delete(path("/test"), true);
  }
} catch (IOException e) {
  LOG.error("Error deleting /test: " + e, e);
}
  }
{code}
But in the test, it uses {{path("test")}} sometimes, which will be made 
qualified against the working directory (e.g. {{/user/bob/test}}).

This makes some tests fail intermittently, e.g. {{ITestS3AFileSystemContract}}. 
Also see the discussion in [HADOOP-13934].






[jira] [Updated] (HADOOP-10642) Provide option to limit heap memory consumed by dynamic metrics2 metrics

2017-03-09 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10642:

Description: 
User sunweiei provided the following jmap output in HBase 0.96 deployment:

{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}
Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.

This scenario would arise when a large number of regions is tracked through 
metrics2 dynamically.
The Interns class doesn't provide an API to remove entries in its internal map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java

  was:
User sunweiei provided the following jmap output in HBase 0.96 deployment:
{code}
 num #instances #bytes  class name
--
   1:  14917882 3396492464  [C
   2:   1996994 2118021808  [B
   3:  43341650 1733666000  java.util.LinkedHashMap$Entry
   4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
   5:  14446577  924580928  
org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
{code}
Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
metrics2/lib/MetricsRegistry.java.

This scenario would arise when large number of regions are tracked through 
metrics2 dynamically.
Interns class doesn't provide API to remove entries in its internal Map.

One solution is to provide an option that allows skipping calls to 
Interns.info() in metrics2/lib/MetricsRegistry.java
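The growth pattern described above can be sketched with a minimal interning 
cache (a hypothetical class, not the actual {{Interns}} code): every distinct 
metric name pins a cache entry forever, so one entry per dynamically created 
region metric accumulates in the heap.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of an interning cache with no eviction or removal API.
 * Hypothetical stand-in for the metrics2 Interns cache described above.
 */
public class UnboundedInternCache {
    private static final Map<String, String> CACHE = new HashMap<>();

    /** Interns a metric name; entries are never evicted. */
    public static String info(String name) {
        return CACHE.computeIfAbsent(name, n -> n);
    }

    /** Number of entries currently pinned in the cache. */
    public static int size() {
        return CACHE.size();
    }
}
```

With one dynamic metric per region, tracking N regions leaves N entries pinned; 
skipping the interning call (as the proposed option would) avoids that 
accumulation at the cost of extra short-lived allocations.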


> Provide option to limit heap memory consumed by dynamic metrics2 metrics
> 
>
> Key: HADOOP-10642
> URL: https://issues.apache.org/jira/browse/HADOOP-10642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Ted Yu
>
> User sunweiei provided the following jmap output in HBase 0.96 deployment:
> {code}
>  num #instances #bytes  class name
> --
>1:  14917882 3396492464  [C
>2:   1996994 2118021808  [B
>3:  43341650 1733666000  java.util.LinkedHashMap$Entry
>4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
>5:  14446577  924580928  
> org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
> {code}
> Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
> due to calls to Interns.info() in DynamicMetricsRegistry which was cloned off 
> metrics2/lib/MetricsRegistry.java.
> This scenario would arise when a large number of regions is tracked through 
> metrics2 dynamically.
> The Interns class doesn't provide an API to remove entries in its internal map.
> One solution is to provide an option that allows skipping calls to 
> Interns.info() in metrics2/lib/MetricsRegistry.java






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904225#comment-15904225
 ] 

Andrew Wang commented on HADOOP-14104:
--

Hi Yongjun,

bq. When the target is the remote cluster, could we fail to find Codec info or 
wrong codec info because we don't have remote cluster's configuration?

The FileEncryptionInfo has a cipher type field. If the client doesn't support 
the cipher, then it can't read/write the file, and will throw an exception. If 
it does support that cipher, it uses its local config to determine the correct 
codec implementation to use (i.e. java or native).

From a safety point of view, we're okay to use the local config. If the client 
is too old and doesn't understand a new cipher type, it'll abort. Supporting a 
new cipher necessarily requires upgrading the client (and potentially also 
installing native libraries), so I think this behavior is okay.
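That decision can be sketched roughly as follows (hypothetical names and types; 
the real logic lives in the HDFS client and its codec factory):

```java
import java.util.Set;

/**
 * Sketch of the client-side codec decision described above: the cipher
 * comes from the file's encryption info, while the implementation choice
 * (native vs pure java) comes from the client's local config/environment.
 * Hypothetical class and method names, shown for illustration only.
 */
public class CodecSelection {
    /** Ciphers this (hypothetical) client version understands. */
    static final Set<String> SUPPORTED = Set.of("AES/CTR/NoPadding");

    /** Returns a codec implementation name, or throws if unsupported. */
    public static String selectCodec(String fileCipher, boolean nativeAvailable) {
        if (!SUPPORTED.contains(fileCipher)) {
            // Old client + new cipher type: abort rather than misread data.
            throw new IllegalArgumentException("Unsupported cipher: " + fileCipher);
        }
        // Local config decides the implementation for a supported cipher.
        return nativeAvailable ? "native" : "pure-java";
    }
}
```

The key safety property is the first branch: an unsupported cipher fails fast 
instead of silently picking a wrong codec.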



> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to current implementation of kms provider in client conf, there can 
> only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters, it will only get a KMS token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Created] (HADOOP-14169) Implement listStatusIterator, listLocatedStatus for ViewFs

2017-03-09 Thread Erik Krogen (JIRA)
Erik Krogen created HADOOP-14169:


 Summary: Implement listStatusIterator, listLocatedStatus for ViewFs
 Key: HADOOP-14169
 URL: https://issues.apache.org/jira/browse/HADOOP-14169
 Project: Hadoop Common
  Issue Type: Improvement
  Components: viewfs
Reporter: Erik Krogen
Assignee: Erik Krogen
Priority: Minor


Similar to what HADOOP-11812 did for ViewFileSystem, ViewFs currently does not 
pick up optimized {{listStatusIterator}} or {{listLocatedStatus}} 
implementations, instead using the naive defaults within 
{{AbstractFileSystem}}. This can cause performance issues when iterating over a 
large directory, especially if block locations are also needed. This can be 
fixed easily.
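The gap can be sketched with toy interfaces (hypothetical, not the real 
{{AbstractFileSystem}} API): the naive default materializes the whole listing 
up front, while a delegating view forwards to the mount target's own 
incremental iterator.

```java
import java.util.Arrays;
import java.util.Iterator;

/**
 * Toy sketch of naive vs delegating directory listing for a view filesystem.
 * Hypothetical interfaces and method names, for illustration only.
 */
public class ViewListing {
    interface TargetFs {
        Iterator<String> listStatusIterator(String path); // incremental listing
        String[] listStatus(String path);                 // full bulk listing
    }

    /** Naive default: one bulk call, then iterate the array in memory. */
    static Iterator<String> naiveIterator(TargetFs fs, String path) {
        return Arrays.asList(fs.listStatus(path)).iterator();
    }

    /** Optimized view: resolve the mount point and delegate directly. */
    static Iterator<String> delegatingIterator(TargetFs fs, String path) {
        return fs.listStatusIterator(path);
    }
}
```

For a huge directory, the naive path pays the full-listing cost (and memory) 
before the first element is returned; the delegating path lets the target 
filesystem stream results.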






[jira] [Commented] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904203#comment-15904203
 ] 

Akira Ajisaka commented on HADOOP-14156:


LGTM, +1.

> Grammar error in the ConfTest.java
> --
>
> Key: HADOOP-14156
> URL: https://issues.apache.org/jira/browse/HADOOP-14156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andrey Dyatlov
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}
> bq. does not defined
> should be replaced by
> bq. is not defined
> PR: https://github.com/apache/hadoop/pull/187/






[jira] [Commented] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904095#comment-15904095
 ] 

Hadoop QA commented on HADOOP-14168:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 39s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 37s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 39s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 17s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 42s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 15s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14168 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857115/HADOOP-14168-HADOOP-13345.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 54de6558522a 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-13345 / b968fb3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/11797/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11797/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11797/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>

[jira] [Commented] (HADOOP-13914) s3guard: improve S3AFileStatus#isEmptyDirectory handling

2017-03-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904094#comment-15904094
 ] 

Mingliang Liu commented on HADOOP-13914:


Mirroring Sean's comments, and +1 (again).

> s3guard: improve S3AFileStatus#isEmptyDirectory handling
> 
>
> Key: HADOOP-13914
> URL: https://issues.apache.org/jira/browse/HADOOP-13914
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13914-HADOOP-13345.000.patch, 
> HADOOP-13914-HADOOP-13345.002.patch, HADOOP-13914-HADOOP-13345.003.patch, 
> HADOOP-13914-HADOOP-13345.004.patch, HADOOP-13914-HADOOP-13345.005.patch, 
> HADOOP-13914-HADOOP-13345.006.patch, HADOOP-13914-HADOOP-13345.007.patch, 
> s3guard-empty-dirs.md, test-only-HADOOP-13914.patch
>
>
> As discussed in HADOOP-13449, proper support for the isEmptyDirectory() flag 
> stored in S3AFileStatus is missing from DynamoDBMetadataStore.
> The approach taken by LocalMetadataStore is not suitable for the DynamoDB 
> implementation, and also sacrifices good code separation to minimize 
> S3AFileSystem changes pre-merge to trunk.
> I will attach a design doc that attempts to clearly explain the problem and 
> preferred solution.  I suggest we do this work after merging the HADOOP-13345 
> branch to trunk, but am open to suggestions.
> I can also attach a patch of an integration test that exercises the missing 
> case and demonstrates a failure with DynamoDBMetadataStore.






[jira] [Resolved] (HADOOP-13037) Refactor Azure Data Lake Store as an independent FileSystem

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HADOOP-13037.

  Resolution: Fixed
Target Version/s: 2.8.0  (was: 2.9.0)

Committed through branch-2.8.0

> Refactor Azure Data Lake Store as an independent FileSystem
> ---
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13037-001.patch, HADOOP-13037-002.patch, 
> HADOOP-13037-003.patch, HADOOP-13037-004.patch, HADOOP-13037.005.patch, 
> HADOOP-13037.006.patch, HADOOP-13037 Proposal.pdf
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS REST 
> interface. The client will access the ADLS store using the WebHDFS REST APIs 
> provided by the ADLS store. 






[jira] [Updated] (HADOOP-14049) Honour AclBit flag associated to file/folder permission for Azure datalake account

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14049:
---
Fix Version/s: 2.8.0

> Honour AclBit flag associated to file/folder permission for Azure datalake 
> account
> --
>
> Key: HADOOP-14049
> URL: https://issues.apache.org/jira/browse/HADOOP-14049
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14049-01.patch, HADOOP-14049.02.patch
>
>
> ADLS persists AclBit information on a file/folder. Since Java SDK 2.1.4, the 
> AclBit value can be retrieved using {{DirectoryEntry.aclBit}}.






[jira] [Updated] (HADOOP-13037) Refactor Azure Data Lake Store as an independent FileSystem

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13037:
---
Fix Version/s: (was: 2.9.0)
   2.8.0

> Refactor Azure Data Lake Store as an independent FileSystem
> ---
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13037-001.patch, HADOOP-13037-002.patch, 
> HADOOP-13037-003.patch, HADOOP-13037-004.patch, HADOOP-13037.005.patch, 
> HADOOP-13037.006.patch, HADOOP-13037 Proposal.pdf
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS REST 
> interface. The client will access the ADLS store using the WebHDFS REST APIs 
> provided by the ADLS store. 






[jira] [Updated] (HADOOP-14017) User friendly name for ADLS user and group

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14017:
---
Fix Version/s: 2.8.0

> User friendly name for ADLS user and group
> --
>
> Key: HADOOP-14017
> URL: https://issues.apache.org/jira/browse/HADOOP-14017
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-14017.01.patch, HADOOP-14017.02.patch, 
> HADOOP-14017.03.patch
>
>
> ADLS displays a GUID whenever a user or group is displayed, e.g., in {{ls}} and 
> {{getfacl}}.
> ADLS requires a GUID whenever user or group input is needed, e.g., in {{setfacl}} 
> and {{chown}}.






[jira] [Updated] (HADOOP-13257) Improve Azure Data Lake contract tests.

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13257:
---
Fix Version/s: 2.8.0

> Improve Azure Data Lake contract tests.
> ---
>
> Key: HADOOP-13257
> URL: https://issues.apache.org/jira/browse/HADOOP-13257
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Reporter: Chris Nauroth
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13257.001.patch, HADOOP-13257.002.patch
>
>
> HADOOP-12875 provided the initial implementation of the FileSystem contract 
> tests covering Azure Data Lake.  This issue tracks subsequent improvements on 
> those test suites for improved coverage and matching the specified semantics 
> more closely.






[jira] [Updated] (HADOOP-13962) Update ADLS SDK to 2.1.4

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13962:
---
Fix Version/s: 2.8.0

> Update ADLS SDK to 2.1.4
> 
>
> Key: HADOOP-13962
> URL: https://issues.apache.org/jira/browse/HADOOP-13962
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13962.001.patch
>
>
> The ADLS SDK has had multiple releases since the version 2.0.11 we are using: 
> 2.1.1, 2.1.2, and 2.1.4. Change list: 
> https://github.com/Azure/azure-data-lake-store-java/blob/master/CHANGES.md.






[jira] [Updated] (HADOOP-13956) Read ADLS credentials from Credential Provider

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13956:
---
Fix Version/s: 2.8.0

> Read ADLS credentials from Credential Provider
> --
>
> Key: HADOOP-13956
> URL: https://issues.apache.org/jira/browse/HADOOP-13956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13956.001.patch, HADOOP-13956.002.patch, 
> HADOOP-13956.003.patch, HADOOP-13956.004.patch, HADOOP-13956.005.patch, 
> HADOOP-13956.006.patch
>
>
> Read ADLS credentials using Hadoop CredentialProvider API. See 
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html.






[jira] [Updated] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13929:
---
Fix Version/s: 2.8.0

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.






[jira] [Updated] (HADOOP-13900) Remove snapshot version of SDK dependency from Azure Data Lake Store File System

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13900:
---
Fix Version/s: 2.8.0

> Remove snapshot version of SDK dependency from Azure Data Lake Store File 
> System
> 
>
> Key: HADOOP-13900
> URL: https://issues.apache.org/jira/browse/HADOOP-13900
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11240-001.patch, HDFS-11240-002.patch
>
>
> The Azure Data Lake Store SDK that the Azure Data Lake Store File System 
> depends on has been released, so there is no further need for a snapshot 
> version dependency. This JIRA moves the SDK snapshot dependency to the 
> released SDK. There is no functional change in the SDK and no impact to the 
> live contract tests. 






[jira] [Updated] (HADOOP-13037) Refactor Azure Data Lake Store as an independent FileSystem

2017-03-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13037:
---
Fix Version/s: 2.9.0

> Refactor Azure Data Lake Store as an independent FileSystem
> ---
>
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Reporter: Shrikant Naidu
>Assignee: Vishwajeet Dusane
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13037-001.patch, HADOOP-13037-002.patch, 
> HADOOP-13037-003.patch, HADOOP-13037-004.patch, HADOOP-13037.005.patch, 
> HADOOP-13037.006.patch, HADOOP-13037 Proposal.pdf
>
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs 
> dependencies from the ADL file system client and build out a standalone 
> client. At a high level, this approach would extend the Hadoop file system 
> class to provide an implementation for accessing Azure Data Lake. The scheme 
> used for accessing the file system will continue to be 
> adl://.azuredatalake.net/path/to/file. 
> The Azure Data Lake Cloud Store will continue to provide a webHDFS REST 
> interface. The client will access the ADLS store using the WebHDFS REST APIs 
> provided by the ADLS store. 






[jira] [Commented] (HADOOP-13914) s3guard: improve S3AFileStatus#isEmptyDirectory handling

2017-03-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904055#comment-15904055
 ] 

Aaron Fabbri commented on HADOOP-13914:
---

Thanks for the review [~mackrorysd].  The innerRename() function length warning 
is pre-existing.  I agree it should be tackled later (ideally after the merge to 
trunk).

Thank you for the findbugs-exclude snippet. I thought about removing the null 
check but still think keeping it is more future-proof.  I'll add your exclusion 
and post a new patch.




> s3guard: improve S3AFileStatus#isEmptyDirectory handling
> 
>
> Key: HADOOP-13914
> URL: https://issues.apache.org/jira/browse/HADOOP-13914
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13914-HADOOP-13345.000.patch, 
> HADOOP-13914-HADOOP-13345.002.patch, HADOOP-13914-HADOOP-13345.003.patch, 
> HADOOP-13914-HADOOP-13345.004.patch, HADOOP-13914-HADOOP-13345.005.patch, 
> HADOOP-13914-HADOOP-13345.006.patch, HADOOP-13914-HADOOP-13345.007.patch, 
> s3guard-empty-dirs.md, test-only-HADOOP-13914.patch
>
>
> As discussed in HADOOP-13449, proper support for the isEmptyDirectory() flag 
> stored in S3AFileStatus is missing from DynamoDBMetadataStore.
> The approach taken by LocalMetadataStore is not suitable for the DynamoDB 
> implementation, and also sacrifices good code separation to minimize 
> S3AFileSystem changes pre-merge to trunk.
> I will attach a design doc that attempts to clearly explain the problem and 
> preferred solution.  I suggest we do this work after merging the HADOOP-13345 
> branch to trunk, but am open to suggestions.
> I can also attach a patch of an integration test that exercises the missing 
> case and demonstrates a failure with DynamoDBMetadataStore.






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904047#comment-15904047
 ] 

Yongjun Zhang commented on HADOOP-14104:


Hi [~rushabh.shah],

Thanks again for your work. Below are my comments in reply to yours. Just saw 
[~andrew.wang]'s, thanks Andrew!

1.
{quote}
final FileEncryptionInfo feInfo = dfsos.getFileEncryptionInfo();
final CryptoCodec codec = getCryptoCodec(conf, feInfo);
in createWrappedOutputStream, where conf is the configuration of local cluster. 
There is a possibility that the local configuration is different than remote 
cluster's. So it's possible to fail here.
{quote}
Sorry I was a bit unclear earlier; this comment is not about the change in this 
jira, but about the existing implementation. My concern is about the Codec rather 
than the keyProvider here. We get feInfo from the target file, then get the Codec 
based on conf and feInfo. The conf here is the configuration of the *local* 
cluster. When the target is the remote cluster, could we fail to find the Codec 
info, or find the wrong codec info, because we don't have the remote cluster's 
configuration? That's what I wanted to say. So I hope [~andrew.wang], who was 
involved in the original development of the encryption feature, can comment. 
(Andrew, I saw your comment above, but would you please look at my comment here 
again and see if it makes sense?)
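To make the concern in point 1 concrete, here is a minimal sketch of codec resolution keyed off the local configuration. The property-name scheme, helper name, and class name are illustrative stand-ins, not DFSClient's actual resolution logic: the point is only that the lookup uses the local conf plus the cipher suite recorded in the remote file's feInfo, so a mismatch fails.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class CodecLookupSketch {
    /**
     * Hypothetical resolution: map the cipher suite recorded in the file's
     * FileEncryptionInfo to a codec class name configured in the *local*
     * conf. If the local conf has no entry for the remote file's suite,
     * resolution fails -- the situation described in point 1 above.
     */
    public static String resolveCodecClass(Map<String, String> localConf,
                                           String fileCipherSuite) {
        String key = "hadoop.security.crypto.codec.classes."
                + fileCipherSuite.replace("/", ".").toLowerCase(Locale.ROOT);
        String impl = localConf.get(key);
        if (impl == null) {
            throw new IllegalStateException(
                "No codec configured locally for suite " + fileCipherSuite);
        }
        return impl;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hadoop.security.crypto.codec.classes.aes.ctr.nopadding",
                 "org.apache.hadoop.crypto.JceAesCtrCryptoCodec");
        // Succeeds only because the local conf happens to know this suite.
        System.out.println(resolveCodecClass(conf, "AES/CTR/NoPadding"));
    }
}
```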

2. It looks like having a {{HADOOP_SECURITY_KEY_PROVIDER_PATH_DEFAULT}} constant 
would help make the code more consistent, even for the patch here, but having a 
separate jira works for me too. 

3. Keeping the name keyProviderUri instead of keyProviderPath is actually fine. 
I did some study before creating patch rev3; the format appears to be a URI. I 
wish HDFS-10489 had made it a uri instead of a path, so I did not change uri to 
path in rev3.

4. About adding the two methods to add/get from credentials: it's a way of 
encapsulating how this is handled in one place, and of sharing the way the key 
in the map is generated (e.g. uri.getScheme()+"://"+uri.getAuthority()). You can 
see my example in rev3. These methods are also called in test code. 

5. 
{quote}
I don't think key provider is used by WebHDFSFileSystem. Maybe I'm missing 
something.
Can you please elaborate your comment ?
{quote}
I was just guessing and I'm not so sure; I hope [~daryn] can comment.

6. 
{quote}
7. About your question w.r.t. public boolean isHDFSEncryptionEnabled() throwing 
StandbyException. There is a solution, that is, we need to incorporate remote's 
cluster's nameservices configurations in the client (distcp for example) 
configuration, and let the client handle the NN failover and retry. We need to 
document this.
{quote}
For an HA cluster, we can access the NN via its nameservice (for example, hadoop 
distcp hdfs://nameservice1:/xyz hdfs://nameservice2:/abc), so the 
StandbyException can be detected and a different NN will be tried automatically. 
See https://issues.apache.org/jira/browse/HDFS-6376. We actually found a 
non-intrusive way to do that without using {{dfs.internal.nameservices}}: we 
copy the local cluster's conf to a new dir, append the nameservice portion of 
the remote cluster's conf to the hdfs-site.xml in the new dir, and then pass the 
new dir to distcp.
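As a minimal sketch of the key convention mentioned in point 4 above (the helper and class names are hypothetical; only the scheme-plus-authority formula comes from the comment):

```java
import java.net.URI;

public class ProviderCredsKey {
    /**
     * Illustrative helper: derive the key under which a provider's token
     * is stored in the credentials map from the provider URI's scheme and
     * authority, i.e. uri.getScheme() + "://" + uri.getAuthority().
     */
    public static String toCredsKey(URI providerUri) {
        return providerUri.getScheme() + "://" + providerUri.getAuthority();
    }

    public static void main(String[] args) {
        URI kms = URI.create("kms://http@nn1.example.com:9600/kms");
        // The path component is dropped; only scheme + authority remain.
        System.out.println(toCredsKey(kms)); // kms://http@nn1.example.com:9600
    }
}
```

Centralizing this in one pair of add/get methods, as the comment suggests, keeps production and test code agreeing on the exact key format.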
 


 


> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can only be one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters, it will only get the kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Comment Edited] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-03-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904022#comment-15904022
 ] 

Aaron Fabbri edited comment on HADOOP-14154 at 3/9/17 10:54 PM:


{quote}
Would that isAuthoritative flag have to be setup by higher level applications 
like Pig/Hive/MR?
{quote}

No, it is internal to S3A.  S3A can tell when it has the full listing for a 
directory, and simply conveys that to the MetadataStore by setting the 
isAuthoritative bit.

e.g. in {{S3Guard#dirListingUnion(..)}}, it always sets the flag when it puts 
the listing into the MetadataStore, since this function always has the full 
listing for the directory:

{code}
  dirMeta.setAuthoritative(true); // This is the full directory contents
{code}

That codepath happens at the end of listStatus(), when it has finished 
computing the full directory contents.


was (Author: fabbri):
{quote}
Would that isAuthoritative flag have to be setup by higher level applications 
like Pig/Hive/MR?
{quote}

No, it is internal to S3A.  S3A can tell when it has the full listing for a 
directory, and simply conveys that to the MetadataStore by setting the 
isAuthoritative bit.

e.g. in {{S3Guard#dirListingUnion(..)}}, it always sets the flag when it puts 
the listing into the MetadataStore, since this function always has the full 
listing for the directory:

{code}
  dirMeta.setAuthoritative(true); // This is the full directory contents
{code}

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.






[jira] [Commented] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-03-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904022#comment-15904022
 ] 

Aaron Fabbri commented on HADOOP-14154:
---

{quote}
Would that isAuthoritative flag have to be setup by higher level applications 
like Pig/Hive/MR?
{quote}

No, it is internal to S3A.  S3A can tell when it has the full listing for a 
directory, and simply conveys that to the MetadataStore by setting the 
isAuthoritative bit.

e.g. in {{S3Guard#dirListingUnion(..)}}, it always sets the flag when it puts 
the listing into the MetadataStore, since this function always has the full 
listing for the directory:

{code}
  dirMeta.setAuthoritative(true); // This is the full directory contents
{code}
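The union step described above can be sketched as follows. The types and method shape here are simplified stand-ins for illustration, not S3Guard's actual classes; only the idea — merge the freshly fetched full listing into the MetadataStore's view and mark it authoritative — comes from the comment.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class DirListingUnionSketch {
    /** Hypothetical stand-in for a directory listing held by a MetadataStore. */
    public static final class DirListing {
        public final Set<String> paths = new TreeSet<>();
        public boolean authoritative;
    }

    /**
     * Merge the listing just fetched from the backing store into the
     * MetadataStore's cached view. Because the caller holds the complete
     * directory contents at this point, the merged listing is marked
     * authoritative -- the bit that lets later reads skip S3 lookups.
     */
    public static DirListing dirListingUnion(DirListing cached,
                                             List<String> backingListing) {
        cached.paths.addAll(backingListing);
        cached.authoritative = true; // full directory contents are known here
        return cached;
    }

    public static void main(String[] args) {
        DirListing cached = new DirListing();
        cached.paths.add("s3a://bucket/dir/a");
        DirListing merged = dirListingUnion(cached,
                Arrays.asList("s3a://bucket/dir/a", "s3a://bucket/dir/b"));
        System.out.println(merged.paths + " authoritative=" + merged.authoritative);
    }
}
```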

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.






[jira] [Commented] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904018#comment-15904018
 ] 

Mingliang Liu commented on HADOOP-14168:


+1

I'm not a big fan of using Hamcrest for testing, but there is 
{{is(not(usingNullImpl))}} if you like it. :)

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14168-HADOOP-13345.001.patch, 
> HADOOP-14168-HADOOP-13345.002.patch
>
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Updated] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14168:
---
Status: Patch Available  (was: Open)

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14168-HADOOP-13345.001.patch, 
> HADOOP-14168-HADOOP-13345.002.patch
>
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Updated] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-03-09 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-14154:
--
Status: Open  (was: Patch Available)

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.
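The flag-setting idea above can be illustrated with a minimal sketch. This is a hypothetical stand-in, not the real org.apache.hadoop.fs.s3a.s3guard classes: DirListing models only the authoritative flag, and listChildren shows where a store that knows its listing is complete would set the flag so callers can skip the extra S3 lookup.

```java
import java.util.List;

// Simplified stand-in for DirListingMetadata: the real Hadoop class carries
// a path and file statuses; here only the authoritative flag matters.
class DirListing {
    final List<String> children;
    final boolean isAuthoritative;

    DirListing(List<String> children, boolean isAuthoritative) {
        this.children = children;
        this.isAuthoritative = isAuthoritative;
    }
}

class MetaStoreSketch {
    // Hypothetical listChildren: when the store knows its stored listing is
    // complete, it marks the result authoritative; otherwise callers must
    // still consult S3 to confirm the directory contents.
    static DirListing listChildren(List<String> storedEntries,
                                   boolean listingIsComplete) {
        return new DirListing(storedEntries, listingIsComplete);
    }
}
```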






[jira] [Commented] (HADOOP-14154) Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore

2017-03-09 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904013#comment-15904013
 ] 

Rajesh Balamohan commented on HADOOP-14154:
---

Thanks for the clarification [~fabbri]. Would that isAuthoritative flag have to 
be setup by higher level applications like Pig/Hive/MR?

> Set isAuthoritative flag when creating DirListingMetadata in DynamoDBMetaStore
> --
>
> Key: HADOOP-14154
> URL: https://issues.apache.org/jira/browse/HADOOP-14154
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-14154-HADOOP-13345.001.patch, 
> HADOOP-14154-HADOOP-13345.002.patch
>
>
> Currently {{DynamoDBMetaStore::listChildren}} does not populate 
> {{isAuthoritative}} flag when creating {{DirListingMetadata}}. 
> This causes additional S3 lookups even when users have enabled 
> {{fs.s3a.metadatastore.authoritative}}.






[jira] [Updated] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14168:
---
Attachment: HADOOP-14168-HADOOP-13345.002.patch

Ah yes - I had wanted something along those lines but was looking for 
assumeEquals :)

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14168-HADOOP-13345.001.patch, 
> HADOOP-14168-HADOOP-13345.002.patch
>
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Commented] (HADOOP-8039) mvn site:stage-deploy should not have broken links.

2017-03-09 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903952#comment-15903952
 ] 

Ravi Prakash commented on HADOOP-8039:
--

This problem still exists. To reproduce, here's what I did:
{code}
$ mvn site
$ mvn site:stage-deploy -DstagingSiteURL=file:///home/raviprak/stag
{code}

> mvn site:stage-deploy should not have broken links.
> ---
>
> Key: HADOOP-8039
> URL: https://issues.apache.org/jira/browse/HADOOP-8039
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 0.23.1
>Reporter: Ravi Prakash
>
> The stage-deployed site has a lot of broken links / missing pages. We should 
> fix that.






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903942#comment-15903942
 ] 

Andrew Wang commented on HADOOP-14104:
--

Here's my review on v2, along with responses to some of Yongjun's questions. 
Thanks for working on this Rushabh!

bq. in createWrappedOutputStream(), where conf is the configuration of the local 
cluster. There is a possibility that the local configuration is different from 
the remote cluster's. So it's possible to fail here.

The conf is only used to configure the CryptoCodec, which does not need a KMS 
URI. I think this is okay, all the CC parameters are included in the {{feInfo}}.

bq. @awang would you please confirm if it's ok to do so since this class is 
public), and use this constant at multiple places that current uses ""

Yea it's fine. I would like to improve this in this patch if possible, since it 
removes redundancies.

bq. 3. Notice that "dfs.encryption.key.provider.uri" is deprecated and replaced 
with hadoop.security.key.provider.path (see HDFS-10489). So suggest to replace 
variable name keyProviderUri with keyProviderPath

I think we can ignore this for now like Rushabh said, can handle it in another 
JIRA if necessary.

bq. Seems we need a similar change in WebHdfsFileSystem when calling 
addDelegationTokens

The DN does the encryption/decryption in WebHDFS, so the client doesn't need to 
do any KMS communication.

It does bring up a question regarding the DN DFSClient though. It looks like 
WebHdfsHandler creates a new DFSClient each time, which means we won't benefit 
from getServerDefaults caching.

Is the fix to make config preferred over getServerDefaults?

bq. 

Does this exception actually make it to clients? The HA RPC proxy normally 
catches the StandbyException and fails over to the other NN. We can write a 
unit test for this to verify if we're unsure.

My additional review comments:

* typo nameonde
* ServerDefaults#getKeyProviderUri needs javadoc explaining how to interpret 
null and empty (IIUC null means not set, empty means set but not enabled)
* In docs, "DFSClients" is an internal name. Please rename to say "HDFS 
clients" or similar. Same for "dfs clients" in core-default.xml
* There are a lot of whitespace errors, please take a look at what's flagged by 
checkstyle. Recommend using IDE autoformatting in the future.
* An actual mini-cluster test that mimics an MR job submitter and task's call 
pattern would also be good.
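The null-versus-empty convention for the key provider URI could be captured as below. This is a hedged sketch: resolve() is a hypothetical helper, not an actual Hadoop method, and only illustrates the interpretation rule (null = not set, fall back to client config; empty = set but encryption not enabled).

```java
class KeyProviderUriSemantics {
    /**
     * Interprets a server-reported key provider URI:
     *   null  -> the server did not set a value: fall back to client config,
     *   ""    -> the server set it but encryption is not enabled,
     *   other -> use the server-reported URI.
     */
    static String resolve(String serverUri, String clientConfUri) {
        if (serverUri == null) {
            return clientConfUri;  // not set: fall back to local configuration
        }
        if (serverUri.isEmpty()) {
            return null;           // set but not enabled: no key provider
        }
        return serverUri;          // server-reported value wins
    }
}
```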

TestEncryptionZones:
* Would like to see the test reversed so it covers the fallback behavior, i.e.
* set client config with kp1, check that it returns kp1
* mock getServerDefaults() with kp2, check it returns kp2
* set Credentials with kp3, check it returns kp3
* typo originalNameodeUri
* {{String lookUpKey = DFSClient.DFS_KMS_PREFIX + 
originalNameodeUri.toString();}} should this be a {{getKey}} helper method in 
DFSClient rather than having the test code also construct the key?
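The suggested getKey helper might look like the following. Hedged sketch: the prefix value and method name here are placeholder assumptions, not the actual DFSClient constant or API; the point is that client code and test code build the Credentials key through one shared method instead of duplicating the concatenation.

```java
import java.net.URI;

class DFSClientKeySketch {
    // Placeholder value; the real DFSClient.DFS_KMS_PREFIX may differ.
    static final String DFS_KMS_PREFIX = "dfs-kms://";

    /**
     * Builds the Credentials lookup key for a namenode URI so that the
     * key format is defined in exactly one place.
     */
    static String getKeyProviderMapKey(URI namenodeUri) {
        return DFS_KMS_PREFIX + namenodeUri.toString();
    }
}
```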

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to current implementation of kms provider in client conf, there can 
> only be one kms.
> In multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.






[jira] [Commented] (HADOOP-14123) Remove misplaced ADL service provider config file for FileSystem

2017-03-09 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903940#comment-15903940
 ] 

Lei (Eddy) Xu commented on HADOOP-14123:


+1.  LGTM. 

Thanks [~jzhuge].

> Remove misplaced ADL service provider config file for FileSystem
> 
>
> Key: HADOOP-14123
> URL: https://issues.apache.org/jira/browse/HADOOP-14123
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-14123.001.patch, HADOOP-14123.002.patch
>
>
> Per discussion in HADOOP-14132, do not attempt to move the service provider 
> config file to the right path. Remove it to speed up the load time for Hadoop 
> client code.
> Leave the property {{fs.adl.impl}} in core-default.xml.






[jira] [Commented] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903933#comment-15903933
 ] 

Mingliang Liu commented on HADOOP-14168:


The fix looks good. You may like {{Assume.assumeThat("Unexpected S3Guard test 
state: shouldBeEnabled=" + shouldBeEnabled + " and isEnabled =" + isEnabled, 
shouldBeEnabled, is(isEnabled));}}. That's nit anyway.

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14168-HADOOP-13345.001.patch
>
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Resolved] (HADOOP-11232) jersey-core-1.9 has a faulty glassfish-repo setting

2017-03-09 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11232.
---
Resolution: Duplicate

HADOOP-9613 seems to have upgraded jersey to 1.19. Please reopen if I'm 
mistaken.

> jersey-core-1.9 has a faulty glassfish-repo setting
> ---
>
> Key: HADOOP-11232
> URL: https://issues.apache.org/jira/browse/HADOOP-11232
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Sushanth Sowmyan
>
> The following was reported by [~sushanth].
> hadoop-common brings in jersey-core-1.9 as a dependency by default.
> This is problematic, since the pom file for jersey 1.9 hardcode-specifies 
> glassfish-repo as the place to get further transitive dependencies, which 
> leads to a site that serves a static "this has moved" page instead of a 404. 
> This results in faulty parent resolutions, which when asked for a pom file, 
> get erroneous results.
> The only way around this seems to be to add a series of exclusions for 
> jersey-core, jersey-json, jersey-server and a bunch of others to 
> hadoop-common, then to hadoop-hdfs, then to hadoop-mapreduce-client-core. I 
> don't know how many more excludes are necessary before I can get this to work.
> If you update your jersey.version to 1.14, this faulty pom goes away. Please 
> either update that, or work with build infra to update our nexus pom for 
> jersey-1.9 so that it does not include the faulty glassfish repo.
> Another interesting note about this is that something changed yesterday 
> evening to cause this break in behaviour. We have not had this particular 
> problem in about 9+ months.






[jira] [Commented] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903891#comment-15903891
 ] 

Sean Mackrory commented on HADOOP-14168:


To test the attached patch I ran the affected tests both with and without the 
-Ddynamo -Ds3guard and associated configurations, and ensured that 
ITestS3ACredentialsInURL tests only ran without S3Guard, and 
ITestS3GuardToolDynamoDB only ran with it.

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14168-HADOOP-13345.001.patch
>
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903889#comment-15903889
 ] 

Hadoop QA commented on HADOOP-13786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
45s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
58s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m  6s{color} 
| {color:red} root generated 10 new + 788 unchanged - 1 fixed = 798 total (was 
789) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  0s{color} | {color:orange} root: The patch generated 266 new + 82 unchanged 
- 14 fixed = 348 total (was 96) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-tools/hadoop-aws generated 4 new + 0 unchanged 
- 0 fixed = 4 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} 
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core 
generated 1 new + 2496 unchanged - 0 fixed = 2497 total (was 2496) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
2s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 58s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Dead store to length in 
org.apache.hadoop.fs.s3a.S3AFileSystem$WriteOperationHelper.newUploadPartRequest(String,
 int, int, InputStream, File, Long)  At 
S3AFileSystem.java:org.apache.hadoop.fs.s3a.S3AFileSystem$WriteOperationHelper.newUploadPartRequest(String,
 int, 

[jira] [Updated] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14168:
---
Attachment: HADOOP-14168-HADOOP-13345.001.patch

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14168-HADOOP-13345.001.patch
>
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Updated] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14168:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13345

> S3GuardTool tests should not run if S3Guard is not set up
> -
>
> Key: HADOOP-14168
> URL: https://issues.apache.org/jira/browse/HADOOP-14168
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>
> I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
> configuration set up because it will run even with -Ds3guard.






[jira] [Updated] (HADOOP-14145) Ensure GenericOptionParser is used for S3Guard CLI

2017-03-09 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14145:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ensure GenericOptionParser is used for S3Guard CLI
> --
>
> Key: HADOOP-14145
> URL: https://issues.apache.org/jira/browse/HADOOP-14145
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14145-HADOOP-13345.001.patch
>
>
> As discussed in HADOOP-14094.






[jira] [Created] (HADOOP-14168) S3GuardTool tests should not run if S3Guard is not set up

2017-03-09 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14168:
--

 Summary: S3GuardTool tests should not run if S3Guard is not set up
 Key: HADOOP-14168
 URL: https://issues.apache.org/jira/browse/HADOOP-14168
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


I saw ITestS3GuardToolDynamoDB fail when running without any S3Guard 
configuration set up because it will run even with -Ds3guard.






[jira] [Commented] (HADOOP-14157) FsUrlStreamHandlerFactory "Illegal character in path" parsing file URL on Windows

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903734#comment-15903734
 ] 

Hadoop QA commented on HADOOP-14157:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 4 new + 8 unchanged - 
0 fixed = 12 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.fs.TestUrlStreamHandler |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14157 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12857046/HADOOP-14157.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c4db8a8b2a1a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 385d2cb |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11789/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11789/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Updated] (HADOOP-14167) UserIdentityProvider should use short user name in DecayRpcScheduler

2017-03-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14167:

Status: Patch Available  (was: Open)

> UserIdentityProvider should use short user name in DecayRpcScheduler
> 
>
> Key: HADOOP-14167
> URL: https://issues.apache.org/jira/browse/HADOOP-14167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-14167.001.patch
>
>
> In a secure cluster, {{UserIdentityProvider}} uses the principal name for the 
> user; it should use the short name of the principal.
> {noformat}
>   {
> "name" : 
> "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
>  .
>  .
>  .
> "Caller(hdfs/had...@hadoop.com).Volume" : 436,
> "Caller(hdfs/had...@hadoop.com).Priority" : 3,
> .
> .
>   }
> {noformat}






[jira] [Updated] (HADOOP-14167) UserIdentityProvider should use short user name in DecayRpcScheduler

2017-03-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14167:

Attachment: HADOOP-14167.001.patch

Attached initial patch. Please review.

> UserIdentityProvider should use short user name in DecayRpcScheduler
> 
>
> Key: HADOOP-14167
> URL: https://issues.apache.org/jira/browse/HADOOP-14167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-14167.001.patch
>
>
> In a secure cluster, {{UserIdentityProvider}} uses the principal name for the 
> user; it should use the short name of the principal.
> {noformat}
>   {
> "name" : 
> "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
>  .
>  .
>  .
> "Caller(hdfs/had...@hadoop.com).Volume" : 436,
> "Caller(hdfs/had...@hadoop.com).Priority" : 3,
> .
> .
>   }
> {noformat}






[jira] [Updated] (HADOOP-14167) UserIdentityProvider should use short user name in DecayRpcScheduler

2017-03-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14167:

Description: 
In a secure cluster, {{UserIdentityProvider}} uses the principal name for the 
user; it should use the short name of the principal.

{noformat}
  {
"name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
 .
 .
 .
"Caller(hdfs/had...@hadoop.com).Volume" : 436,
"Caller(hdfs/had...@hadoop.com).Priority" : 3,
.
.
  }
{noformat}


  was:
In secure cluster {{UserIdentityProvider}} use principal name for user, it 
should use shot name of principal.

{noformat}
  {
"name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
 .
 .
 .
"Caller(hdfs/had...@hadoop.com).Volume" : 436,
"Caller(hdfs/had...@hadoop.com).Priority" : 3,
.
.
  }
{noformat}



> UserIdentityProvider should use short user name in DecayRpcScheduler
> 
>
> Key: HADOOP-14167
> URL: https://issues.apache.org/jira/browse/HADOOP-14167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>
> In a secure cluster, {{UserIdentityProvider}} uses the principal name for the 
> user; it should use the short name of the principal.
> {noformat}
>   {
> "name" : 
> "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
>  .
>  .
>  .
> "Caller(hdfs/had...@hadoop.com).Volume" : 436,
> "Caller(hdfs/had...@hadoop.com).Priority" : 3,
> .
> .
>   }
> {noformat}






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903720#comment-15903720
 ] 

Yongjun Zhang commented on HADOOP-14104:


Thanks for the update [~rushabh.shah]. Sorry about that; please be assured 
that I did not mean to intrude, and my sincere apologies if it felt that way.

I should have given some background. I was looking into HDFS-9868 earlier 
because we need a solution very soon to let distcp see the keyProvider of the 
remote cluster; we then found that HADOOP-14104 may be a better solution. As 
far as I know, the existing "provide the key provider path via conf" 
implementation doesn't support external clusters. We could extend the conf 
support to cover key providers of external clusters, as an alternative 
solution to HADOOP-14104.

Will comment on your other points soon.



> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to the current implementation of the kms provider in the client 
> conf, there can be only one kms.
> In a multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters, it will only get the kms token for the local cluster.
> Not sure whether the target version is correct or not.






[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Patch Available  (was: Open)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Attachment: HADOOP-13786-HADOOP-13345-010.patch

Patch 010; work on the mock tests. Overall, ~50% success rate.

These tests are failing for two main reasons:

# I've broken the code
# In changing the code, I've broken the test

A fair few of the tests are failing because the mock calls don't follow the 
path the tests expected; that's the problem with mocks: you are asserting 
about the internal operations rather than the final observed state of the SUT. 
Sometimes that's good for looking into the interior, but it's very, very 
brittle.

These tests all do something ugly to set up a mock S3A FS for code to receive 
when it asks for an FS. I plan to remove that wrapper mock and inject whatever 
mock FS is created straight into the FileSystem.get() cache. That's the proper 
way to do it.
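The injection idea above can be sketched as a generic pattern. This is a hypothetical, self-contained model of a cached factory with a test-only seam; the class and method names are invented for illustration and this is not Hadoop's actual FileSystem.Cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of a filesystem cache with a test-only injection
// seam. Names are illustrative; this is not Hadoop's FileSystem.Cache.
class FsCache {
    private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

    // Production path: return the cached instance for a URI, creating
    // one on first use (real filesystem construction elided here).
    static Object get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new Object());
    }

    // Test seam: pre-seed the cache so code under test receives the
    // mock when it calls get(uri), with no wrapper FS needed.
    static void injectForTesting(String uri, Object mockFs) {
        CACHE.put(uri, mockFs);
    }

    static void clearForTesting() {
        CACHE.clear();
    }
}
```

The real Hadoop cache keys on more than a URI string (roughly scheme, authority, and the current user), so this only illustrates the seam, not the keying.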

After that, I'll look at why the tests are failing, focusing on ones where 
results are not what is expected rather than just mock counter mismatches. 
I'll assume those are false alarms for now, and only worry about the details 
once the more functional tests have passed.

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903691#comment-15903691
 ] 

Hadoop QA commented on HADOOP-14156:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14156 |
| GITHUB PR | https://github.com/apache/hadoop/pull/187 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ef7368c26c6e 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 385d2cb |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11792/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11792/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Grammar error in the ConfTest.java
> --
>
> Key: HADOOP-14156
> URL: https://issues.apache.org/jira/browse/HADOOP-14156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andrey Dyatlov
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}
> bq. does not defined
> should be replaced by
> bq. is not defined

[jira] [Updated] (HADOOP-14166) Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not used

2017-03-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14166:

Status: Patch Available  (was: Open)

> Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not 
> used
> -
>
> Key: HADOOP-14166
> URL: https://issues.apache.org/jira/browse/HADOOP-14166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-14166.001.patch
>
>
> {noformat}
>  "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
> "modelerType" : "DecayRpcSchedulerMetrics2.ipc.8020",
> "tag.Context" : "ipc.8020",
> "tag.Hostname" : "host1",
> "DecayedCallVolume" : 3,
> "UniqueCallers" : 1,
> "Caller(root).Volume" : 266,
> "Caller(root).Priority" : 3,
> "Priority.0.AvgResponseTime" : 6.151201023385511E-5,
> "Priority.1.AvgResponseTime" : 0.0,
> "Priority.2.AvgResponseTime" : 0.0,
> "Priority.3.AvgResponseTime" : 1.184686336544601,
> "Priority.0.CompletedCallVolume" : 0,
> "Priority.1.CompletedCallVolume" : 0,
> "Priority.2.CompletedCallVolume" : 0,
> "Priority.3.CompletedCallVolume" : 2,
> "CallVolume" : 266
> {noformat}
> "Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even when the 
> queue has not been used for a long time.
> {code}
>   if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
> if (enableDecay) {
>   final double decayed = decayFactor * lastAvg + averageResponseTime;
>   LOG.info("Decayed "  + i + " time " +   decayed);
>   responseTimeAvgInLastWindow.set(i, decayed);
> } else {
>   responseTimeAvgInLastWindow.set(i, averageResponseTime);
> }
>   }
> {code}
> we should reset it to zero when the above condition is false.
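The quoted snippet and the proposed fix can be modeled in a small, self-contained sketch. The class, field names, and PRECISION value here are assumptions for illustration, not the actual DecayRpcScheduler code:

```java
// Simplified model of the response-time window update quoted above,
// with the proposed else-branch reset added. Illustrative only;
// PRECISION and the field names are assumptions.
class DecayWindowModel {
    static final double PRECISION = 0.0001;
    final double decayFactor;
    final double[] avgInLastWindow;

    DecayWindowModel(int priorityLevels, double decayFactor) {
        this.decayFactor = decayFactor;
        this.avgInLastWindow = new double[priorityLevels];
    }

    void update(int i, double averageResponseTime, boolean enableDecay) {
        double lastAvg = avgInLastWindow[i];
        if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
            avgInLastWindow[i] = enableDecay
                ? decayFactor * lastAvg + averageResponseTime
                : averageResponseTime;
        } else {
            // Proposed fix: without this reset, a tiny stale average
            // (e.g. 6.15E-5, below PRECISION) is reported forever for
            // an idle queue because the branch above never runs.
            avgInLastWindow[i] = 0.0;
        }
    }
}
```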






[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Open  (was: Patch Available)

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Updated] (HADOOP-14166) Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not used

2017-03-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14166:

Attachment: HADOOP-14166.001.patch

Attached the initial patch.
Please review.

> Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not 
> used
> -
>
> Key: HADOOP-14166
> URL: https://issues.apache.org/jira/browse/HADOOP-14166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HADOOP-14166.001.patch
>
>
> {noformat}
>  "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
> "modelerType" : "DecayRpcSchedulerMetrics2.ipc.8020",
> "tag.Context" : "ipc.8020",
> "tag.Hostname" : "host1",
> "DecayedCallVolume" : 3,
> "UniqueCallers" : 1,
> "Caller(root).Volume" : 266,
> "Caller(root).Priority" : 3,
> "Priority.0.AvgResponseTime" : 6.151201023385511E-5,
> "Priority.1.AvgResponseTime" : 0.0,
> "Priority.2.AvgResponseTime" : 0.0,
> "Priority.3.AvgResponseTime" : 1.184686336544601,
> "Priority.0.CompletedCallVolume" : 0,
> "Priority.1.CompletedCallVolume" : 0,
> "Priority.2.CompletedCallVolume" : 0,
> "Priority.3.CompletedCallVolume" : 2,
> "CallVolume" : 266
> {noformat}
> "Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even when the 
> queue has not been used for a long time.
> {code}
>   if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
> if (enableDecay) {
>   final double decayed = decayFactor * lastAvg + averageResponseTime;
>   LOG.info("Decayed "  + i + " time " +   decayed);
>   responseTimeAvgInLastWindow.set(i, decayed);
> } else {
>   responseTimeAvgInLastWindow.set(i, averageResponseTime);
> }
>   }
> {code}
> we should reset it to zero when the above condition is false.






[jira] [Comment Edited] (HADOOP-13914) s3guard: improve S3AFileStatus#isEmptyDirectory handling

2017-03-09 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903685#comment-15903685
 ] 

Sean Mackrory edited comment on HADOOP-13914 at 3/9/17 7:26 PM:


I reviewed the .006. patch, and I'm a +1 on the code itself.

The hadoop-common unit test failures are ones we've seen before and aren't 
related. There's still that checkstyle issue with innerRename; I'm of the 
opinion that we should try to refactor it to fit under 150, but after this is 
merged to trunk. That's probably one of the trickier things we'll have to 
merge, and refactoring it that way will only make that worse. Anyone disagree?

We can add the following to ignore the findbugs issue:
{code}
diff --git a/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aws/dev-support/findbugs
-exclude.xml
index ffb0a79..3464e71 100644
--- a/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
@@ -26,4 +26,10 @@
   
 
   
+  
+  
+
+
+
+  
 
{code}


was (Author: mackrorysd):
I reviewed the .006. patch, and I'm a +1 on the code itself.

hadoop-common unit test results we've seen before and aren't related. There's 
still that checkstyle issue with innerRename - I'm of the opinion that we 
should try and refactor that to fit under 150, but after this is merged to 
trunk. That's probably one of the tricker things we'll have to merge and 
refactoring it that way will only make that worse. Anyone disagree?

We can add the following to ignore the findbugs issue:
{quote}
diff --git a/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aws/dev-support/findbugs
-exclude.xml
index ffb0a79..3464e71 100644
--- a/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
@@ -26,4 +26,10 @@
   
 
   
+  
+  
+
+
+
+  
 
{quote}

> s3guard: improve S3AFileStatus#isEmptyDirectory handling
> 
>
> Key: HADOOP-13914
> URL: https://issues.apache.org/jira/browse/HADOOP-13914
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13914-HADOOP-13345.000.patch, 
> HADOOP-13914-HADOOP-13345.002.patch, HADOOP-13914-HADOOP-13345.003.patch, 
> HADOOP-13914-HADOOP-13345.004.patch, HADOOP-13914-HADOOP-13345.005.patch, 
> HADOOP-13914-HADOOP-13345.006.patch, HADOOP-13914-HADOOP-13345.007.patch, 
> s3guard-empty-dirs.md, test-only-HADOOP-13914.patch
>
>
> As discussed in HADOOP-13449, proper support for the isEmptyDirectory() flag 
> stored in S3AFileStatus is missing from DynamoDBMetadataStore.
> The approach taken by LocalMetadataStore is not suitable for the DynamoDB 
> implementation, and also sacrifices good code separation to minimize 
> S3AFileSystem changes pre-merge to trunk.
> I will attach a design doc that attempts to clearly explain the problem and 
> preferred solution.  I suggest we do this work after merging the HADOOP-13345 
> branch to trunk, but am open to suggestions.
> I can also attach a patch of a integration test that exercises the missing 
> case and demonstrates a failure with DynamoDBMetadataStore.






[jira] [Updated] (HADOOP-14153) ADL module has messed doc structure

2017-03-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14153:
---
Attachment: HADOOP-14153.001.patch
Screen Shot 2017-03-09 at 11.28.27 AM.png

Thanks [~ajisakaa] for your comments. I don't think manual labels are 
necessary, so I removed them in the v1 patch. I also attached a screenshot of 
the built web page.


> ADL module has messed doc structure
> ---
>
> Key: HADOOP-14153
> URL: https://issues.apache.org/jira/browse/HADOOP-14153
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>  Labels: documentaion
> Attachments: HADOOP-14153.000.patch, HADOOP-14153.001.patch, Screen 
> Shot 2017-03-09 at 11.28.27 AM.png
>
>
> RT






[jira] [Commented] (HADOOP-13914) s3guard: improve S3AFileStatus#isEmptyDirectory handling

2017-03-09 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903685#comment-15903685
 ] 

Sean Mackrory commented on HADOOP-13914:


I reviewed the .006. patch, and I'm a +1 on the code itself.

The hadoop-common unit test failures are ones we've seen before and aren't 
related. There's still that checkstyle issue with innerRename; I'm of the 
opinion that we should try to refactor it to fit under 150, but after this is 
merged to trunk. That's probably one of the trickier things we'll have to 
merge, and refactoring it that way will only make that worse. Anyone disagree?

We can add the following to ignore the findbugs issue:
{quote}
diff --git a/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml 
b/hadoop-tools/hadoop-aws/dev-support/findbugs
-exclude.xml
index ffb0a79..3464e71 100644
--- a/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml
@@ -26,4 +26,10 @@
   
 
   
+  
+  
+
+
+
+  
 
{quote}

> s3guard: improve S3AFileStatus#isEmptyDirectory handling
> 
>
> Key: HADOOP-13914
> URL: https://issues.apache.org/jira/browse/HADOOP-13914
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13914-HADOOP-13345.000.patch, 
> HADOOP-13914-HADOOP-13345.002.patch, HADOOP-13914-HADOOP-13345.003.patch, 
> HADOOP-13914-HADOOP-13345.004.patch, HADOOP-13914-HADOOP-13345.005.patch, 
> HADOOP-13914-HADOOP-13345.006.patch, HADOOP-13914-HADOOP-13345.007.patch, 
> s3guard-empty-dirs.md, test-only-HADOOP-13914.patch
>
>
> As discussed in HADOOP-13449, proper support for the isEmptyDirectory() flag 
> stored in S3AFileStatus is missing from DynamoDBMetadataStore.
> The approach taken by LocalMetadataStore is not suitable for the DynamoDB 
> implementation, and also sacrifices good code separation to minimize 
> S3AFileSystem changes pre-merge to trunk.
> I will attach a design doc that attempts to clearly explain the problem and 
> preferred solution.  I suggest we do this work after merging the HADOOP-13345 
> branch to trunk, but am open to suggestions.
> I can also attach a patch of a integration test that exercises the missing 
> case and demonstrates a failure with DynamoDBMetadataStore.






[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903681#comment-15903681
 ] 

Mingliang Liu commented on HADOOP-13945:


Thanks for updating the patch! I'll review this week.

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.1.patch, HADOOP-13945.2.patch, 
> HADOOP-13945.3.patch, HADOOP-13945.4.patch, HADOOP-13945.5.patch, 
> HADOOP-13945.6.patch
>
>
> Current implementation of Azure storage client for Hadoop ({{WASB}}) does not 
> support Kerberos Authentication and FileSystem authorization, which makes it 
> unusable in secure environments with multi user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is to use the same REST 
> service that is being used to generate the SAS Keys.






[jira] [Commented] (HADOOP-13689) Do not attach javadoc and sources jars during non-dist build

2017-03-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903559#comment-15903559
 ] 

Andrew Wang commented on HADOOP-13689:
--

Best I can tell from my test, the hadoop jars are still in the tarball. If you 
could isolate the issue outside of Bigtop, like the maven command I ran in my 
previous comment, that would be really helpful in diagnosing this.

> Do not attach javadoc and sources jars during non-dist build
> 
>
> Key: HADOOP-13689
> URL: https://issues.apache.org/jira/browse/HADOOP-13689
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13689.001.patch
>
>
> Looking at maven output when running with "-Pdist", the source plugin 
> "test-jar" and "jar" goals are invoked twice. This is because they are 
> turned on both by the dist profile and by default.
> Outside of the release context, it's not that important to have javadoc and 
> source JARs, so I think we can turn it off by default.






[jira] [Commented] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903568#comment-15903568
 ] 

Akira Ajisaka commented on HADOOP-14156:


bq. I'll file a jira to Apache Yetus to fix this problem.
Filed YETUS-494.

> Grammar error in the ConfTest.java
> --
>
> Key: HADOOP-14156
> URL: https://issues.apache.org/jira/browse/HADOOP-14156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andrey Dyatlov
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}
> bq. does not defined
> should be replaced by
> bq. is not defined
> PR: https://github.com/apache/hadoop/pull/187/






[jira] [Commented] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903537#comment-15903537
 ] 

Akira Ajisaka commented on HADOOP-14156:


Submitted precommit job: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11791/
Probably it works.

> Grammar error in the ConfTest.java
> --
>
> Key: HADOOP-14156
> URL: https://issues.apache.org/jira/browse/HADOOP-14156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andrey Dyatlov
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}
> bq. does not defined
> should be replaced by
> bq. is not defined
> PR: https://github.com/apache/hadoop/pull/187/






[jira] [Commented] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903533#comment-15903533
 ] 

Akira Ajisaka commented on HADOOP-14156:


There is a workaround: comment the URL to the patch: 
https://github.com/apache/hadoop/pull/187.patch

> Grammar error in the ConfTest.java
> --
>
> Key: HADOOP-14156
> URL: https://issues.apache.org/jira/browse/HADOOP-14156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andrey Dyatlov
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}
> bq. does not defined
> should be replaced by
> bq. is not defined
> PR: https://github.com/apache/hadoop/pull/187/






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903528#comment-15903528
 ] 

Steve Loughran commented on HADOOP-13786:
-

Right now I'm doing this out of an ASF branch, rebasing onto the HADOOP-13345 
branch regularly. That way I can do things without adult supervision

https://github.com/steveloughran/hadoop/tree/s3guard/HADOOP-13786-committer

Test-wise, I've been focusing on the integration test 
{{org.apache.hadoop.fs.s3a.commit.staging.ITestStagingCommitProtocol}}, which 
is derived from 
{{org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter}}; because 
they come from there, they form part of the expectations placed on any commit 
protocol implementation (more precisely, they're the closest we have to any 
definition). All these initial tests work, except for those generating files 
in a subdirectory, e.g. {{part-/subfile}}, something critical for the 
intermediate output of an MR job.

The scanner for files to upload is just doing a flat list and then getting into 
trouble when it gets handed a directory to upload instead of a simple file.
{code}
java.io.FileNotFoundException: Not a file 
/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/tmp/mapred/local/job_200707121733_0001/_temporary/0/_temporary/attempt_200707121733_0001_m_00_0/part-m-0

at 
org.apache.hadoop.fs.s3a.commit.staging.S3Util.multipartUpload(S3Util.java:104)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingS3GuardCommitter$8.run(StagingS3GuardCommitter.java:784)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingS3GuardCommitter$8.run(StagingS3GuardCommitter.java:771)
at 
org.apache.hadoop.fs.s3a.commit.staging.Tasks$Builder.runSingleThreaded(Tasks.java:122)
at 
org.apache.hadoop.fs.s3a.commit.staging.Tasks$Builder.run(Tasks.java:108)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingS3GuardCommitter.commitTaskInternal(StagingS3GuardCommitter.java:771)
at 
org.apache.hadoop.fs.s3a.commit.staging.StagingS3GuardCommitter.commitTask(StagingS3GuardCommitter.java:716)
at 
org.apache.hadoop.fs.s3a.commit.AbstractITCommitProtocol.commit(AbstractITCommitProtocol.java:684)
at 
org.apache.hadoop.fs.s3a.commit.AbstractITCommitProtocol.testMapFileOutputCommitter(AbstractITCommitProtocol.java:567)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

What I need to do is go from the flat listFiles() to the recursive one, then 
use that to create the offset of the final destination. 
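A hedged sketch of that offset computation, using plain {{java.nio}} paths rather than the committer's actual Hadoop {{FileSystem}} API (class and method names here are illustrative, not the real committer code): each file found by a recursive listing keeps its path relative to the task attempt root when mapped to a destination key.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative only: given the task attempt's local output root and one file
// found by a recursive listing, compute the key under the final destination
// by preserving the file's path relative to the attempt root.
public class DestKeySketch {
    static String destinationKey(Path attemptRoot, Path file, String destPrefix) {
        // relativize() keeps subdirectory structure, e.g. part-0000/subfile
        Path rel = attemptRoot.relativize(file);
        // S3-style keys use '/' regardless of the local separator
        return destPrefix + "/"
            + rel.toString().replace(java.io.File.separatorChar, '/');
    }

    public static void main(String[] args) {
        Path root = Paths.get("/tmp/attempt_0001_m_00_0");
        Path nested = Paths.get("/tmp/attempt_0001_m_00_0/part-0000/subfile");
        System.out.println(destinationKey(root, nested, "output"));
        // prints output/part-0000/subfile (on POSIX)
    }
}
```

A flat listing hands back the directory {{part-0000}} itself, which is where the {{FileNotFoundException}} above comes from; a recursive listing returns only leaf files, each with a computable relative offset.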

Regarding the mock tests, it's all happening because bits of the code now 
expect an S3AFileSystem, while it's only a base FileSystem, with a limited set 
of operations, that is being mocked.

It's not going to be quite enough to mock S3AFS, unless those extra methods 
come in (which will surface as we try). 

FWIW, I'd actually prefer that, wherever possible, real integration tests were 
used over mock ones. Yes, they are slower, and no, Yetus and Jenkins don't run 
them, but they really do test the endpoint, and will catch regressions in the 
S3A client itself, quirks in different S3 implementations, etc. Given you've 
written them, it'll be good to get them working. And the fault generation is 
great for testing the resilience of the committer.




> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> 

[jira] [Commented] (HADOOP-14156) Grammar error in the ConfTest.java

2017-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903525#comment-15903525
 ] 

Akira Ajisaka commented on HADOOP-14156:


Umm... smart-apply-patch is not working. 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11790/console
Normally a PR is created after creating the JIRA, and the ASF GitHub bot then 
comments the URL to the patch here. However, this issue was created after the 
corresponding PR, so the bot did not comment the URL to the patch. Therefore 
smart-apply-patch does not work for this issue, because it searches for a patch 
URL. I'll file a jira against Apache Yetus to fix this problem.

> Grammar error in the ConfTest.java
> --
>
> Key: HADOOP-14156
> URL: https://issues.apache.org/jira/browse/HADOOP-14156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Andrey Dyatlov
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java}}
> bq. does not defined
> should be replaced by
> bq. is not defined
> PR: https://github.com/apache/hadoop/pull/187/






[jira] [Updated] (HADOOP-14157) FsUrlStreamHandlerFactory "Illegal character in path" parsing file URL on Windows

2017-03-09 Thread Simon Scott (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Scott updated HADOOP-14157:
-
Target Version/s: 3.0.0-alpha2, 2.6.5, 2.7.3  (was: 2.7.3, 2.6.5, 
3.0.0-alpha2)
  Status: Patch Available  (was: Open)

> FsUrlStreamHandlerFactory "Illegal character in path" parsing file URL on 
> Windows
> -
>
> Key: HADOOP-14157
> URL: https://issues.apache.org/jira/browse/HADOOP-14157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2, 2.6.5, 2.7.3
> Environment: Windows
>Reporter: Simon Scott
>Priority: Minor
> Attachments: HADOOP-14157.001.patch
>
>
> After registering the FsUrlStreamHandlerFactory with the JVM, subsequent 
> calls to convert a "file" URL to a URI can fail with "Illegal character in 
> path" where the illegal character is a backslash.
> For example:
> {code}
> URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
> File file = new File("C:/Users");
> final URL url = new URL("file:///" + file.getAbsolutePath());
> {code}
> gives stack trace:
> {noformat}
> java.net.URISyntaxException: Illegal character in path at index 8: 
> file:/C:\Users
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.checkChars(URI.java:3021)
> at java.net.URI$Parser.parseHierarchical(URI.java:3105)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.(URI.java:588)
> at java.net.URL.toURI(URL.java:946)
> {noformat}






[jira] [Updated] (HADOOP-14157) FsUrlStreamHandlerFactory "Illegal character in path" parsing file URL on Windows

2017-03-09 Thread Simon Scott (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Scott updated HADOOP-14157:
-
Attachment: HADOOP-14157.001.patch

Patch to exercise the issue by updating an existing test, and also to fix it.
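Not necessarily what the attached patch does, but for illustration of the underlying problem: {{File#getAbsolutePath()}} yields backslashes on Windows, and concatenating that into a URL string produces an illegal URI path character, whereas {{File#toURI()}} normalizes separators on any platform. A hedged sketch:

```java
import java.io.File;
import java.net.URI;
import java.net.URL;

// Illustrative only: the robust way to turn a File into a file: URL
// that survives URL#toURI(), even when the path contains backslashes.
public class FileUrlSketch {
    public static void main(String[] args) throws Exception {
        File file = new File("C:/Users");

        // Fragile on Windows: getAbsolutePath() yields "C:\Users", and the
        // backslash is an illegal character in a URI path:
        // URL bad = new URL("file:///" + file.getAbsolutePath());

        URI uri = file.toURI();           // e.g. file:/C:/Users on Windows
        URL url = uri.toURL();
        System.out.println(url.toURI());  // round-trips without URISyntaxException
    }
}
```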

> FsUrlStreamHandlerFactory "Illegal character in path" parsing file URL on 
> Windows
> -
>
> Key: HADOOP-14157
> URL: https://issues.apache.org/jira/browse/HADOOP-14157
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.3, 2.6.5, 3.0.0-alpha2
> Environment: Windows
>Reporter: Simon Scott
>Priority: Minor
> Attachments: HADOOP-14157.001.patch
>
>
> After registering the FsUrlStreamHandlerFactory with the JVM, subsequent 
> calls to convert a "file" URL to a URI can fail with "Illegal character in 
> path" where the illegal character is a backslash.
> For example:
> {code}
> URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
> File file = new File("C:/Users");
> final URL url = new URL("file:///" + file.getAbsolutePath());
> {code}
> gives stack trace:
> {noformat}
> java.net.URISyntaxException: Illegal character in path at index 8: 
> file:/C:\Users
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.checkChars(URI.java:3021)
> at java.net.URI$Parser.parseHierarchical(URI.java:3105)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.(URI.java:588)
> at java.net.URL.toURI(URL.java:946)
> {noformat}






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Ryan Blue (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903352#comment-15903352
 ] 

Ryan Blue commented on HADOOP-13786:


Is there a branch where I can take a look at the S3A test issue? I can probably 
get them working.

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903373#comment-15903373
 ] 

Rushabh S Shah commented on HADOOP-14104:
-

[~andrew.wang] [~daryn]: Just FYI, please review  HADOOP-14104-trunk-v2.patch 
patch.

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to current implementation of kms provider in client conf, there can 
> only be one kms.
> In multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.






[jira] [Updated] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HADOOP-14104:

Status: Open  (was: Patch Available)

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to current implementation of kms provider in client conf, there can 
> only be one kms.
> In multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.






[jira] [Created] (HADOOP-14167) UserIdentityProvider should use short user name in DecayRpcScheduler

2017-03-09 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HADOOP-14167:
---

 Summary: UserIdentityProvider should use short user name in 
DecayRpcScheduler
 Key: HADOOP-14167
 URL: https://issues.apache.org/jira/browse/HADOOP-14167
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


In a secure cluster, {{UserIdentityProvider}} uses the principal name for the 
user; it should use the short name of the principal.

{noformat}
  {
"name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
 .
 .
 .
"Caller(hdfs/had...@hadoop.com).Volume" : 436,
"Caller(hdfs/had...@hadoop.com).Priority" : 3,
.
.
  }
{noformat}
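As a sketch of the idea only (Hadoop itself derives short names via its auth_to_local rules in {{KerberosName}} / {{UserGroupInformation#getShortUserName()}}, which this simplified helper does not implement), a principal reduces to its first component; the principal string below is illustrative:

```java
// Illustrative only: trim the instance ("/host") and realm ("@REALM")
// components of a Kerberos principal, keeping just the primary.
// Real Hadoop resolution applies configurable auth_to_local rules instead.
public class ShortNameSketch {
    static String shortName(String principal) {
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        int end = principal.length();
        if (slash >= 0) {
            end = slash;          // "hdfs/host@REALM" -> "hdfs"
        } else if (at >= 0) {
            end = at;             // "hdfs@REALM" -> "hdfs"
        }
        return principal.substring(0, end);
    }

    public static void main(String[] args) {
        System.out.println(shortName("hdfs/example.host@EXAMPLE.COM")); // prints hdfs
    }
}
```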







[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.

2017-03-09 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903191#comment-15903191
 ] 

Rushabh S Shah commented on HADOOP-14104:
-

{quote}
  final CryptoCodec codec = getCryptoCodec(conf, feInfo);
 in createWrappedOutputStream(, where conf is the configuration of local 
cluster. There is a possibility that the local configuration is different than 
remote cluster's. So it's possible to fail here.
{quote}
We are not reading the key provider path from conf within the 
{{getCryptoCodec}} method, so I don't think my change (from the v2 patch) will 
break anything.

bq. public static final String HADOOP_SECURITY_KEY_PROVIDER_PATH_DEFAULT = "";
Before v2 of the patch, there was no default value for 
{{HADOOP_SECURITY_KEY_PROVIDER_PATH_DEFAULT}}, and I kept it the same way.
I think this change is out of the scope of this jira; I suggest you create a 
new jira for it. Let's not mix multiple things in one jira.

bq. 3. Notice that "dfs.encryption.key.provider.uri" is deprecated and replaced 
with hadoop.security.key.provider.path (see HDFS-10489). So suggest to replace 
variable name keyProviderUri with keyProviderPath
I didn't even know this config used to be called 
{{dfs.encryption.key.provider.uri}}. 
But {{hadoop.security.key.provider.path}} on the client side is just a URI and 
not a path in any way. We convert it to a path via 
{{KMSClientProvider(URI uri, Configuration conf)}}, where we extract the path 
via {{KMSClientProvider#extractKMSPath(URI uri)}}. That's why I named it 
{{keyProviderUri}}.
But if you feel that strongly about the variable name, I can change it to 
provider path in my next revision.

{quote}
4. Suggest to add two methods of package scope in DFSClient
 void addKmsKeyProviderPath(...)
  String getKmsKeyProviderPath(...)
{quote}
I think we use an add method when the data structure is going to contain more 
than one entry, and I don't think the provider uri on the client side will 
contain more than one.
I already have {{getKeyProviderUri}} in v2 of the patch.

bq. 5.The uri used in DistributedFileSystem and DFSClient may be different, see 
DistributedFileSystem#initialize below
This is a very good observation. I think it's safe to flip these two 
statements in the DistributedFileSystem class; I will do it in the next 
revision.
{noformat}
  this.dfs = new DFSClient(uri, conf, statistics);
this.uri = URI.create(uri.getScheme()+"://"+uri.getAuthority());
{noformat}

bq. 6. Seems we need a similar change in WebHdfsFileSystem when calling 
addDelegationTokens
I don't think the key provider is used by WebHdfsFileSystem. Maybe I'm missing 
something; can you please elaborate on your comment?

{quote}
7. About your question w.r.t. public boolean isHDFSEncryptionEnabled() throwing 
StandbyException. There is a solution, that is, we need to incorporate remote's 
cluster's nameservices configurations in the client (distcp for example) 
configuration, and let the client handle the NN failover and retry. We need to 
document this.
{quote}
I didn't understand this comment either. Please elaborate.

bq. We need a solution soon, so I hope you don't mind I just uploaded a patch 
to address most of my comments.
I don't see any sense of urgency from the community, since providing the key 
provider path via conf has been the standard way since the feature was 
introduced in Hadoop 2.6 (Aug 2014).
Having said that, I was waiting for comments from [~daryn] and [~andrew.wang] 
before putting up a new patch.
I know [~daryn] was out of the office for the last couple of days and 
[~andrew.wang] was involved in 2.8.0 release work, so I thought I would wait a 
few days before pinging them.
[~yzhangal]: Since I am the assignee of the jira and I am updating patches 
regularly, _please allow_ me to upload the patches henceforth. I would like to 
get valuable review comments from you.
I hope you don't mind that.

> Client should always ask namenode for kms provider path.
> 
>
> Key: HADOOP-14104
> URL: https://issues.apache.org/jira/browse/HADOOP-14104
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, 
> HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch
>
>
> According to current implementation of kms provider in client conf, there can 
> only be one kms.
> In multi-cluster environment, if a client is reading encrypted data from 
> multiple clusters it will only get kms token for local cluster.
> Not sure whether the target version is correct or not.




[jira] [Commented] (HADOOP-14145) Ensure GenericOptionParser is used for S3Guard CLI

2017-03-09 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903149#comment-15903149
 ] 

Sean Mackrory commented on HADOOP-14145:


Thanks [~fabbri]. Committed and pushed!

> Ensure GenericOptionParser is used for S3Guard CLI
> --
>
> Key: HADOOP-14145
> URL: https://issues.apache.org/jira/browse/HADOOP-14145
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14145-HADOOP-13345.001.patch
>
>
> As discussed in HADOOP-14094.






[jira] [Updated] (HADOOP-14166) Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not used

2017-03-09 Thread Surendra Singh Lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HADOOP-14166:

Description: 
{noformat}
 "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
"modelerType" : "DecayRpcSchedulerMetrics2.ipc.8020",
"tag.Context" : "ipc.8020",
"tag.Hostname" : "host1",
"DecayedCallVolume" : 3,
"UniqueCallers" : 1,
"Caller(root).Volume" : 266,
"Caller(root).Priority" : 3,
"Priority.0.AvgResponseTime" : 6.151201023385511E-5,
"Priority.1.AvgResponseTime" : 0.0,
"Priority.2.AvgResponseTime" : 0.0,
"Priority.3.AvgResponseTime" : 1.184686336544601,
"Priority.0.CompletedCallVolume" : 0,
"Priority.1.CompletedCallVolume" : 0,
"Priority.2.CompletedCallVolume" : 0,
"Priority.3.CompletedCallVolume" : 2,
"CallVolume" : 266
{noformat}

"Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even when the 
queue has not been used for a long time.

{code}
  if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
if (enableDecay) {
  final double decayed = decayFactor * lastAvg + averageResponseTime;
  LOG.info("Decayed "  + i + " time " +   decayed);
  responseTimeAvgInLastWindow.set(i, decayed);
} else {
  responseTimeAvgInLastWindow.set(i, averageResponseTime);
}
  }
{code}

We should reset it to zero when the above condition is false.
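A minimal sketch of that proposed fix, with the scheduler's atomic arrays replaced by a pure function, and PRECISION assumed to be 0.0001 for illustration (not necessarily the scheduler's actual constant):

```java
// Illustrative only: the point is the added else-path that resets a stale
// per-priority average to zero once both the last average and the new
// average fall below PRECISION, instead of freezing the old value forever.
public class DecayResetSketch {
    static final double PRECISION = 0.0001; // assumed value for illustration

    static double nextWindowAvg(double lastAvg, double averageResponseTime,
                                boolean enableDecay, double decayFactor) {
        if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
            return enableDecay
                ? decayFactor * lastAvg + averageResponseTime
                : averageResponseTime;
        }
        return 0.0; // queue idle: reset rather than keep the stale average
    }

    public static void main(String[] args) {
        // The stale value from the bug report now decays to zero once idle.
        System.out.println(nextWindowAvg(6.15e-5, 0.0, true, 0.5)); // prints 0.0
    }
}
```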

  was:
{noformat}
 "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.65110",
"modelerType" : "DecayRpcSchedulerMetrics2.ipc.65110",
"tag.Context" : "ipc.65110",
"tag.Hostname" : "BLR106556",
"DecayedCallVolume" : 3,
"UniqueCallers" : 1,
"Caller(root).Volume" : 266,
"Caller(root).Priority" : 3,
"Priority.0.AvgResponseTime" : 6.151201023385511E-5,
"Priority.1.AvgResponseTime" : 0.0,
"Priority.2.AvgResponseTime" : 0.0,
"Priority.3.AvgResponseTime" : 1.184686336544601,
"Priority.0.CompletedCallVolume" : 0,
"Priority.1.CompletedCallVolume" : 0,
"Priority.2.CompletedCallVolume" : 0,
"Priority.3.CompletedCallVolume" : 2,
"CallVolume" : 266
{noformat}

"Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even when the 
queue has not been used for a long time.

{code}
  if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
if (enableDecay) {
  final double decayed = decayFactor * lastAvg + averageResponseTime;
  LOG.info("Decayed "  + i + " time " +   decayed);
  responseTimeAvgInLastWindow.set(i, decayed);
} else {
  responseTimeAvgInLastWindow.set(i, averageResponseTime);
}
  }
{code}

We should reset it to zero when the above condition is false.


> Reset the DecayRpcScheduler AvgResponseTime metric to zero when queue is not 
> used
> -
>
> Key: HADOOP-14166
> URL: https://issues.apache.org/jira/browse/HADOOP-14166
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>
> {noformat}
>  "name" : "Hadoop:service=NameNode,name=DecayRpcSchedulerMetrics2.ipc.8020",
> "modelerType" : "DecayRpcSchedulerMetrics2.ipc.8020",
> "tag.Context" : "ipc.8020",
> "tag.Hostname" : "host1",
> "DecayedCallVolume" : 3,
> "UniqueCallers" : 1,
> "Caller(root).Volume" : 266,
> "Caller(root).Priority" : 3,
> "Priority.0.AvgResponseTime" : 6.151201023385511E-5,
> "Priority.1.AvgResponseTime" : 0.0,
> "Priority.2.AvgResponseTime" : 0.0,
> "Priority.3.AvgResponseTime" : 1.184686336544601,
> "Priority.0.CompletedCallVolume" : 0,
> "Priority.1.CompletedCallVolume" : 0,
> "Priority.2.CompletedCallVolume" : 0,
> "Priority.3.CompletedCallVolume" : 2,
> "CallVolume" : 266
> {noformat}
> "Priority.0.AvgResponseTime" is always "6.151201023385511E-5" even when the 
> queue has not been used for a long time.
> {code}
>   if (lastAvg > PRECISION || averageResponseTime > PRECISION) {
> if (enableDecay) {
>   final double decayed = decayFactor * lastAvg + averageResponseTime;
>   LOG.info("Decayed "  + i + " time " +   decayed);
>   responseTimeAvgInLastWindow.set(i, decayed);
> } else {
>   responseTimeAvgInLastWindow.set(i, averageResponseTime);
> }
>   }
> {code}
> We should reset it to zero when the above condition is false.






[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903060#comment-15903060
 ] 

Hadoop QA commented on HADOOP-13945:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 7 
new + 74 unchanged - 2 fixed = 81 total (was 76) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12857002/HADOOP-13945.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 69f0bb7cc1d2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 385d2cb |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11788/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11788/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11788/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G 

[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-03-09 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Attachment: HADOOP-13945.6.patch

Thanks [~liuml07] for reviewing the patch.

I have created a rebased patch excluding the changes from HADOOP-13930 (as it 
is already committed) and addressed the following review comments:

(1) {{fs.azure.authorization.remote.service.url}} was introduced in 
HADOOP-13930; the current patch does not have any reference to it.

(2) In the code that logs a message, the exception is now kept as well.

(3) Unfortunately, 
{{UserGroupInformation.getCurrentUser().getCredentials().getToken(WasbDelegationTokenIdentifier.TOKEN_KIND)}}
 does not work; it takes only an alias. Moved this logic to a util method.

(4) Fixed the nit by removing {{()}} for {{&&}} in {{if (isSecurityEnabled && 
(delegationToken != null && !delegationToken.isEmpty()))}}.

(5) Added {{package-info.java}} instead of {{package.html}}.

(6) Created Util methods to avoid duplicate code wherever possible.

(7), (8), (9) and (10): these comments relate to changes in HADOOP-13930 and 
are already addressed there.

(11) Handled it appropriately to the best of my knowledge; let me know if you 
think otherwise.


> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.1.patch, HADOOP-13945.2.patch, 
> HADOOP-13945.3.patch, HADOOP-13945.4.patch, HADOOP-13945.5.patch, 
> HADOOP-13945.6.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) 
> does not support Kerberos authentication or FileSystem authorization, which 
> makes it unusable in secure environments with a multi-user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is being used to generate the SAS keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13759) Split SFTP FileSystem into its own artifact

2017-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15903014#comment-15903014
 ] 

Steve Loughran commented on HADOOP-13759:
-

Certainly something to discuss, or it could be moved here. Discussion should be 
on hdfs-dev, though.

FWIW, SSH fencing isn't that reliable: it can only stop the remote service when 
the node is somehow considered to have failed yet is still reachable via SSH. 
STONITH fencing via power supplies is generally the state of the art, as per [RHAT 
docs|https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Configuration_Example_-_Fence_Devices/index.html].
 (There is also VM-infrastructure hosting with API calls; that's what we used 
for HA HDFS on VMware.)

> Split SFTP FileSystem into its own artifact
> ---
>
> Key: HADOOP-13759
> URL: https://issues.apache.org/jira/browse/HADOOP-13759
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Andrew Wang
>Assignee: Yuanbo Liu
>
> As discussed on HADOOP-13696, if we split the SFTP FileSystem into its own 
> artifact, we can save a jsch dependency in Hadoop Common.






[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to consistent S3 endpoints

2017-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902939#comment-15902939
 ] 

Steve Loughran commented on HADOOP-13786:
-

I was expecting a veto from Yetus.

> Add S3Guard committer for zero-rename commits to consistent S3 endpoints
> 
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.






[jira] [Commented] (HADOOP-14153) ADL module has messed doc structure

2017-03-09 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902781#comment-15902781
 ] 

Akira Ajisaka commented on HADOOP-14153:


Is there any specific reason to add labels such as {{}} manually? The labels are automatically generated.

> ADL module has messed doc structure
> ---
>
> Key: HADOOP-14153
> URL: https://issues.apache.org/jira/browse/HADOOP-14153
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>  Labels: documentaion
> Attachments: HADOOP-14153.000.patch
>
>
> RT






[jira] [Commented] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902740#comment-15902740
 ] 

Hadoop QA commented on HADOOP-14120:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-tools/hadoop-aws: The patch generated 0 new + 
8 unchanged - 1 fixed = 8 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14120 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856958/HADOOP-14120.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b7fe8b4cf33a 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 570827a |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11786/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11786/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>

[jira] [Commented] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902716#comment-15902716
 ] 

Hadoop QA commented on HADOOP-14111:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14111 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12856955/HADOOP-14111.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7001fb61ff85 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 570827a |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11787/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11787/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |





> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14111.001.patch
>
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@ignore}}. They aren't running, yet they still have maintenance cost and 
> appear in test runs as skipped.

[jira] [Commented] (HADOOP-14052) Fix dead link in KMS document

2017-03-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15902703#comment-15902703
 ] 

Hudson commented on HADOOP-14052:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11378 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11378/])
HADOOP-14052. Fix dead link in KMS document. Contributed by Christina (jzhuge: 
rev 570827a819c586b31e88621a9bb1d8118d3c7df3)
* (edit) hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm


> Fix dead link in KMS document
> -
>
> Key: HADOOP-14052
> URL: https://issues.apache.org/jira/browse/HADOOP-14052
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha2
>Reporter: Akira Ajisaka
>Assignee: Christina Vu
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14052.001.patch
>
>
> The link for Rollover Key section is broken.
> {noformat:title=./hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm}
> This is usually useful after a [Rollover](Rollover_Key) of an encryption key.
> {noformat}
> (Rollover_Key) should be (#Rollover_Key)






[jira] [Updated] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14111:

Status: Patch Available  (was: Open)

> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14111.001.patch
>
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@ignore}}. They aren't running, yet they still have maintenance cost and 
> appear in test runs as skipped. 
> Proposed: Cut them out entirely.






[jira] [Updated] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14120:

Attachment: HADOOP-14120.001.patch

upload v1 patch for this JIRA

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is superfluous 
> and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.
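The design choice behind the proposal is that optional put-request parameters are applied exactly once, by the filesystem, at request-creation time, so the output stream never needs to touch the setter. A minimal sketch of that shape, using hypothetical stand-in classes (not the actual S3A code):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for an S3 put request: just a bag of optional parameters.
class PutRequest {
    final Map<String, String> params = new HashMap<>();
}

class MockS3AFileSystem {
    // Private: only the FS applies optional parameters, at creation time.
    private void setOptionalPutRequestParameters(PutRequest req) {
        req.params.put("x-amz-server-side-encryption", "AES256");
    }

    PutRequest newPutRequest() {
        PutRequest req = new PutRequest();
        setOptionalPutRequestParameters(req);  // applied exactly once
        return req;
    }
}

class MockBlockOutputStream {
    private final MockS3AFileSystem fs;
    MockBlockOutputStream(MockS3AFileSystem fs) { this.fs = fs; }

    // putObject() just uses the fully-built request; there is no
    // second call to the (now private) setter here.
    PutRequest putObject() {
        return fs.newPutRequest();
    }
}

public class PutRequestDemo {
    public static void main(String[] args) {
        MockS3AFileSystem fs = new MockS3AFileSystem();
        PutRequest req = new MockBlockOutputStream(fs).putObject();
        System.out.println(req.params);
    }
}
```

Making the setter private turns the "applied once" invariant into something the compiler enforces, instead of a convention the output stream could accidentally violate.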






[jira] [Updated] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14120:

Status: Patch Available  (was: Open)

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is superfluous 
> and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.






[jira] [Assigned] (HADOOP-14120) needless S3AFileSystem.setOptionalPutRequestParameters in S3ABlockOutputStream putObject()

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14120:
---

Assignee: Yuanbo Liu

> needless S3AFileSystem.setOptionalPutRequestParameters in 
> S3ABlockOutputStream putObject()
> --
>
> Key: HADOOP-14120
> URL: https://issues.apache.org/jira/browse/HADOOP-14120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14120.001.patch
>
>
> There's a call to {{S3AFileSystem.setOptionalPutRequestParameters()}} in 
> {{S3ABlockOutputStream.putObject()}}.
> The put request has already been created by the FS; this call is superfluous 
> and potentially confusing.
> Proposed: cut it, and make the {{setOptionalPutRequestParameters()}} method 
> private.






[jira] [Updated] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14111:

Attachment: HADOOP-14111.001.patch

upload v1 patch

> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HADOOP-14111.001.patch
>
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@ignore}}. They aren't running, yet they still have maintenance cost and 
> appear in test runs as skipped. 
> Proposed: Cut them out entirely.






[jira] [Assigned] (HADOOP-14111) cut some obsolete, ignored s3 tests in TestS3Credentials

2017-03-09 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HADOOP-14111:
---

Assignee: Yuanbo Liu

> cut some obsolete, ignored s3 tests in TestS3Credentials
> 
>
> Key: HADOOP-14111
> URL: https://issues.apache.org/jira/browse/HADOOP-14111
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
>
> There are a couple of tests in {{TestS3Credentials}} which are tagged 
> {{@ignore}}. They aren't running, yet they still have maintenance cost and 
> appear in test runs as skipped. 
> Proposed: Cut them out entirely.


